Physical Therapy Software

MoveMate

Track Better, Heal Faster

MoveMate is a lightweight telehealth and exercise-tracking app that uses smartphone computer vision to automatically count reps and flag form errors. Designed for solo and small-clinic physical therapists, it increases home-exercise adherence, accelerates recovery, and reduces clinician time per patient through concise dashboards, automatic rep totals, and timely patient nudges.



Product Details

Explore this AI-generated product idea in detail. Each aspect has been thoughtfully created to inspire your next venture.

Vision & Mission

Vision
Empower every therapist to deliver clinic-grade, personalized home rehabilitation that boosts patient adherence, accelerates recovery, and restores mobility.
Long Term Goal
Within 5 years, empower 25,000 active therapists across 5,000 clinics to boost patient home-exercise adherence by 30%, accelerating recoveries and reducing clinician time per patient.
Impact
For solo and small-clinic physical therapists, MoveMate increases patient adherence to home exercise programs by 35%, reduces missed follow-ups by 40%, and cuts clinician time per patient by 20% within 12 weeks, enabling therapists to measure reps, correct form, and accelerate recoveries.

Problem & Solution

Problem Statement
Solo and small-clinic physical therapists struggle to ensure patients perform prescribed home exercises correctly and consistently because existing telehealth tools are clunky, generic, and lack reliable automated rep-counting, simple clinician workflows, and actionable adherence data.
Solution Overview
MoveMate pairs smartphone computer-vision rep-counting with a concise clinician dashboard so therapists see automatic rep totals, flagged form errors, and clear adherence summaries, enabling quick remote coaching and reliable measurement of home exercise performance.

Details & Audience

Description
MoveMate is a lightweight telehealth and exercise-tracking SaaS that lets therapists prescribe, monitor, and coach home exercise programs. It targets solo therapists and small clinics seeking efficient remote care. It boosts patient adherence, accelerates recovery, and cuts clinician time per patient by giving clear progress summaries and nudges. Distinctive smartphone computer-vision rep-counting provides automatic rep counts and simple form feedback.
Target Audience
Solo and small-clinic physical therapists (ages 30-60) who aim to boost remote adherence and prioritize efficient workflows
Inspiration
During a late afternoon televisit, a therapist watched a middle-aged patient flail through the same shoulder raise, grimacing after each rep while the clinician had no way to count, flag form errors, or nudge progress between appointments. That helplessness—seeing recovery stall off-camera—inspired MoveMate: a simple phone-camera rep counter and crisp clinician summaries so therapists can catch errors, measure reps, and celebrate small wins remotely.

User Personas

Detailed profiles of the target users who would benefit most from this product.


Caregiver Coach Carla

- 38, sandwich-generation caregiver for aging parent
- Works part-time retail; flexible daytime hours
- Suburban household; shares duties with siblings
- Comfortable with smartphones; prefers visual-first apps

Background

Managed her dad’s cardiac rehab checklists after a scare, learning to translate medical instructions into doable steps. Now supports his knee-rehab protocol, driven to avoid rehospitalization and preserve independence.

Needs & Pain Points

Needs

1. Caregiver view with progress, flags, clear next steps
2. Short coaching tips tied to form errors
3. Shareable updates for family and therapist

Pain Points

1. Confusing instructions across multiple apps
2. Missed alerts leading to skipped sessions
3. Uncertainty if exercise was done correctly

Psychographics

- Pragmatic problem-solver, celebrates small daily wins
- Values clinician clarity over medical jargon
- Motivated by keeping loved one independent
- Prefers checklists, visuals, and timely nudges

Channels

1. WhatsApp family thread
2. YouTube rehab tutorials
3. Facebook caregiver groups
4. Email appointment reminders
5. SMS urgent alerts


Roving PT Riley

- 31, mobile PT serving rural counties
- Drives 150+ miles weekly; spotty connectivity
- DPT degree; independent contractor
- Revenue tied to efficiency and documentation

Background

Started in outpatient, shifted to home visits after seeing access gaps. Built a trunk clinic—therabands, tripod, battery packs. Time lost to charting and travel pushed Riley toward automation.

Needs & Pain Points

Needs

1. Reliable offline recording with automatic sync
2. One-tap exercise assignment templates
3. Batch notes from rep counts and flags

Pain Points

1. Dead zones breaking live telehealth
2. Nighttime charting after long drives
3. Manually counting noncompliant reps

Psychographics

- Efficiency-obsessed, hates duplicate documentation
- Champions access for overlooked communities
- Embraces tools that work offline
- Values clean, glanceable clinical data

Channels

1. LinkedIn rural health groups
2. OT/PT Facebook communities
3. YouTube fieldwork workflows
4. APTA newsletters
5. SMS coverage alerts


Performance Patient Alex

- 27, club-level soccer forward
- Urban apartment; small home gym corner
- Tech-savvy; smartphone and smartwatch owner
- Balances training with full-time analyst job

Background

Tore an ACL last season; swore off guesswork after a stalled first rehab. Now works with a sports PT, using data to justify each progression and protect the comeback.

Needs & Pain Points

Needs

1. Real-time form cues with specificity
2. Progression thresholds tied to metrics
3. Exportable data for coach and PT

Pain Points

1. Ambiguous cues that waste training time
2. Plateau blindness between sessions
3. Fear of re-injury with early progressions

Psychographics

- Data-driven, thrives on measurable progress
- Competitive mindset, embraces structured routines
- Prevention-focused after hard-earned lessons
- Motivated by coach and PT feedback

Channels

1. Strava training logs
2. Instagram athlete content
3. YouTube rehab progressions
4. Discord team chat
5. Email weekly summaries


Claims Case Manager Maya

- 42, RN case manager at regional insurer
- Manages 70+ musculoskeletal cases concurrently
- Office-based; secure laptop and phone
- Measured by cost and return-to-work timelines

Background

Moved from bedside nursing to utilization management after burnout. Learned data storytelling to align adjusters, providers, and employers; frustrated by vague progress notes lacking objective home-exercise metrics.

Needs & Pain Points

Needs

1. Read-only dashboards with adherence trends
2. Standardized export for authorizations
3. Early-warning flags for stagnation

Pain Points

1. Inconsistent clinic notes across providers
2. Delayed awareness of nonadherence
3. Disputes over visit necessity

Psychographics

- Outcome-obsessed, skeptical of anecdote
- Values defensible, audit-ready documentation
- Proactive when risk signals spike
- Prefers concise dashboards over narratives

Channels

1. LinkedIn payer networks
2. Email secure summaries
3. Web portal dashboards
4. Zoom utilization reviews
5. Industry newsletters


Campus PT Educator Ethan

- 39, assistant professor in DPT program
- Teaches kinesiology and therapeutic exercise labs
- Manages 80 students per semester
- Uses LMS and iPadOS devices

Background

Frustrated by subjective peer assessments in lab, Ethan piloted video-based feedback tools. Now seeks scalable, privacy-conscious tech to standardize form coaching and increase practice reps between classes.

Needs & Pain Points

Needs

1. Classroom mode with anonymized leaderboards
2. LMS integration for assignments
3. Form-flag rubrics aligned to coursework

Pain Points

1. Manual grading bottlenecks during busy weeks
2. Inconsistent peer feedback quality across sections
3. Student practice drop-off between labs

Psychographics

- Pedagogy-first, evidence-informed adopter
- Values scalability without losing nuance
- Privacy-conscious with student data
- Enjoys gamified learning done responsibly

Channels

1. Faculty Slack workspace
2. LinkedIn academic circles
3. Educause forums
4. YouTube teaching demos
5. Email syllabus resources


Multilingual Mover Nadia

- 34, recent immigrant; intermediate English proficiency
- Works hospitality evening shifts
- Shared Android smartphone with spouse
- Extended-family household; limited quiet space

Background

Missed early appointments from misunderstood instructions. Succeeded when a clinic used visual guides and her language; now wants the same clarity at home amid a busy household.

Needs & Pain Points

Needs

1. On-device translations and voice prompts
2. Large icons and gesture-led steps
3. Flexible reminders around shift work

Pain Points

1. Text-heavy instructions she can’t parse
2. Missed sessions after late shifts
3. Small-screen clutter and confusion

Psychographics

- Determined, appreciates clear visuals
- Anxious about miscommunication and assumptions
- Trusts clinician-approved, simple tools
- Motivated by progress streaks and praise

Channels

1. WhatsApp voice notes
2. YouTube bilingual tutorials
3. SMS time-boxed reminders
4. Instagram community pages
5. Clinic patient portal

Product Features

Key capabilities that make this product valuable to its target users.

Smart Resume Link

Deferred deep link that remembers the patient’s SnapCode. If the app isn’t installed, it routes to the store and then auto-resumes directly into the assigned program post-install—no retyping or searching—cutting drop-offs during setup.

Requirements

Signed Smart Link Generation
"As a clinician, I want to create a single secure link for each patient so that they can install the app and land directly in their assigned program without extra steps."
Description

Provide clinicians with the ability to generate per-patient smart links that encapsulate the patient’s SnapCode as an opaque, signed token with configurable expiration and optional single-use constraints. The system should expose a secure backend API and a clinician-portal UI to create, preview, and copy links, embedding routing metadata such as target program, locale, platform hints, and campaign tags. Links must be tamper-evident via server-side signing, recorded with audit trails, and compatible with iOS and Android deep link formats to ensure seamless downstream processing.
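The token described above can be sketched with Python's standard library. This is a minimal illustration under stated assumptions, not the product's implementation: the `create_smart_link`/`verify_token` names, the URL shape, and the HMAC-SHA256 choice are invented here, and the rotating-key and single-use requirements from the description are omitted for brevity.

```python
import base64
import hashlib
import hmac
import json
import os
import time

# Assumption: a single server-side signing key; the requirement calls for rotating keys.
SECRET = os.environ.get("LINK_SIGNING_KEY", "dev-only-secret").encode()


def create_smart_link(snap_code, program_id, ttl_seconds=14 * 24 * 3600,
                      base_url="https://links.example.com/r/"):
    """Wrap the SnapCode and routing metadata in an opaque, signed, expiring token."""
    payload = {
        "sc": snap_code,                      # never appears in plaintext in the URL
        "pid": program_id,
        "exp": int(time.time()) + ttl_seconds,
        "nonce": base64.urlsafe_b64encode(os.urandom(8)).decode().rstrip("="),
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=")
    sig = hmac.new(SECRET, body, hashlib.sha256).digest()
    token = body + b"." + base64.urlsafe_b64encode(sig).rstrip(b"=")
    return base_url + token.decode()


def verify_token(token):
    """Return the payload if the signature is valid and unexpired, else None."""
    try:
        body, sig = token.encode().split(b".", 1)
        expected = hmac.new(SECRET, body, hashlib.sha256).digest()
        got = base64.urlsafe_b64decode(sig + b"=" * (-len(sig) % 4))
        if not hmac.compare_digest(expected, got):
            return None  # tamper-evident: any alteration breaks the signature
        payload = json.loads(base64.urlsafe_b64decode(body + b"=" * (-len(body) % 4)))
        if payload["exp"] < time.time():
            return None  # expired
        return payload
    except Exception:
        return None
```

Because the payload rides inside the signed token, altering any part of the URL (the tamper case in the criteria below) fails verification without revealing patient identifiers.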

Acceptance Criteria
Portal: Generate expiring single-use smart link
Given a clinician with LinkCreate permission is signed into the portal And a valid patient SnapCode exists When the clinician opens Generate Smart Link, selects target program, locale, platform hints, and campaign tag, sets expiration, and enables single-use And clicks Generate Then a smart link is created in ≤2 seconds and displayed with a Preview and Copy action And the URL contains a single opaque token parameter; no plaintext SnapCode or PII appears in the URL And the Preview shows target program name, expiration timestamp (local time), and single-use flag And clicking Copy places the exact URL onto the clipboard
API: Create signed smart link with metadata
Given a service client authenticated with a valid clinician session token When it POSTs to /v1/smart-links with patientSnapCode, targetProgramId, expiresAt, singleUse, locale, platformHints, and campaignTag Then the API responds 201 with linkUrl, linkId, expiresAt, singleUse, and an echo of submitted metadata And the token is signed server-side using the configured algorithm and rotating keys And the token encapsulates the SnapCode and metadata but is not decodable to plaintext without server keys And input validation errors return 400 with field-level messages; unauthorized requests return 401/403 And only users with LinkCreate permission may call the endpoint; responses are RBAC-enforced
Security: Tamper-evident link validation
Given a valid smart link When any part of the token or query string is altered, truncated, or replayed with modified metadata Then the resolver rejects the request with 400/403 without revealing patient identifiers And the event is logged with reason "signature_invalid" and no changes to usage counters And the response page shows a generic invalid link message with a CTA to request a new link
Policy: Expiration and single-use enforcement
Given a smart link with expiresAt in the future and single-use enabled When the link is opened the first time before expiration Then the app resumes into the assigned program and the link is marked as consumed with consumedAt timestamp When the link is opened again or after expiration Then access is denied with an expired/used message and HTTP 410, and no app resume payload is delivered And clock skew of ±5 minutes does not allow bypassing expiration And clinicians can regenerate a replacement link; the previous link remains invalid
Routing: iOS/Android deep link and store fallback
Given a patient on iOS without the app installed When they tap the smart link Then they are routed to the App Store and, on first launch post-install, the app auto-resumes into the assigned program using the embedded SnapCode and metadata without re-entry Given a patient on Android without the app installed When they tap the smart link Then they are routed to Google Play and, on first launch, the app auto-resumes into the assigned program using the embedded SnapCode and metadata Given the app is already installed on either platform When the link is tapped Then the app opens directly via Universal Links/App Links and receives the payload with SnapCode, targetProgramId, locale, platformHints, and campaignTag
Audit: Creation and resolution trails
Given a clinician generates a smart link Then an audit record is written with linkId, actorId, patientRef (hashed), metadata snapshot, createdAt, and expiresAt When the link is resolved, expired, rejected for tampering, or consumed Then an audit record is written with eventType, linkId, userAgent, IP hash, outcome, and timestamp And clinicians can view audit summaries (created, last accessed, status: Active/Consumed/Expired/Invalid) in the portal list
Cross-Platform Deferred Deep Linking & Store Routing
"As a patient, I want the link from my clinician to take me to the app store if needed and then open directly to my program after installation so that I don’t have to search or re-enter anything."
Description

Implement universal/app links and deferred deep linking for iOS and Android to ensure users without the app are routed to the correct app store and, post-install, are returned to the intended in-app destination. The solution must detect installation state, choose the appropriate path, and preserve the smart link payload across the install using platform-supported mechanisms or a trusted provider. Include a lightweight web fallback page for unsupported environments and ensure compliance with platform policies while minimizing taps and latency.
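The routing decision can be illustrated as a small server-side resolver. The store URLs, the `resolve_route` name, and the user-agent heuristics are placeholders; a production build would rely on Universal Links/App Links plus a platform-supported deferred-deep-link mechanism rather than user-agent sniffing alone.

```python
# Placeholder store listings; real IDs would come from configuration.
APP_STORE_URL = "https://apps.apple.com/app/id0000000000"
PLAY_STORE_URL = "https://play.google.com/store/apps/details?id=com.example.movemate"
FALLBACK_URL = "https://links.example.com/fallback"


def resolve_route(user_agent, token):
    """Choose store routing or web fallback for a smart-link hit.

    This handler only runs when the OS did not open the app directly; when the
    app is installed, universal/app links bypass the browser entirely.
    """
    ua = user_agent.lower()
    if "iphone" in ua or "ipad" in ua:
        # App not installed on iOS: send to the App Store; the token is carried
        # forward so the app can resume post-install.
        return f"{APP_STORE_URL}?referrer={token}"
    if "android" in ua:
        # Android: the Play Install Referrer carries the payload across install.
        return f"{PLAY_STORE_URL}&referrer={token}"
    # Desktop or restricted webview: lightweight fallback page keeps the payload.
    return f"{FALLBACK_URL}?token={token}"
```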

Acceptance Criteria
Deferred Deep Link Resume With SnapCode Post-Install
Given a patient taps a Smart Resume Link on a device without MoveMate installed When the app is installed and first opened from the store Then the app automatically navigates to the assigned program screen associated with the SnapCode And no retyping or in-app search is required to reach the program And the number of taps after install to reach the destination is 0 or 1 (Open only) And the displayed program identifier matches the SnapCode payload
Store Routing When App Is Not Installed
Given a Smart Resume Link is opened on iOS or Android and MoveMate is not installed When the OS resolves the link Then the user is taken to the correct App Store or Google Play listing for MoveMate for their region And no intermediate browser page is shown before the store page unless the environment blocks universal/app links And the store referrer or equivalent carries the payload necessary for deferred deep linking
Universal/App Link Association and Direct Open When Installed
Given MoveMate is installed and the device supports universal/app links When a Smart Resume Link is tapped Then the OS opens MoveMate directly (not a browser) And the apple-app-site-association and assetlinks.json files are reachable and validate the app identifiers And the app navigates to the in-app destination encoded in the link
Payload Preservation, Validation, and Destination Mapping
Given a Smart Resume Link contains a SnapCode and a signed timestamped payload When the app is opened via deferred deep link after install Then the app receives the SnapCode payload and verifies its integrity (valid signature and unexpired TTL) And the payload maps to an existing patient program and navigates to that program And if the payload is invalid, expired, or unmapped, the app shows a friendly error and safely routes to the home screen without crashing
Web Fallback Page For Unsupported Environments
Given the Smart Resume Link is opened in an environment that cannot open universal/app links (e.g., desktop, unsupported browser, restricted in-app webview) When the link resolves Then a lightweight fallback page loads under 1.5s at p95 and is under 150KB total transfer And the page presents a single clear CTA per platform to install/open MoveMate And the SnapCode payload is preserved via URL and secure storage and appended to the subsequent store/app open so the destination is restored post-install
Platform Policy and Privacy Compliance
Given platform policies for links and attribution When implementing routing and deferred deep linking Then iOS uses Associated Domains entitlements and universal links without clipboard or device fingerprinting And Android uses App Links with Digital Asset Links and Play Install Referrer API without device fingerprinting And no prohibited redirects or dark patterns are used, meeting App Store and Google Play review requirements
Performance, Reliability, and Tap Minimization
Given a valid Smart Resume Link When the app opens via direct or deferred deep linking Then 95%+ of valid sessions reach the intended destination on first app open And median time from app launch to destination is <= 2s and p95 <= 3s on representative devices and networks And the total taps from link to destination are 1 when installed and <= 2 after install (Open plus any required system confirmation)
Secure Token Exchange & Patient Binding
"As a security-conscious product owner, I want the smart link to exchange a signed token server-side so that patient identity and assignments are validated without exposing sensitive data."
Description

On first launch after installation or when opening via a smart link, the app must securely exchange the signed token for a short-lived server session that validates the SnapCode, confirms program assignment, and binds the device to the patient record. Implement anti-replay protections, token expiration checks, and rate limiting, and ensure that no protected health information is exposed in client-visible URLs. The backend should return only the minimum data needed to resume, with subsequent authenticated fetches for full program details.
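A toy version of the exchange endpoint's control flow, assuming in-memory stores (a production service would use a shared store such as Redis with TTLs) and the per-device limit quoted in the acceptance criteria; the `exchange` function and its return shape are illustrative, not the actual API.

```python
import time
from collections import defaultdict

consumed_tokens = set()       # anti-replay record; shared store in production
attempts = defaultdict(list)  # device_id -> attempt timestamps, for rate limiting


def exchange(token_id, device_id, validate):
    """Exchange a signed token for a short-lived session; reject replays and bursts.

    `validate` stands in for server-side signature and expiry checking and
    returns the token payload or None.
    """
    now = time.time()
    window = [t for t in attempts[device_id] if now - t < 3600]
    attempts[device_id] = window + [now]
    if len(window) >= 5:                      # per-device limit: 5 per hour
        return {"status": 429}
    if token_id in consumed_tokens:           # anti-replay
        return {"status": 409, "error": "replay_detected"}
    payload = validate(token_id)
    if payload is None:
        return {"status": 401, "error": "token_invalid"}
    consumed_tokens.add(token_id)
    return {                                  # minimum data needed to resume;
        "status": 201,                        # full program details require an
        "session_expiry": int(now) + 15 * 60, # authenticated follow-up fetch
        "program_id": payload["pid"],
        "resume_route": f"/programs/{payload['pid']}",
    }
```

Note that the response carries only identifiers and a route, mirroring the requirement that no protected health information is returned at exchange time.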

Acceptance Criteria
Smart Link Resume on First Launch or Existing Install
Given a user opens MoveMate via a valid Smart Resume Link (post-install or already installed) When the app launches and posts the signed token, device_id, and app_version to /v1/token/exchange over TLS 1.2+ Then the server validates the token signature and SnapCode, confirms program assignment, binds the device to the patient record, and returns 201 with a short-lived session (TTL <= 15 minutes), patient_binding_id, program_id, and resume_route And the app navigates directly to the assigned program context without requiring SnapCode input And no additional protected data is returned in this response
Token Expiration and Clock Skew Handling
Given a Smart Resume token that is expired (age > 10 minutes) or has an invalid signature When the client attempts /v1/token/exchange Then the server returns 401 with an error code of token_expired or token_invalid, creates no session, and performs no device binding And tokens within a clock skew tolerance of ±2 minutes around expiration are still accepted And the app displays a "Link expired" state with a CTA to request a new link; it does not auto-resume
Anti-Replay and Idempotent Retry
Given a signed token that has already been successfully exchanged When the same token is submitted again from any device or IP Then the server rejects the request with 409 replay_detected (or 401), creates no new session, and does not modify bindings And if the previous client request timed out, a retry within 60 seconds with the same Idempotency-Key and device_id returns the original success response without duplicating bindings or sessions
Rate Limiting on Exchange Endpoint
Given multiple exchange attempts against /v1/token/exchange When attempts exceed per-token limit of 3, per-IP limit of 10 per minute, or per-device limit of 5 per hour Then the server responds 429 Too Many Requests with a Retry-After header and does not validate or bind the token And security metrics/logs record the throttled event with hashed token_id, device_id, IP, and timestamp
PHI-Free URLs, Storage, and Logs
Given any smart link, app store redirect, or in-app deep link Then no PHI (e.g., name, DOB, diagnosis, clinician identity, program title, notes) or SnapCode appears in client-visible URLs, referer headers, or clipboard And the token is opaque and non-decipherable client-side; SnapCode is never rendered during auto-resume And client/server logs and analytics redact tokens and identifiers; network inspection confirms no PHI or SnapCode in query strings or paths
Minimum Data Return and Deferred Detail Fetch
Given a successful token exchange Then the response payload contains only session_token, session_expiry, patient_binding_id, program_id, and resume_route, and excludes exercise names, clinician names, goals, measurements, or notes And the app fetches full program details only after establishing the session via an authenticated request; attempting to fetch without a session returns 401 And packet capture of the exchange response confirms only the minimum fields are returned
Device Binding and Unbinding Rules
Given a first successful token exchange from a device Then the server binds that device_id to exactly one patient record and records timestamp and app_version And subsequent exchanges for the same patient on the same device refresh the session without creating duplicate bindings And attempts to bind a different patient to the same device are rejected with 409 device_already_bound unless the prior binding is explicitly revoked via logout/unbind, which invalidates active sessions
First-Launch Auto-Resume to Assigned Program
"As a patient, I want the app to open straight into my prescribed exercises so that I can start immediately without going through setup screens."
Description

Upon successful token validation, the app should bypass generic onboarding and navigate the patient directly to their assigned program overview, prefetching necessary assets and displaying a brief confirmation banner that the program was loaded from their clinician’s link. If an account step is required by policy, prefill data from the token and defer non-critical setup until after the program is visible. Ensure idempotent navigation so repeated link opens do not duplicate flows.
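The idempotent-navigation requirement can be sketched as a navigator that consumes each resume token once and replaces the stack outright; the class, route strings, and token handling here are invented for illustration only.

```python
class Navigator:
    """Illustrative navigation model for Smart Resume Links."""

    def __init__(self):
        self.stack = ["onboarding"]   # default flow for cold launches
        self.consumed = set()

    def resume(self, token_id, program_id):
        """Navigate straight to the assigned program; repeats are no-ops."""
        if token_id in self.consumed:
            return self.stack          # repeated link open: no duplicate flow
        self.consumed.add(token_id)
        # Replace the stack entirely so Back never reveals onboarding screens.
        self.stack = [f"program:{program_id}"]
        return self.stack
```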

Acceptance Criteria
Post-Install Auto-Resume via Deferred Deep Link (App Not Installed)
Given a patient taps a Smart Resume Link with a valid token and MoveMate is not installed When the user is routed to the app store, installs, and first-launches the app Then the app reads the deferred deep link token and navigates directly to the assigned program overview, bypassing generic onboarding And the program overview renders within 5 seconds of splash screen dismissal And a confirmation banner stating "Program loaded from your clinician’s link" is shown for 2–4 seconds And no SnapCode entry or search is required at any point before the overview is visible And the token is consumed after use so subsequent app launches do not re-trigger the resume flow
Direct Resume with App Installed (Bypass Onboarding)
Given MoveMate is installed and the patient taps a Smart Resume Link with a valid token When the app is brought to foreground via the link and the token is validated Then the patient is taken directly to the assigned program overview within 2 seconds, bypassing generic onboarding And the initial exercise list and thumbnails are present on first paint And a confirmation banner is displayed for 2–4 seconds And the previous navigation stack is replaced so that Back does not reveal onboarding screens
Minimal Account Step with Token Prefill and Deferred Setup
Given clinic policy requires an account step and the Smart Resume token includes patient identifiers (e.g., name, email) When the app resumes into the assigned program via the Smart Resume Link Then the program overview becomes visible before any non-critical setup prompts And the account step UI is prefilled with name and email from the token And starting the first exercise is allowed unless the policy marks the step as critical; if critical, a consent/verification modal appears after the overview renders and blocks exercise start until completion And all non-critical setup is deferred and surfaced via a non-blocking prompt after the program is visible
Idempotent Navigation on Repeated Link Opens
Given the patient opens the same Smart Resume Link multiple times while the app is installed (foreground, background, or terminated) When the link is triggered again within the same session Then only one instance of the assigned program overview exists in the navigation stack And no duplicate onboarding, confirmation banners, or dialogs are presented And duplicate prefetch/download tasks for the same assets are not enqueued And Back navigation behavior remains unchanged compared to a single open
Prefetch of Program Assets and Confirmation Banner Display
Given the assigned program contains up to 30 exercises with associated media and thumbnails When the program overview is first shown via Smart Resume Then exercise metadata and thumbnails are prefetched in the background within 15 seconds And the first exercise media is available to play within 3 seconds of tap And if prefetch fails, the app retries up to 2 times and falls back to on-demand fetch without blocking the overview And the confirmation banner is non-blocking and dismissible while remaining visible for 2–4 seconds
Graceful Handling of Invalid or Expired Tokens
Given the Smart Resume Link token is invalid, expired, or does not match any patient assignment When the patient opens the link Then the app routes to a safe fallback within 2 seconds without crashing And the user sees a clear message explaining the issue and options to "Try again", "Enter SnapCode", or "Contact clinic" And no partial or incorrect program state is cached And generic onboarding is only shown if the user explicitly chooses it from the fallback
Expiration, Revocation, and Fallback UX
"As a clinician, I want to expire or revoke smart links and have patients see helpful guidance if a link no longer works so that access remains secure and confusion is minimized."
Description

Support link expiration, clinician-initiated revocation, and conflict handling for already-used or mismatched tokens, with clear, localized in-app and web messages. Provide safe fallbacks to general onboarding or a request-new-link flow without exposing sensitive context. Expose controls in the clinician portal to set expiry windows, revoke links, and view status, and log all failures for operational visibility.
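The status-to-response mapping implied by the acceptance criteria below (403 for revoked, 410 for expired or already-used links) might look like the following sketch; the record field names are assumptions about a stored link record.

```python
import time


def link_status(record, now=None):
    """Map a stored smart-link record to the HTTP status the resolver returns."""
    now = now or time.time()
    if record.get("revoked"):
        return 403, "revoked"            # clinician-initiated revocation
    if record["expires_at"] < now:
        return 410, "expired"            # Gone; serve with cache-control: no-store
    if record.get("single_use") and record.get("consumed_at"):
        return 410, "used"
    return 200, "active"
```

Keeping this mapping in one place makes the web and in-app resolvers consistent, which the localized fallback messages depend on.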

Acceptance Criteria
Expired Smart Resume Link UX
Given a Smart Resume Link token whose expiry timestamp is earlier than current server time When the patient opens the link via web or in-app Then the server returns an "expired" state before resolving any SnapCode or program context And the client shows a localized "Link expired" message with no program/clinician identifiers And the client presents two CTAs: "Request new link" and "Continue to onboarding" And selecting "Request new link" starts a verified contact flow (email/SMS) and is rate-limited to 3 requests/hour/device And selecting "Continue to onboarding" routes to general onboarding with no prefilled data And analytics event smart_resume_link.failed is recorded with reason=expired, link_id, hashed_user_id, channel (web/app), locale, and timestamp; no PHI in payloads/URLs And web deep links respond with HTTP 410 Gone and cache-control: no-store
Revoked Smart Resume Link UX
Given a Smart Resume Link token marked as revoked by a clinician in the portal When the patient opens the link via web or in-app Then the server returns a "revoked" state and prevents association with any account or program And the client shows a localized "Link no longer valid" message with CTAs: "Request new link" and "Continue to onboarding" And analytics event smart_resume_link.failed is recorded with reason=revoked, link_id, hashed_user_id, channel, locale, timestamp; no PHI in payloads/URLs And web deep links respond with HTTP 403 Forbidden and cache-control: no-store And subsequent attempts with the same token are consistently handled as revoked
Already-Used or Mismatched Token Conflict Handling
Given a single-use Smart Resume Link token already redeemed OR a token whose SnapCode does not match the currently signed-in user When the link is opened or auto-resolves post-install Then the server returns a state of "used" or "mismatch" accordingly And the client shows a localized message: for "used", "This link has already been used" with a "Request new link" CTA; for "mismatch", "This link isn’t for this account" with CTAs "Switch account", "Request new link", and "Continue to onboarding" And auto-association is blocked until the correct account is signed in or a new link is provided And the app does not display any program, clinician, or patient identifiers And analytics events smart_resume_link.failed are recorded with reason=used or reason=mismatch, link_id, hashed_user_id (if available), channel, locale, timestamp; no PHI And repeated invalid attempts are rate-limited (e.g., 5 per 10 minutes per device)
Post-Install Deferred Deep Link Validation and Fallback
Given a patient taps a Smart Resume Link without the app installed When the app is installed and launched the first time Then the app retrieves the deferred token and validates it server-side before loading any program screen And if valid/unexpired/unused and matched, the app auto-resumes into the assigned program within 2 seconds on Wi‑Fi and 5 seconds on cellular (p50) And if invalid (expired/revoked/used/mismatch) the app routes to the corresponding error flow defined in other criteria And if network-unavailable or timeout (>10s), the app shows a localized "Can’t verify link" message with CTAs: "Try again" and "Continue to onboarding"; auto-retries up to 2 times And the token is not persisted on-device beyond 24 hours or after successful redemption, whichever comes first And all outcomes log event smart_resume_link.resolve with outcome, latency_ms, link_id, channel=deferred, locale; no PHI
Clinician Portal: Expiry, Revocation, and Status Controls
Given a clinician user with permission "Manage Smart Resume Links" When creating links, the clinician can set an expiry window between 1 hour and 30 days (default 14 days) with inline validation And the portal displays per-link status: Active, Expired, Revoked, Used, with created_at, expires_at, last_accessed_at, and channel of last access And the clinician can revoke any Active link individually or in bulk; revocation propagates to APIs within 60 seconds (p95) And all create/update/revoke actions are written to an immutable audit log with actor_id, action, link_id, timestamp, prior_value, new_value; exportable to CSV And link detail views do not display PHI (no patient name/diagnosis); patient references use alias or hashed MRN only And access is role-restricted and logged; unauthorized users see controls disabled and receive HTTP 403 on APIs
Localization and Accessibility for Error and Fallback Messages
Given supported locales en-US, es-ES, and fr-FR (initial set) When any expired/revoked/used/mismatch/offline message or CTA is shown on web or in-app Then translations exist and are selected by device/browser locale; if missing, fall back to en-US And copy is consistent between web and app via shared message keys And messages and CTAs contain no clinician, program, or patient identifiers And UI meets WCAG 2.1 AA for contrast, focus, and screen reader labels; scalable text up to 200% without loss of content And localization QA includes screenshots for each state per locale and platform prior to release
Analytics and Funnel Tracking
"As a product manager, I want visibility into each step of the smart link funnel so that we can identify drop-offs and improve conversion."
Description

Instrument the end-to-end funnel from link creation to click, store visit, install, first open, token receipt, and program load, including error points and time-to-complete metrics. Attribute events to clinician, campaign, and platform while respecting privacy and consent. Surface a dashboard and exportable reports to monitor drop-offs and guide optimization, and ensure event de-duplication across devices and sessions.

Acceptance Criteria
End-to-End Funnel Event Capture
Given a Smart Resume Link is generated with clinician_id, campaign_id, platform and a unique funnel_id When a patient progresses through link_clicked, store_visit, app_installed, first_open, token_received, and program_loaded Then an event for each step is recorded with ISO 8601 timestamp, funnel_id, step_index, and (if consented) device_id in chronological order
Given intermittent connectivity When events cannot be transmitted immediately Then they are queued locally and sent within 60 seconds of network restoration and persist across app restarts
Given a funnel reaches program_loaded When sequence integrity is validated Then ≥99% of completed funnels contain all prior step events; any missing step results in a gap_detected event with the missing step name
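The offline queueing behavior above can be sketched as a local queue that flushes oldest-first when connectivity returns and keeps failed events for retry. This is a minimal sketch under stated assumptions: the class and field names are hypothetical, and real clients would persist the queue to disk so it survives app restarts.

```python
import time

class FunnelEventQueue:
    def __init__(self):
        self.pending = []

    def record(self, funnel_id, step_index, step, device_id=None):
        """Append one funnel event; device_id is included only when consented."""
        self.pending.append({
            "funnel_id": funnel_id,
            "step_index": step_index,
            "step": step,
            "device_id": device_id,
            "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),  # ISO 8601 UTC
        })

    def flush(self, send):
        """Send queued events oldest-first; keep any that fail for a later retry."""
        remaining = []
        for event in self.pending:
            try:
                send(event)
            except ConnectionError:
                remaining.append(event)
        self.pending = remaining
```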
Attribution Integrity Across Install and Resume
Given a deferred deep link containing clinician_id, campaign_id, platform, and a nonce When the app is installed and first opened via the store and the program loads via auto-resume Then all funnel events retain clinician_id, campaign_id, platform, and the originating nonce for attribution
Given the same Smart Resume Link is tapped on multiple devices When token_received occurs on one device Then attribution is assigned to the device whose nonce matches token_received and other device sequences are marked abandoned without counting as conversions
Given clinician or campaign parameters are missing on the link When events are recorded Then clinician_id defaults to the link owner and campaign_id defaults to "default" without breaking the funnel
Privacy, Consent, and PII Controls
Given first_open occurs before user consent When analytics initialization runs Then only first_open_unconsented with funnel_id and platform is recorded; no device_id, IP address, SnapCode, or other PII is logged
Given the user grants analytics consent When subsequent events are recorded Then a pseudonymous user_id is used and no names, emails, or SnapCode values are persisted in analytics payloads or logs
Given the user revokes consent When the opt-out is saved Then analytics collection stops within 60 seconds and the user is excluded from future exports while historical events remain de-identified
Given the data retention policy When raw analytics events exceed 13 months of age Then they are automatically purged and the deletion is logged with job_id and count
Event De-duplication Across Devices and Sessions
Given network retries or app restarts When multiple events with the same event_id and funnel_id are received within a 24-hour window Then only one event instance is stored and counted; duplicates are idempotently discarded
Given an app reinstall on the same device When first_open occurs Then a new funnel_id is generated and prior funnels remain closed so conversions are not double-counted
Given program_loaded fires more than once for the same funnel_id When counting conversions Then only the earliest program_loaded is counted; later ones are labeled duplicate_program_loaded
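The idempotent-discard rule above keys de-duplication on the (event_id, funnel_id) pair within a 24-hour window. A minimal in-memory sketch (class and field names are hypothetical; a real pipeline would use a shared idempotency store):

```python
DEDUP_WINDOW_S = 24 * 3600  # 24-hour de-duplication window from the criterion

class EventStore:
    def __init__(self):
        self.seen = {}    # (event_id, funnel_id) -> first-seen epoch seconds
        self.events = []

    def ingest(self, event_id, funnel_id, payload, now):
        """Store the event unless the same key was seen within the window."""
        key = (event_id, funnel_id)
        first = self.seen.get(key)
        if first is not None and now - first < DEDUP_WINDOW_S:
            return False  # duplicate: discarded idempotently, not counted
        self.seen[key] = now
        self.events.append(payload)
        return True
```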
Time-to-Complete Metric Computation
Given a funnel with link_clicked, store_visit, app_installed, first_open, token_received, and program_loaded When metrics are computed Then step-to-step durations and total time_to_program_load are calculated in seconds and stored per funnel_id
Given streaming ingestion When program_loaded is received Then all duration metrics for that funnel are available on the dashboard within 15 minutes
Given aggregated reporting When viewing metrics by clinician, campaign, and platform Then median, p75, and p95 time_to_program_load are displayed and match recomputation from raw events within ±1%
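The aggregate metrics above can be sketched as a percentile computation over per-funnel durations. Note the spec does not fix a percentile convention; this sketch uses nearest-rank, which is one common choice, and the function names are illustrative.

```python
import math

def percentile(sorted_values, p):
    """Nearest-rank percentile (one common convention; the spec does not fix one)."""
    rank = max(1, math.ceil(p / 100 * len(sorted_values)))
    return sorted_values[rank - 1]

def time_to_program_load_stats(durations_s):
    """Median, p75, and p95 over total time_to_program_load values in seconds."""
    xs = sorted(durations_s)
    return {
        "median": percentile(xs, 50),
        "p75": percentile(xs, 75),
        "p95": percentile(xs, 95),
    }
```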
Error Instrumentation and Threshold Alerts
Given an error occurs at any step (e.g., deep_link_invalid, store_redirect_failed, token_timeout, program_load_failed) When the error is detected Then an error_event is logged with error_code, step, message, platform, app_version, os_version, and funnel_id
Given an alert threshold of 2% error rate per step When error_events exceed 2% of funnels in a 15-minute rolling window per clinician, campaign, and platform Then an alert is generated and visible on the dashboard with timestamp, affected dimensions, and current rate
Given a non-retriable program_load_failed When the error is logged Then the funnel is closed as failed and excluded from conversion counts while remaining visible in drop-off analysis
Dashboard Visualization and Exportable Reports
Given a selected date range with filters for clinician, campaign, and platform When viewing the analytics dashboard Then counts, conversion rates per step, and drop-off percentages are displayed with data freshness under 60 minutes
Given an export request When the user exports reports Then a CSV is generated containing one row per dimension per day with step counts, conversions, drop-offs, and time metrics (median, p75, p95), and is available for download and scheduled delivery (S3/SFTP)
Given a drill-down on a clinician and campaign When the segment is selected Then the dashboard lists the top 10 error codes and average time_to_program_load for that segment and matches export values within ±1%
Automated QA and Simulator Harness
"As a QA engineer, I want automated coverage of deferred deep linking scenarios so that regressions are caught before release across devices and OS versions."
Description

Create automated tests and scripts to validate deferred deep link flows across supported iOS and Android versions, including cold-start, warm-start, reinstall, and multi-link scenarios. Provide a simulator-friendly harness and CI jobs that verify token integrity, routing accuracy, and UX fallbacks before release, with artifacts and logs for debugging intermittent failures.

Acceptance Criteria
iOS Cold-Start Deferred Deep Link Auto-Resume
Given a valid deferred deep link L with SnapCode S on a supported iOS version and MoveMate is not installed When the user opens L, is directed to the App Store, installs MoveMate, and opens the app for the first time Then within 5 seconds the app navigates to ProgramOverview for S without requiring manual input And exactly one resolve request for S is sent to the backend and returns 200 And the destination route and programId match the resolve payload And an analytics event deep_link_resume_success is logged with {platform:iOS, cold_start:true, snapCode:S} And the persisted deep link token is cleared after successful navigation
Android Warm-Start Direct Routing From Installed App
Given MoveMate is installed and running/backgrounded on a supported Android version and a valid deep link L with SnapCode S is tapped When L is opened Then the existing task is brought to foreground and ProgramOverview for S is shown within 2 seconds And only one MainActivity instance exists (no duplicate activities) And exactly one resolve request for S is sent and returns 200 And an analytics event deep_link_resume_success is logged with {platform:Android, cold_start:false, snapCode:S} And back navigation returns to the prior screen/state without re-triggering the deep link
Reinstall Flow Preserves Deferred Link and Clears Stale State
Given the app was previously installed and then uninstalled, and the user opens a valid deferred deep link L with SnapCode S and reinstalls MoveMate When the app is first launched post‑install Then ProgramOverview for S is shown within 5 seconds without manual input And no prior user session or patient data is present on device (fresh install state) And exactly one resolve request for S is sent and returns 200 And analytics logs deep_link_resume_success with {reinstall:true, snapCode:S} And no duplicate resume events are emitted on subsequent launches
Multi-Link Race Handling and Most-Recent Precedence
Given the user opens two or more different deferred deep links L1(S1), L2(S2) within 60 seconds before first app launch/install completes When MoveMate launches post‑install Then exactly one program is resumed using the most recently opened link (S2) and navigation occurs within 5 seconds And no navigation to S1 occurs and S1 is marked discarded in logs And exactly one resolve request is counted as successful; any earlier resolves are canceled or ignored And analytics emit deep_link_resume_discarded for S1 and deep_link_resume_success for S2 And only one session is created in the backend
Token Integrity, Expiry, and Tamper Fallback
Given a deep link token is expired or has an invalid signature for SnapCode S When the link is opened on supported iOS or Android versions Then the app does not navigate to a program screen And a non-blocking fallback screen is shown within 2 seconds with error code (expired|tampered) and a Retry/Get New Link CTA And no token is persisted locally and no protected resources are requested And the resolve request returns 401/403 and is logged And analytics logs deep_link_resume_failed with {reason:expired|tampered, snapCode:S} And the app remains stable without crash or ANR
Simulator Harness, CI Matrix, and Artifact Generation
Given the simulator/emulator harness CLI is executed in CI with the defined OS/device matrix When tests for cold‑start, warm‑start, reinstall, multi‑link, and failure fallbacks run Then results are produced in JUnit XML with pass/fail per scenario and non‑zero exit code on any failure And per‑run artifacts include: console logs, network traces (HAR), screenshots, and videos for each test And failed tests are auto‑retried up to 2 times with deterministic seeds and retries are marked in reports And artifacts are uploaded to CI storage and retained for at least 14 days And the harness runs headless on Xcode simulators and Android emulators without physical devices
Offline/Intermittent Network UX Fallback and Resume Preservation
Given the device is offline or experiences a DNS timeout when a valid deep link L with SnapCode S is opened When MoveMate attempts to resolve S Then an offline fallback is shown within 2 seconds with a Retry CTA and help link And S is securely cached for up to 24 hours to allow auto‑resume on next successful connectivity And no crash/ANR occurs and the app does not navigate to a program screen while offline And analytics logs deep_link_resume_failed with {reason:offline, snapCode:S} And on next launch with connectivity restored and within 24 hours, ProgramOverview for S auto‑resumes and logs success

Auto Language

Instantly localizes onboarding and program instructions based on device language or a one-tap picker on the SnapCode screen. Uses clear icons and concise phrasing to reduce confusion and intake errors for multilingual patients.

Requirements

Auto-detect Device Language
"As a multilingual patient, I want the app to automatically display content in my device language so that I can understand instructions without setup."
Description

On first launch and at session start, read the device’s primary locale and automatically localize onboarding and program instruction strings. Apply a deterministic fallback chain (exact locale > base language > English) and respect region-specific variants (e.g., es-MX vs es-ES). Detection runs client-side and does not require sign-in. The detected language remains in effect until the user explicitly changes it. Integrates with the existing i18n layer and supports runtime language switching without app restart.

Acceptance Criteria
First Launch Auto-Locale Detection (Pre-Login)
Given a fresh install with no stored language preference and the device primary locale set to "fr-CA" When the app is launched for the first time before sign-in Then onboarding and program instruction strings render using "fr-CA" resources if available; otherwise fall back per the chain (fr-CA → fr → en) And the resolved locale code is stored in client preferences And no network requests are required to resolve the locale And the initial UI renders with the resolved locale before the first interactive screen is displayed
Session Start Locale Re-Evaluation (No User Override)
Given no explicit in-app language has been set by the user And the device primary locale is changed from "en-US" to "es-MX" between sessions When the app is started Then the app re-resolves locale to "es-MX" (or falls back es-MX → es → en) and applies it to onboarding and program instruction strings And the stored locale preference is updated to the new resolved value And the locale is applied on startup without requiring an app restart
Deterministic Fallback Chain at Key Level
Given the device locale is "pt-BR" and some onboarding/program instruction keys are missing in pt-BR but exist in pt When those strings are rendered Then each missing key falls back to "pt" for that key And any keys missing in "pt" fall back to "en" And the fallback order is strictly exact-locale → base-language → English with no deviations within the same render cycle
Region Variant Selection (es-MX vs es-ES)
Given the device locale is "es-MX" and both "es-MX" and "es-ES" resource bundles exist When onboarding and program instruction strings are rendered Then "es-MX" resources are used, not "es-ES" And when the device locale is "es-AR" with only base "es" available Then "es" resources are used
Persistence Until Explicit User Change
Given the app previously resolved locale to "fr" and stored it And the user has not opened the in-app language picker to change language When the app is relaunched across sessions Then the app continues to use "fr" for onboarding and program instruction strings And when the device language changes to "de-DE" but the user had manually selected "fr" earlier Then the app continues to use "fr" on subsequent launches until the user explicitly selects a different language in-app
Runtime Language Switching Without App Restart
Given the app is running with locale "en" and the user opens the language picker When the user selects "de" and confirms Then all visible onboarding and program instruction strings update to "de" within 1 second without restarting the app And subsequent navigations show "de" for those strings And the stored preference is updated to "de" and auto-detect does not override this on future starts
Client-Side Detection and i18n Integration (Offline, Pre-Sign-In)
Given the device is offline and the user is not signed in When the app launches or a new session starts Then locale detection and string resolution complete using the existing client i18n layer without any server calls And onboarding and program instruction strings render in the resolved locale following the fallback chain And behavior is consistent on iOS and Android for the same device locale
One-tap Language Picker on SnapCode
"As a patient scanning a code, I want to change the language with one tap so that I can proceed without confusion."
Description

Add a globe/icon entry point on the SnapCode screen that opens a modal list of supported languages in their native names/scripts. Provide one-tap selection, instant preview, and clear confirmation. Include concise helper text and recognizable icons to reduce confusion. The list is searchable, respects RTL languages, and avoids country flags. Selection persists immediately and is accessible (screen reader labels, focus order, large text).

Acceptance Criteria
Globe Entry Point on SnapCode Screen
Given I am on the SnapCode screen When the screen loads Then a language picker button with a globe icon is visible in the header area, has an accessible name "Language picker", meets a minimum touch target of 44x44 points, is reachable by keyboard/focus navigation, and does not obstruct the SnapCode scanning area And concise, localized helper text to indicate language selection is visible near the picker or instructions without truncation
Modal Language List in Native Names Without Flags
Given I tap the language picker button When the language modal opens Then a scrollable list of supported languages is displayed, each item showing the language name in its native script and an English translation in parentheses And no country flags are displayed anywhere in the modal And the modal provides a visible Close control and supports backdrop tap or system back to dismiss
One-Tap Selection with Instant Preview and Confirmation
Given the language modal is open and a language is not currently selected When I tap a language item Then the selection applies immediately and the modal closes And the SnapCode screen text (titles, buttons, helper text) updates to the selected language within 1 second without app restart And a non-blocking confirmation (e.g., toast) appears stating "Language set to <Native Name>" And reopening the modal shows a checkmark or selected state on the chosen language
Right-to-Left (RTL) Layout Support
Given I select an RTL language (e.g., Arabic, Hebrew, Farsi) When the SnapCode screen reappears after selection Then layout direction switches to RTL: text aligns right, navigational icons are mirrored where appropriate, and focus traversal proceeds right-to-left And the search field in the language modal aligns right with cursor and placeholder mirroring And switching back to an LTR language restores LTR layout and focus order
Searchable Language List
Given the language modal is open When I type in the search input Then the list filters in real time by native names and English names with case-insensitive and diacritic-insensitive matching And a Clear control resets the query and restores the full list And when no results match, an empty state message "No languages found" is displayed
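The case- and diacritic-insensitive matching above can be implemented by stripping combining marks after Unicode decomposition. A minimal sketch (function names are illustrative; the language list shape is an assumption):

```python
import unicodedata

def normalize(s):
    """Lowercase and strip combining marks for diacritic-insensitive matching."""
    decomposed = unicodedata.normalize("NFD", s)
    return "".join(c for c in decomposed if not unicodedata.combining(c)).casefold()

def filter_languages(languages, query):
    """languages: list of (native_name, english_name); filters on both names."""
    q = normalize(query)
    return [lang for lang in languages
            if q in normalize(lang[0]) or q in normalize(lang[1])]
```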
Accessibility and Large Text Compliance
Given a screen reader is active or system text size is set to large (up to 200%) When I navigate the SnapCode screen and open the language modal Then all interactive elements expose name, role, and state to assistive tech; the language picker announces as a button; the selected language announces as "selected"; and dynamic language changes are announced politely And focus order is logical, cyclic, and trap-free; all tap targets are at least 44x44 points; text scales without clipping or overlap; and color contrast meets WCAG 2.1 AA
Immediate and Durable Language Persistence
Given I have selected a language from the modal When I navigate to other screens and return to the SnapCode screen or relaunch the app Then the selected language remains applied across the app and persists across sessions And the preference loads and applies without network connectivity and before the SnapCode screen is shown
Persistent Language Preference
"As a returning patient, I want the app to remember my language so that I don’t have to reselect it."
Description

Persist the chosen language at both local (device) and server profile levels. For anonymous users, store locally and seamlessly migrate to the profile upon sign-in or clinician linking. Sync across devices and respect explicit user choice over auto-detection. Provide a safe reset to system language and clear display of the current selection in Settings. Handle edge cases (unsupported locales, deleted profiles) with graceful fallbacks.

Acceptance Criteria
Explicit Choice Overrides Auto-Detection
Given the app launches on a device with system language set to French and no explicit language has been chosen, When the app starts for the first time, Then the app language is set to French. Given the user explicitly selects English in the language picker, When the device system language later changes or the app relaunches, Then the app UI remains in English on all screens and sessions. Given an explicit language preference exists, When auto-detection runs, Then auto-detection does not override the explicit preference.
Anonymous Preference Migrates on Sign-In or Clinician Linking
Given an anonymous user selects Portuguese (pt-BR), When the selection is made, Then the app stores the BCP 47 locale code and a UTC timestamp locally. Given an anonymous user with a stored language later signs in or is linked by a clinician, When the first profile sync completes, Then the server profile language is updated to the locally stored language and the local cache is marked as synced. Given a different language already exists on the server profile, When migration occurs, Then the most recent explicit choice by timestamp wins and the other value is overwritten. Then the app UI reflects the winning language immediately after sync without requiring restart.
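The conflict rule above ("the most recent explicit choice by timestamp wins") can be sketched as a small merge function. The preference shape (BCP 47 code plus UTC timestamp) follows the criterion; the function name is hypothetical.

```python
from datetime import datetime, timezone

def merge_language_preference(local, server):
    """Each preference is (bcp47_code, utc_datetime) or None; latest explicit choice wins."""
    if local is None:
        return server
    if server is None:
        return local
    # Ties favor the local value, which is what triggered the sync.
    return local if local[1] >= server[1] else server
```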
Cross-Device Sync Propagates Latest Explicit Preference
Given a signed-in user has language = English on their profile, When the user changes language to German on Device B in Settings, Then Device B updates the server within 5 seconds and reflects German immediately. Given Device A is online, When the server language changes to German, Then Device A applies German within 60 seconds or on next foreground, whichever comes first. Given a device was offline during the change, When it next reconnects and syncs, Then it applies the latest server language before rendering the home screen. Then the most recent explicit language change (by server timestamp) is the single source of truth across all devices.
Reset to System Language
Given the user has an explicit preference set to Spanish while the device system language is French, When the user taps "Reset to System Language" in Settings and confirms, Then the explicit preference is cleared locally and on the server, the UI switches to French within 1 second, and the choice persists after app relaunch. Given other devices are signed in to the same profile, When reset occurs on any one device, Then other devices adopt the system-language-derived setting on next sync unless they have their own explicit override set.
Unsupported Locale Fallbacks
Given the user selects a regional variant (e.g., es-MX) that is unsupported while the base language (es) is supported, When the preference is saved, Then the app uses and stores the base language (es) as the effective preference and reflects it in the UI. Given the user selects a language that is entirely unsupported, When the preference is evaluated, Then the app falls back to English and stores English as the effective preference. Given the device system language is unsupported and no explicit preference exists, When the app launches, Then the app defaults to English. Then the current effective language is shown correctly in Settings and persists across relaunches and devices.
Deleted Profile Handling
Given the server indicates the user profile has been deleted, When the user signs in again or is re-linked by a clinician, Then any locally stored language preference seeds the new server profile and is applied to the UI immediately after first sync. Given no local language exists upon re-link, When a new profile is created, Then the app applies device system language if supported, else English, and stores it as the new preference. Then subsequent device sessions reflect the seeded language consistently.
Settings Displays and Updates Current Language
Given a language preference exists, When the user opens Settings > Language, Then the current language is displayed using the endonym and region (e.g., "Deutsch (Deutschland)") matching the stored locale code. When the user selects a different supported language from the list, Then the UI updates within 500 ms without app restart and the choice persists after relaunch. Then only supported locales are shown, the selected locale is indicated with a checkmark, and the stored value matches the displayed selection.
Localized Program Instructions and Form Feedback
"As a patient performing exercises, I want instructions and form feedback in my language so that I can follow my program correctly."
Description

Localize all exercise-related content: exercise names, step-by-step cues, tempo/time/rep guidance, safety warnings, and computer-vision form error messages. Support ICU message syntax for pluralization and variables (e.g., reps, seconds), locale-aware number/date/units (metric/imperial), and full RTL layout mirroring. Ensure concise phrasing that fits small screens, with safe truncation rules. Provide a mapping layer from CV error codes to localized, patient-friendly messages.

Acceptance Criteria
Auto-select and override language for program instructions
Given the device language is supported When the app launches Then all exercise names, cues, guidance, safety warnings, and form feedback render in that locale on first load
Given the device language is unsupported When the app launches Then program instructions and form feedback fall back to the default locale (en) without any mixed-language strings
Given the user opens the SnapCode screen When they pick a different language from the one-tap picker Then all instruction and feedback strings update to the selected locale within 1 second and persist across app restarts
Given a clinic default language exists and the patient has chosen a language When loading a program Then the patient’s choice takes precedence, else clinic default, else device language
Given the device is offline When the user switches language Then previously cached translations are used and missing keys gracefully fall back to default locale
ICU pluralization and variables in instructions and errors
Given reps=0 When rendering an instruction "Complete {reps} {reps, plural, =0 {reps} one {rep} other {reps}}" Then the output is correctly localized for zero in the selected locale (e.g., "0 reps" in en)
Given reps=1 When rendering the same instruction Then the output is "1 rep" in the selected locale without placeholder tokens
Given reps=5 When rendering the same instruction Then the output is "5 reps" in the selected locale with locale-aware number formatting
Given holdSeconds=1 and holdSeconds=5 When rendering "Hold for {holdSeconds} {holdSeconds, plural, one {second} other {seconds}}" Then singular and plural forms are correct in instructions and CV error messages
Given variables such as {seconds}, {reps}, {angle} When rendering any ICU message Then all variables substitute with locale-formatted numbers and units and no raw ICU syntax is displayed
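The selection logic of the ICU plural argument above can be illustrated with a minimal stand-in: exact matches like `=0` take precedence over CLDR plural categories. This is not an ICU parser; a real implementation would use an ICU library, and only the English plural rule is shown here as an assumption.

```python
def plural_category(n, locale="en"):
    # CLDR cardinal rule for English only (assumption for this sketch).
    return "one" if n == 1 else "other"

def render_reps(reps, locale="en"):
    """Minimal stand-in for "{reps, plural, =0 {reps} one {rep} other {reps}}"."""
    forms = {"=0": "reps", "one": "rep", "other": "reps"}
    # Exact-value matches (=0, =1, ...) win over plural categories, as in ICU.
    word = forms.get(f"={reps}", forms.get(plural_category(reps, locale), forms["other"]))
    return f"{reps} {word}"
```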
Locale-aware numbers, dates, and units in guidance
Given locale=en-US When showing load and length Then weights display in lb, lengths in in, decimals use a dot (e.g., "12.5 lb"), and times use 12-hour clock with AM/PM
Given locale=fr-FR When showing load and length Then weights display in kg, lengths in cm, decimals use a comma (e.g., "12,5 kg"), and times use 24-hour clock with localized month/day names
Given the patient has an explicit unit preference When rendering guidance Then the preference overrides locale defaults consistently across all instruction and feedback surfaces
Given numeric values with precision When converting units Then values are rounded and displayed to one decimal place using the locale’s separators
Given dates in program schedules When displayed to the patient Then they use the locale’s date order and month names (e.g., en-GB "5 Jan 2026")
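The decimal-separator behavior in the two examples above can be shown with a toy formatter. This is purely illustrative: production code would delegate to the platform i18n layer rather than hard-code separators, and the table below covers only the two locales named in the criterion.

```python
# Assumption: only the two locales from the examples are listed here.
DECIMAL_SEPARATOR = {"en-US": ".", "fr-FR": ","}

def format_quantity(value, unit, locale):
    """Format to one decimal place using the locale's decimal separator."""
    text = f"{value:.1f}".replace(".", DECIMAL_SEPARATOR.get(locale, "."))
    return f"{text} {unit}"
```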
Full RTL layout mirroring for instructions and feedback
Given the selected language is RTL (e.g., ar, he) When viewing program instructions Then the entire layout mirrors horizontally, text aligns right, and navigation/back/next icons are directionally correct
Given the selected language switches between LTR and RTL via the picker When the screen re-renders Then no UI elements overlap or clip and reading order remains logical
Given CV form feedback overlays with directional hints When viewed in RTL Then arrows, progress indicators, and alignment mirror appropriately while measurements remain correctly formatted
Given mixed content (numbers/units within RTL text) When displayed Then punctuation, numerals, and units appear in the correct visual order for the locale
Safe truncation and small-screen readability for instructions
Given the smallest supported viewport (e.g., 320pt/360dp) When rendering exercise names, cues, and warnings Then no text overflows or is clipped; lines wrap up to 2 lines and otherwise truncate with an ellipsis at a word boundary
Given truncated strings with ICU variables When displayed Then variables and their units remain visible and are not truncated mid-token
Given system font scaling up to 120% When viewing instructions Then content remains readable without overlap; critical safety warnings are never truncated and expand/scroll if needed
Given accessibility screen readers When focusing any instruction or feedback string Then the full, untruncated text is read aloud
CV error code mapping to localized patient-friendly messages
Given the current catalog of CV error codes When a code is emitted during an exercise Then a localized, patient-friendly message is shown for 100% of known codes and no raw codes are displayed
Given an unknown CV error code When emitted Then a generic localized fallback message appears and the unmapped code is logged for diagnostics
Given CV messages that include variables (e.g., angle, duration) When rendered Then values are formatted per locale and appropriate units are used
Given a severity level for an error When displayed Then the message includes the correct severity icon/color consistently across locales
Given new locales are added When building the app Then message mappings load from localization resources without code changes
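The mapping layer described above can be sketched as a lookup with a generic fallback and diagnostic logging for unmapped codes. The error-code names and message strings here are hypothetical examples, not the product's actual catalog, and real messages would come from localization resources rather than inline dictionaries.

```python
import logging

# Hypothetical codes and copy for illustration only.
CV_ERROR_MESSAGES = {
    "KNEE_VALGUS": {"en": "Keep your knee in line with your toes."},
    "RANGE_SHORT": {"en": "Try to bend a little further."},
}
GENERIC_FALLBACK = {"en": "Check your form and try again."}

def message_for_cv_error(code, locale="en"):
    """Resolve a patient-friendly localized message; never show the raw code."""
    entry = CV_ERROR_MESSAGES.get(code)
    if entry is None:
        logging.warning("unmapped CV error code: %s", code)  # diagnostics
        entry = GENERIC_FALLBACK
    return entry.get(locale, entry["en"])  # locale fallback to English
```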
Translation Source Management
"As a product manager, I want a manageable translation system so that we can keep content consistent and up to date across languages."
Description

Implement a centralized, versioned string catalog with namespaced keys for onboarding and program domains. Support import/export to standard formats (JSON/i18next, XLIFF) for translators, validation for missing/unused keys, and automated fallback to English at build/runtime. Include glossary support for clinical terms, preview builds for QA in each language, and length checks to catch overflow. Provide CI checks to block releases with incomplete critical strings.

Acceptance Criteria
Versioned Namespaced String Catalog for Onboarding and Program
Given the project loads the string catalog schema, When validation runs, Then all keys match the pattern "<domain>.<feature>.<identifier>" where domain ∈ {onboarding, program}. Given a pull request adds/changes/removes any key or value, When CI validation runs, Then the catalog version is incremented semantically and a changelog entry with author, date, and summary is generated. Given duplicate keys exist across files, When validation runs, Then the build fails with a duplicate key error listing file and line. Given the app requests a value by key for a present locale string, When resolved at runtime, Then the string is returned within 5 ms median on mid-tier devices and includes any ICU/plural tokens intact.
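The key-pattern rule above can be sketched as a regex check. The exact character set allowed in the feature and identifier segments is an assumption (lowercase letters, digits, underscores); only the two domains named in the criterion are accepted.

```python
import re

# "<domain>.<feature>.<identifier>" with domain restricted to onboarding/program.
KEY_PATTERN = re.compile(r"^(onboarding|program)\.[a-z0-9_]+\.[a-z0-9_]+$")

def invalid_keys(keys):
    """Return keys that do not match the required namespaced pattern."""
    return [k for k in keys if not KEY_PATTERN.match(k)]
```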
Translator Import/Export in JSON (i18next) and XLIFF with Round-Trip Integrity
Given the source catalog in i18next JSON, When exporting to XLIFF, Then all keys, namespaces, and developer notes are preserved, and the exported unit count equals the source key count ± plural variants. Given a translated XLIFF is imported, When round-trip verification runs, Then unchanged segments remain byte-identical, changed segments update only the corresponding keys, and placeholders (e.g., {count}) are preserved. Given an import contains missing or extra placeholders compared to the source, When validation runs, Then the import is rejected with an error listing affected keys and placeholder differences. Given locale-specific plural forms are defined, When exporting/importing, Then plural variants map correctly per CLDR rules for that locale.
Static Analysis for Missing/Unused Keys and CI Gate for Critical Strings
Given application source code references translation keys, When static analysis runs, Then any missing keys are reported with file and line numbers. Given the catalog contains keys not referenced in code, When analysis runs, Then those keys are listed as unused with their namespaces. Given any critical keys (onboarding.* or program.* startup screens) are missing for release locales configured in ci.locales, When the CI gate runs, Then the release build is blocked and the job fails with a summary of missing keys by locale. Given non-critical locales are incomplete, When the CI gate runs, Then a warning is emitted without blocking the build.
English Fallback at Build and Runtime
Given a key is missing in the requested locale at runtime, When the app resolves the string, Then the English value is returned and a single aggregated warning per session is logged with key and locale. Given build-time static content bundling, When a locale is missing a key, Then the build fills the value from English and marks it with a fallback=true metadata flag in the artifact. Given English is also missing for a requested key, When resolution occurs, Then the UI shows the key name bracketed (e.g., [onboarding.welcome.title]) and an error is logged.
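The fallback chain above (requested locale, then English, then a bracketed key name) can be sketched as below. The catalog shape and the per-session warning de-duplication are illustrative assumptions, not the real implementation.

```python
# One aggregated warning per (key, locale) per session, per the criteria.
logged_warnings: set[tuple[str, str]] = set()

def resolve(catalog: dict[str, dict[str, str]], locale: str, key: str) -> str:
    """Resolve a string with fallback: locale -> English -> [key]."""
    strings = catalog.get(locale, {})
    if key in strings:
        return strings[key]
    if (key, locale) not in logged_warnings:
        logged_warnings.add((key, locale))
        print(f"warn: missing '{key}' in '{locale}', falling back to en")
    english = catalog.get("en", {})
    if key in english:
        return english[key]
    # English is missing too: surface the key name so QA can spot it.
    return f"[{key}]"
```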
Clinical Glossary Enforcement and Context Metadata
Given a glossary of approved clinical terms with per-locale translations, When exporting translation packages, Then glossary terms appear with locked suggestions and context notes for translators. Given an imported translation modifies a locked glossary term, When validation runs, Then the import is rejected with an error citing the term, key, and locale. Given a key has developer context (e.g., character constraints, usage screenshot link), When exporting to XLIFF and JSON, Then the context is included as notes/comments and is visible in translator tools.
Per-Locale Preview Builds and In-App Language Picker for QA
Given QA selects a locale from the in-app language picker, When the app reloads, Then 100% of strings switch to the selected locale without requiring a device language change. Given a preview build is produced for a target locale, When launched offline, Then the app loads that locale from the bundle without network requests and shows its version/locale in the About screen. Given pseudo-localization is enabled, When the app runs, Then all translatable strings display with accent expansion and delimiters, and non-translatable content remains unchanged.
Automated Length and Overflow Checks per Locale
Given per-view character and pixel thresholds are configured, When build-time checks run, Then any strings exceeding thresholds are listed with key, locale, and overflow percentage. Given a rendered string exceeds its container by more than 5% at runtime, When detected, Then the app applies safe wrapping or font scaling within accessibility guidelines and logs an overflow event with key, locale, and device model. Given overflow events are logged, When QA reviews the report, Then aggregated counts are available by key and locale for the last 7 days.
Offline Language Packs
"As a patient with limited connectivity, I want localized content to work offline so that I can complete exercises anywhere."
Description

Cache language bundles on-device to ensure localized onboarding and program instructions work without connectivity. Pre-bundle top locales and lazy-load additional packs from a CDN with integrity verification. Enforce a size budget, delta updates, and automatic rollback on corrupt downloads. Provide transparent fallback to the last known good pack or English if needed.

Acceptance Criteria
Offline onboarding uses cached language pack
Given the device language is Spanish and there is no network connectivity When the user launches the app and reaches onboarding Then onboarding and program instruction strings render in Spanish within 500 ms of view load And no network requests are attempted for localization And telemetry logs "localization_source=cached" with the installed pack version
Pre-bundled top locales available on first launch
Given a clean install with no network connectivity and device language is one of [English, Spanish, French, Portuguese, Chinese] When the app is launched for the first time Then onboarding and program instruction strings render in the device locale And the pack version matches the app bundle manifest And the total size of pre-bundled language assets is ≤ 20 MB
Lazy-load language pack with integrity verification from CDN
Given the user selects a non-installed language while online When the app downloads the language pack from the CDN Then the download occurs over HTTPS from the pinned CDN domain And the pack’s signature and SHA-256 checksum match the manifest And the pack is stored atomically and activated only after verification And if integrity verification fails, the pack is discarded, the UI remains in the prior language, and an error event INTG_FAIL is logged
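The checksum gate in the criterion above can be sketched as follows. Signature verification, HTTPS pinning, and atomic storage are elided; this sketch shows only the SHA-256 check, with the function name and logging as illustrative assumptions.

```python
import hashlib

def verify_and_activate(pack_bytes: bytes, expected_sha256: str) -> bool:
    """Activate a downloaded pack only if its checksum matches the manifest.

    Returns False on mismatch, mirroring the INTG_FAIL path: the pack is
    discarded and the UI stays in the prior language.
    """
    digest = hashlib.sha256(pack_bytes).hexdigest()
    if digest != expected_sha256:
        print("INTG_FAIL: checksum mismatch, pack discarded")
        return False
    # ...store atomically, then switch the active pack pointer...
    return True
```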
Delta updates applied with validation and fallback
Given a device has language pack version N installed and the manifest advertises version N+1 When the app checks for updates Then it requests and applies a delta patch if available And the post-patch hash equals the target version hash And total bytes downloaded are at least 60% smaller than the full pack size when a delta is used And if delta application or validation fails, the app retries once with a full pack download and validates again
Automatic rollback on corrupt or incomplete packs
Given a newly downloaded pack was applied but fails validation at runtime or on next launch When the app initializes localization Then it reverts to the last known good pack for that language without user intervention And the app does not crash or display placeholder strings And an audit event "rollback_performed" is recorded with previous and attempted versions And the corrupt pack artifacts are quarantined or deleted in the same session
Transparent fallback to last known good or English when unavailable
Given the selected language pack is not installed and the device is offline When the user opens onboarding or program instructions Then the app uses the last known good pack for that language if available And otherwise falls back to English And the language indicator displays the effective language And no blocking errors or consent dialogs are shown due to missing packs
Size budget enforcement and pack eviction
Given a configured cache budget of 50 MB for language packs When installing or updating a pack would exceed the budget Then the app evicts least-recently-used non-pinned packs until the footprint is ≤ 50 MB And pre-bundled top locales are pinned and never evicted And if the budget cannot be met after eviction, the download is aborted and the current language remains unchanged And the total on-device language pack footprint never exceeds 50 MB
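The eviction policy above can be sketched as a planning step, under assumed data shapes: each pack record carries a size in bytes, a pinned flag, and a last-used timestamp. This is a sketch, not the real cache manager.

```python
BUDGET = 50 * 1024 * 1024  # configured 50 MB cache budget

def plan_install(packs: list[dict], new_size: int) -> tuple[bool, list[str]]:
    """Decide whether a new pack fits; return (ok, ids_to_evict).

    Evicts least-recently-used non-pinned packs until the footprint fits;
    pinned (pre-bundled) packs are never candidates. If the budget still
    cannot be met, the download is aborted and nothing is evicted.
    """
    total = sum(p["size"] for p in packs)
    evict: list[str] = []
    candidates = sorted(
        (p for p in packs if not p["pinned"]), key=lambda p: p["last_used"]
    )
    for p in candidates:
        if total + new_size <= BUDGET:
            break
        total -= p["size"]
        evict.append(p["id"])
    if total + new_size > BUDGET:
        return False, []  # abort: current language remains unchanged
    return True, evict
```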
Accessibility and RTL Compliance
"As a patient who relies on accessibility features, I want localized content to be readable and navigable so that I can use the app independently."
Description

Ensure all localized UI is accessible: screen reader labels in the selected language, proper focus order, Dynamic Type support, and sufficient color contrast. Fully support RTL languages with layout mirroring, glyph-appropriate icons, and cursor/caret behavior. Avoid flag icons, ensure native-language labels, and verify line-breaking and hyphenation rules per locale. Include automated and manual accessibility checks per language.

Acceptance Criteria
Screen Reader Localization and Language Picker Endonyms (No Flags)
Given the device language is Spanish (es-ES) and no in-app language is chosen When the user opens the SnapCode and Onboarding screens Then all accessibility labels, hints, and values are announced in Spanish by the screen reader and match on-screen text with zero missing or fallback keys Given the user selects Arabic (ar) from the language picker When returning to any primary screen Then all accessibility labels, hints, and values are announced in Arabic within the same session and after app relaunch Given the language picker is displayed When the list of languages is rendered Then each language is shown by endonym in its native script with no flag icons, and each option exposes an accessible name equal to the endonym and a selectable control role Then the automated a11y audit reports 0 occurrences of missing accessibilityLabel/Name across the targeted screens for the active locale
RTL Layout Mirroring and Icon Direction Semantics
Given Arabic (ar) is the active language When the user opens SnapCode, Onboarding, Program Instructions, Exercise Player, and Settings Then layouts are mirrored (leading/trailing swapped), navigation back arrow points right, disclosure chevrons point left, horizontal progress flows right-to-left, and page transitions swipe from the right Then directional icons (back/next/chevron) are mirrored, while semantic icons (play, checkmark, camera, heart rate) are not mirrored Then mixed-direction content (numbers, URLs) renders LTR within RTL context with correct bidi isolation and visual alignment
Focus Order and Screen Reader Navigation Consistency
Given any supported language (LTR or RTL) When navigating each primary screen using VoiceOver/TalkBack swipe gestures Then focus order matches visual order (top-to-bottom, leading-to-trailing per writing direction) with no unreachable, hidden, or duplicate-focus elements Given a modal, dialog, or bottom sheet is presented When focus is set Then initial focus lands on the modal title or first actionable control, focus is trapped within the modal until dismissal, and returns to the invoking control on close Then automated a11y scan reports 0 issues for off-screen focus, missing role/name/state, or focusable-but-invisible elements on targeted screens
Dynamic Type, Reflow, and Locale-Appropriate Line Breaking
Given system text size is set to each step from XS through XXXL (including Accessibility sizes where supported) When viewing SnapCode, Onboarding, and Program Instructions in English, Spanish, and Arabic Then all text reflows without clipping or overlap; truncation occurs only with an ellipsis and preserves full control accessibility labels and hints Then tap targets remain at least 44x44pt on iOS and 48x48dp on Android at all sizes Then line breaking and hyphenation follow locale rules: Spanish enables hyphenation; Arabic uses proper RTL wrapping without hyphenation; Latin-script URLs/emails do not hyphenate and are ellipsized when needed Then automated layout tests detect 0 occurrences of clipped text or tap targets below minimum size across tested sizes and locales
Color and Non-Text Contrast Compliance
Given standard and high-contrast system settings When inspecting text, essential icons, and focus indicators across primary screens Then contrast ratios meet WCAG 2.2 AA: 4.5:1 for normal text, 3:1 for large/bold text, and 3:1 for non-text essential graphics and focus outlines in all locales Then information is not conveyed by color alone; an additional cue (icon, text, or pattern) is provided for state changes and errors Then automated contrast checks report 0 violations per screen per locale, and manual spot checks confirm focus indicator visibility on interactive controls
RTL Text Input, Caret, and Selection Behavior
Given Arabic (ar) is active When focusing a standard text field Then text aligns right, the caret starts at the right edge, and navigation/editing gestures move logically for RTL Then selection handles, context menus, and insertion point mirror correctly and do not occlude the text being edited Then mixed-direction fields behave appropriately: phone/OTP/numeric inputs are LTR with left-aligned digits; email/URL inputs default to LTR; user-visible placeholders and labels respect the locale direction Then deleting characters removes the character visually closest to the caret consistent with RTL behavior
Automated and Manual Accessibility Verification per Locale
Given supported locales [en, es, ar, he] When the CI pipeline runs Then automated accessibility suites execute per locale and fail the build on any High-severity issues in categories: missing labels/names, insufficient contrast, offscreen or trapped focus, small tap targets, overlapping/clipped text Given a release candidate build When executing manual a11y checks with VoiceOver (iOS) and TalkBack (Android) Then the checklist covering navigation, forms, dynamic type at XL, RTL mirroring, and text input passes for each locale with evidence (screenshots or recordings) attached to the test run Then all identified issues are tracked with locale tags and resolved or formally accepted with documented rationale before release

Adaptive Safety

Clinician-configured micro‑screeners (1–3 questions) that appear after SnapCode scan when certain protocols require it. Flags red‑risk answers, notifies the clinician, and pauses start until cleared—keeping patients safe without slowing others.

Requirements

Screener Trigger Rules Engine
"As a clinician, I want safety screeners to appear only when a protocol requires them so that at-risk patients are checked without slowing everyone else."
Description

Implements a low-latency decision engine that determines whether a micro-screener (1–3 questions) must be presented immediately after a SnapCode scan based on protocol configuration and patient context. Supports rule conditions such as protocol ID, patient risk flags, time since surgery/injury, prior screener outcomes, and clinician overrides. Defaults to bypass when no rule applies to avoid impacting non-gated flows. Ensures response in under 200 ms, logs decisions for auditability, and gracefully degrades if metadata cannot be fetched (e.g., temporary network issues) by failing safe according to clinic policy. Integrates with existing protocol metadata, the scan event pipeline, and the session start flow.

Acceptance Criteria
Require screener on protocol and risk-flag match
Given a SnapCode scan for protocolId "P-123" with a configured rule "require screener when riskFlag=DVT" And the patient context has riskFlag DVT=true When the rules engine evaluates the scan event Then decision=REQUIRE_SCREENER with reason=RULE_MATCH and a non-empty ruleId is returned And the decision is attached to the scan event payload and delivered to the session start flow And the session start flow is paused until screener outcome=Cleared
Trigger based on time-since-event and prior screener outcome
Given a rule "require screener when timeSinceSurgery < 14 days" And the patient surgeryDate is 10 days ago in patientTimeZone UTC-5 When the rules engine evaluates at currentTime UTC Then decision=REQUIRE_SCREENER with reason=RULE_MATCH and computed timeSinceSurgery=10 days using patientTimeZone Given a rule "require screener when priorOutcome in [AMBER, RED] within 7 days" And the most recent screener outcome is AMBER at T-3 days When the rules engine evaluates a new scan Then decision=REQUIRE_SCREENER with reason=RULE_MATCH and priorOutcomeWindowSatisfied=true
Bypass when no rule matches
Given a SnapCode scan for protocolId "P-999" And no configured rules apply to the patient context When the rules engine evaluates the scan event Then decision=BYPASS with reason=NO_MATCH is returned And the session start flow proceeds without presenting a screener And a decision log entry exists with outcome=BYPASS and reason=NO_MATCH
Clinician override precedence
Given a clinician override "force screener" exists for patientId X and protocolId Y When the rules engine evaluates a scan for patientId X and protocolId Y Then decision=REQUIRE_SCREENER with reason=CLINICIAN_OVERRIDE is returned regardless of other rules And the decision log marks overrideApplied=true and includes overrideId Given a clinician override "bypass screener" exists for patientId X and protocolId Y When the rules engine evaluates a scan for patientId X and protocolId Y Then decision=BYPASS with reason=CLINICIAN_OVERRIDE is returned even if a rule would require a screener
Performance: decision latency under 200 ms
Given production-like conditions (≥200 RPS, cold start permitted) When 10,000 consecutive decision evaluations are executed Then the 95th percentile end-to-end latency (request received to response sent) is ≤ 200 ms And no single decision exceeds 500 ms And results remain functionally correct (no increase in NO_MATCH vs control for identical inputs)
Decision audit logging
Given any decision evaluation completes When the log sink is queried for that decision Then an immutable record exists containing fields: decisionId, timestamp (UTC), scanId, patientId, protocolId, matchedRuleIds (may be empty), decision (REQUIRE_SCREENER|BYPASS), reasonCode, clinicPolicy, metadataVersion, latencyMs And the record is available via the audit API within 5 minutes of the decision time And the record can be correlated to the originating scan event by scanId
Graceful degradation with clinic fail-safe policy
Given the protocol metadata service times out at 300 ms for a clinic with clinicPolicy=FAIL_OPEN When the rules engine evaluates a scan event Then decision=BYPASS with reason=DEGRADED_METADATA_UNAVAILABLE and policy=FAIL_OPEN is returned within 200 ms And a warning log is written with degradation=true Given the protocol metadata service times out at 300 ms for a clinic with clinicPolicy=FAIL_CLOSED When the rules engine evaluates a scan event Then decision=REQUIRE_SCREENER with reason=DEGRADED_METADATA_UNAVAILABLE and policy=FAIL_CLOSED is returned within 200 ms And a warning log is written with degradation=true
Micro‑Screener Builder
"As a clinician admin, I want to configure brief safety screeners for specific protocols so that the screening aligns with my clinic’s standards and risks."
Description

Provides a clinician-facing configuration UI to create and manage micro-screeners per protocol with 1–3 questions. Supports question types (yes/no, single-choice, numeric), required/optional flags, localized text, tooltips, and red-risk conditions (e.g., specific answers or thresholds). Includes preview mode, validation (max 3 questions, at least one red-risk rule), versioning with effective dates, and assignment to one or multiple protocols. Stores definitions as versioned metadata consumable by the runtime trigger engine and risk evaluator.

Acceptance Criteria
Create screener with 1–3 questions and core validations
- Given a new micro-screener, When the clinician attempts to add a 4th question, Then the UI prevents the action, shows "Maximum 3 questions", and Save remains disabled.
- Given a new micro-screener, When Save is clicked with 0 questions, Then Save is blocked and an inline error "At least 1 question is required" is shown.
- Given a question row with missing text or type, When Save is clicked, Then Save is blocked and inline errors identify the missing fields on that row.
- Given a micro-screener containing 1–3 fully defined questions, When Save is clicked, Then the screener is saved successfully and appears in the list of screeners with name, ID, and last-updated timestamp.
Question types and required/optional flags
- Given a Yes/No question type, When previewed, Then two options Yes and No render and a single selection can be made.
- Given a Single-choice question type, When fewer than 2 options are defined, Then Save is blocked with "At least 2 options required".
- Given a Numeric question type, When a non-numeric value is entered in preview, Then an inline validation error is shown and the value is rejected.
- Given a question marked Required, When preview submission is attempted without answering it, Then submission is blocked and "This question is required" is shown.
- Given a question marked Optional, When preview submission is attempted without answering it, Then submission succeeds.
Red-risk rules definition and validation
- Given a micro-screener with no red-risk rule configured on any question, When Save is clicked, Then Save is blocked with "At least one red-risk rule is required".
- Given a Yes/No question, When "Yes" is set as red-risk, Then the rule is saved and listed in the rules summary for that question.
- Given a Single-choice question, When one or more options are marked as red-risk, Then those options are saved as red-risk triggers.
- Given a Numeric question, When a comparison operator (>, >=, <, <=, =) and threshold value are set, Then the red-risk rule is saved and evaluated in preview.
- Given preview mode, When an answer meets a red-risk condition, Then a red-risk indicator is displayed for the screener.
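The three red-risk rule shapes above (a flagged yes/no answer, flagged single-choice options, and a numeric comparison) can be evaluated with a small dispatcher. The rule field names here are illustrative assumptions, not the stored schema.

```python
import operator

# The comparison operators permitted by the builder, per the criteria above.
OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
       "<=": operator.le, "=": operator.eq}

def is_red_risk(rule: dict, answer) -> bool:
    """Return True if the given answer triggers the red-risk rule."""
    kind = rule["type"]
    if kind == "yes_no":
        return answer == rule["red_answer"]
    if kind == "single_choice":
        return answer in rule["red_options"]
    if kind == "numeric":
        return OPS[rule["operator"]](float(answer), rule["threshold"])
    raise ValueError(f"unknown rule type: {kind}")
```

The same evaluator could back both the preview indicator in the builder and the runtime risk evaluator, keeping the two in parity.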
Localization and tooltips
- Given localized text is entered for multiple locales, When the preview locale is switched, Then question text and answer labels render in the selected locale.
- Given a translation is missing for a field in the selected locale, When previewed, Then the field falls back to the default locale text.
- Given tooltip text is provided for a question, When the info icon is activated in preview, Then the tooltip content is displayed.
- Given no tooltip text is provided, When previewed, Then no info icon appears for that question.
Preview mode parity with runtime rendering
- Given a configured micro-screener, When Preview is opened, Then question order, input controls, required indicators, and validation behaviors match those defined in the builder configuration.
- Given answers are modified in preview, When a response changes from non-risk to red-risk and back, Then the red-risk indicator updates in real time to reflect the current state.
- Given the preview locale is changed, When switched, Then content updates without page reload and previously entered compatible answers are preserved.
Versioning with effective dates
- Given a saved screener version 1, When a new version is created from it, Then version 2 is created linked to the same screenerId and prepopulated with version 1 content for editing.
- Given an effective date is entered for a version, When saved, Then the version stores an ISO-8601 effectiveDate with timezone.
- Given multiple versions exist with different effective dates, When the metadata is queried with a reference time T, Then the active version is the highest version whose effectiveDate is ≤ T.
- Given an attempt is made to save two versions of the same screener with the same effective date, When saving the second, Then Save is blocked with "Effective date must be unique per screener".
Protocol assignment and metadata export
- Given a screener is assigned to multiple protocol IDs, When saved, Then the assignment persists and the screener is associated with all listed protocols.
- Given an attempt to assign the same protocol twice, When saving, Then the duplicate is prevented and only one association is stored.
- Given a saved screener, When retrieving its definition via the internal metadata API, Then the payload includes screenerId, version, effectiveDate, questions[], redRiskRules, localizedTexts, and assignedProtocolIds.
- Given a screener is unassigned from a protocol and saved, When retrieving metadata, Then assignedProtocolIds no longer includes that protocol ID.
Real‑time Risk Evaluation & Start Gate
"As a patient, I want the app to pause the session when an answer indicates risk so that I don’t perform exercises that could harm me."
Description

Evaluates screener responses client-side for immediate feedback and server-side for authoritative risk classification. When a red-risk condition is met, the system blocks session start, displays a clear safety message and next steps, and creates a "Paused—Needs Clinician Clearance" state tied to the patient-protocol. Supports clinician override/clearance with reason codes from the dashboard, persists pause status across sessions/devices, and unlocks access once cleared. Provides safe alternative recommendations if configured by the clinician.

Acceptance Criteria
Client-side immediate risk feedback on screener submit
Given a screener with at least one red‑risk configured response When the patient taps Submit Then the app evaluates responses locally and within 300 ms disables Start Session, shows the configured safety message with Next Steps, and prevents navigation into the exercise session And the client sends the response payload to the server within 100 ms of submit And if no local red‑risk is detected, Start Session remains disabled until the server responds Green/Amber; upon Green/Amber, Start Session enables within 100 ms of receipt And if local evaluation flags red but the server returns Green within 2 seconds, the UI removes the local warning and enables Start Session within 100 ms of the server response
Server classification creates and returns Pause state
Given the server receives screener responses for a patient‑protocol When it classifies the result as Red Then it creates a Pause record with state "Paused—Needs Clinician Clearance" tied to the patient‑protocol, including timestamp (UTC), risk reason code, screener version, and response snapshot per data policy And it returns an HTTP 200 with body including risk: Red, pauseId, message, and any configured alternatives within 1 second And subsequent Start Gate API queries for that patient‑protocol return status Paused until cleared
Start gate enforcement across sessions and devices
Given a patient‑protocol in Paused state When the patient scans the SnapCode or opens the protocol on any device Then the UI displays the safety message and Next Steps and Start Session is disabled And any start attempt is rejected by the server with HTTP 423 Locked and error START_PAUSED And the paused status persists across app restarts and across devices for the same account
Clinician notification and clearance with reason codes
Given a Red classification and Pause creation When the event is recorded Then the assigned clinician receives an in‑app notification immediately and an email within 1 minute containing patient, protocol, risk reason, and a dashboard link And when the clinician opens the case and selects Clear with a required reason code (and optional notes) Then the system records clinician id, timestamp, reason code, and notes in the audit trail and transitions state from Paused to Cleared And upon clearance, the patient receives a push notification within 1 minute and the next attempt sees Start Session enabled
Safe alternative recommendations on block
Given the protocol has configured safe alternatives for Red outcomes When a Red classification blocks start Then the patient UI displays up to 3 clinician‑configured alternatives with titles and brief instructions And selecting an alternative starts the alternative flow and does not grant access to the blocked protocol session And the selection event is logged with patient id, alternative id, and timestamp and is sent to the server within 500 ms And if no alternatives are configured, only the safety message and contact‑clinician Next Steps are shown
Audit trail for pause lifecycle and blocked attempts
Given any pause lifecycle event (Created, Cleared, Override Denied) When the event occurs Then an immutable audit record is written with patient id, protocol id, actor (system/clinician), action, previous state, new state, reason code (if any), timestamp (UTC), and request id And the record is retrievable via audit API and visible in the clinician dashboard And all start attempts during Paused state are logged with outcome Blocked and error START_PAUSED
Red‑Risk Clinician Alerts
"As a clinician, I want actionable alerts when a patient fails a safety screen so that I can quickly review and clear or modify their plan."
Description

Delivers immediate, HIPAA-compliant notifications to assigned clinicians when a red-risk screener response is submitted. Supports in-app, push, and email channels with secure deep links to the patient’s record and clearance action. Includes alert throttling, quiet hours, escalation rules if uncleared after a configurable window, and delivery/read receipts. Captures responses and actions for auditing and follow-up.

Acceptance Criteria
Immediate in-app alert on red-risk screener submission
Given a patient completes an Adaptive Safety screener and selects a red-risk answer When the response is submitted Then the assigned clinician(s) receive an in-app alert within 5 seconds of submission And the alert displays patient first name and last initial, protocol name, risk level, and timestamp And the alert contains a secure deep link that opens the patient's record at the clearance action screen after successful authentication And the alert is visible only to currently assigned clinician(s) with permission to the patient And the patient's session shows status "Paused — awaiting clinician clearance"
Push notification with secure deep link and HIPAA-safe payload
Given the clinician has push notifications enabled and an active device token And the clinician is not within quiet hours When a red-risk screener response is submitted for their assigned patient Then a push notification is sent within 5 seconds And the push payload contains no patient identifiers or PHI (generic title/body only) And tapping the notification launches MoveMate, enforces authentication if required, and navigates directly to the patient's clearance screen via deep link And the system records notification attempt, delivery acknowledgment from APNs/FCM, and open event if tapped
Email alert with masked content and expiring secure link
Given the clinician has a verified email and has opted into email alerts When a red-risk screener response is submitted Then an email alert is sent within 30 seconds And the email subject and body contain no PHI (e.g., "MoveMate red‑risk alert requires review") And the email includes a single-use deep link that expires in 15 minutes and requires authentication before showing patient data And SMTP acceptance (250) and any bounce codes are logged; open tracking is recorded only if the organization has enabled it, else stored as unknown
Alert throttling and quiet hours enforcement
Given multiple red-risk responses are submitted for the same patient within a 10-minute throttle window When alerts would be generated Then no more than one alert per clinician is sent during the window, and subsequent alerts are aggregated into the existing alert with an incremented count and latest timestamp And during clinician-defined quiet hours (per clinician time zone), push/email alerts are suppressed and queued, while the in-app inbox records the alert immediately And queued alerts are delivered automatically at the end of quiet hours with a summary of aggregated events And all throttling and quiet-hours decisions are logged with reason codes
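The 10-minute throttle window above can be sketched as follows: within the window, repeat red-risk submissions for the same patient aggregate into the open alert (incrementing its count and refreshing its latest timestamp) instead of generating a new one. The record shape is an illustrative assumption.

```python
THROTTLE_SECONDS = 10 * 60  # 10-minute throttle window per the criteria

def record_alert(open_alerts: dict, patient_id: str, now: float) -> dict:
    """Create a new alert, or aggregate into an open one inside the window."""
    existing = open_alerts.get(patient_id)
    if existing and now - existing["first_at"] < THROTTLE_SECONDS:
        existing["count"] += 1
        existing["latest_at"] = now  # keep the latest timestamp
        return existing
    alert = {"patient_id": patient_id, "first_at": now, "latest_at": now,
             "count": 1}
    open_alerts[patient_id] = alert
    return alert
```

Quiet-hours handling would sit on top of this: during quiet hours the in-app inbox still records the alert immediately, while push/email delivery of the aggregated record is deferred to the end of the window.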
SLA-based escalation to backup/on-call clinician
Given a red-risk alert remains uncleared for 15 minutes (configurable SLA) When the SLA elapses Then the system escalates by notifying the designated backup/on-call clinician(s) via in-app and available channels, respecting their quiet hours and throttling rules And escalation repeats up to 3 times with exponential backoff (e.g., 15m, 30m, 60m) until the alert is cleared or acknowledged And escalation stops immediately when any authorized clinician clears the alert or explicitly acknowledges responsibility And all escalation attempts, recipients, timestamps, and outcomes are recorded
Per-channel delivery and read receipts
Given a red-risk alert is generated When notifications are sent Then the system records per-channel timestamps for attempted, sent, delivered (APNs/FCM ack or SMTP 250), and read/open (in-app view, push open, email open if enabled) And the alert record shows the latest "delivered" and "read" statuses with the clinician and timestamp And failures are logged with provider error codes, and retries occur per channel policy (e.g., push 2 retries within 1 minute; email 1 retry within 5 minutes)
Immutable audit trail of responses and clinician actions
Given a red-risk screener response and subsequent clinician activity When the clinician views, comments, acknowledges, or clears the alert Then the system writes immutable audit entries capturing user ID, role, action, timestamp, device/app version, patient ID, screener ID, and prior state And the screener questions, selected answers, computed risk level, and clinician clearance notes are stored and linked to the alert And audit entries are filterable by patient, clinician, and date range, and exportable to CSV; retention follows the organization's policy and is non-deletable by end users And any edits create new versions with a complete version history; hard deletes are disabled for audit records
Screener Audit & Compliance Logging
"As a compliance officer, I want complete, immutable records of safety screenings and clearances so that we can demonstrate due diligence and meet regulatory requirements."
Description

Captures immutable logs for each screener event, including screener version, questions shown, patient responses, risk determination, timestamps, device/app version, clinician notifications sent, and clearance/override actions with user identity and reasons. Stores records with encryption in transit and at rest, adheres to HIPAA retention policies, supports role-based access, and enables export (CSV/PDF) for compliance reviews and incident investigations.

Acceptance Criteria
Immutable Screener Event Logging
Given a screener is triggered for a patient session When any screener event (displayed, answered, completed, aborted) occurs Then the system writes an append-only audit record within 3 seconds and returns a success code And the record includes a unique event_id and a cryptographic chain hash linking to the prior event for that session And any attempt to modify or delete an existing audit record is rejected and the attempt is itself logged with user identity, timestamp, and reason
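The cryptographic chain hash named above can be sketched as follows: each entry's hash covers the prior entry's hash plus the new record, so modifying any earlier record invalidates every later hash. The class and method names are illustrative assumptions, not MoveMate's actual schema.

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Hash of the prior entry's hash plus this record's canonical JSON."""
    payload = prev_hash + json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

class SessionAuditLog:
    """Append-only log; each entry carries a hash chained to its predecessor."""
    GENESIS = "0" * 64  # sentinel prev_hash for the first entry in a session

    def __init__(self):
        self.entries: list = []

    def append(self, record: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        entry = {"record": record, "prev_hash": prev,
                 "hash": chain_hash(prev, record)}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any tampered or reordered entry breaks it."""
        prev = self.GENESIS
        for e in self.entries:
            if e["prev_hash"] != prev or e["hash"] != chain_hash(prev, e["record"]):
                return False
            prev = e["hash"]
        return True
```

Rejected modification attempts would themselves be appended as new entries, keeping the chain intact.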
Complete Field Capture per Screener Event
Given a screener is shown to a patient When the event is logged Then the record contains: screener_id, screener_version, protocol_id (SnapCode), question_ids and question_text shown, patient responses (question_id, response_value, response_timestamp), risk_determination (level, rule_version, rule_id), timestamps (displayed_at, submitted_at, logged_at) in ISO-8601 UTC, device_model, os_version, app_version, session_id, clinician_id (if applicable), locale, and network_type And all required fields pass schema validation; optional fields are present as empty collections rather than null; timestamps are monotonic (displayed_at ≤ submitted_at ≤ logged_at) And records failing validation are not persisted and a structured error is emitted and audit-logged
Red-Risk Flagging and Start-Pause Auditability
Given a patient submits screener answers When the risk engine determines risk_level = red Then the audit record flags risk_level = red and start_status = paused with pause_reason and rule_id And clinician notifications are logged with channel, recipients, message_id, send_result, and notification_timestamp And exercise start is blocked until a clearance action record exists for the same session_id
Clearance/Override Action Traceability
Given a session is paused due to red risk When a clinician performs a clearance or override Then an audit record is created with action_type (clear/override/deny), user_id, user_role, auth_method, mfa_present (true/false), reason_code, reason_text, timestamp_utc, device_info, and scope (session_id/protocol_id) And the system requires an explicit confirmation step; the confirmation event is logged and linked via correlation_id And the action outcome is immutable and any subsequent reversal requires a new action record with a new reason
Encryption and Retention Policy Enforcement
Given audit data is transmitted or stored When data is in transit Then TLS 1.2+ is enforced; non-TLS requests are rejected and logged When data is at rest Then AES-256 encryption is used with managed keys and rotation at least every 365 days; encryption status is verifiable via automated checks And a configurable retention policy (default ≥ 6 years) prevents deletion before expiry; post-expiry purges create a purge record with purge_id, actor, timestamp, and record_count; legal holds suspend purges until released
Role-Based Access Control for Audit Logs
Given defined roles (Clinician, Clinic Admin, Compliance Officer, Support, Patient) When a user requests to view or export audit logs Then access is allowed only per RBAC rules: Clinicians may view logs for their assigned patients; Admin and Compliance may view/export across the clinic; Support may view metadata with PHI fields redacted; Patients have no access And unauthorized requests return 403 without disclosing record existence and are themselves audit-logged with user_id, ip_address, and timestamp And all successful access and exports require selection of an access purpose, which is stored alongside the access record
Export and Evidence Package Generation
Given a permitted user initiates an audit export for compliance or incident review When filters (date_range, patient_id, clinician_id, risk_level, event_type) are applied Then the system generates CSV and PDF outputs including all required fields and a header with export_id, generated_at_utc, record_count, and SHA-256 checksum of the content And exports up to 100,000 records are delivered within 60 seconds; larger exports are processed asynchronously with progress status and completion notification within 10 minutes And a “redacted” option excludes PHI-designated fields; CSV is UTF-8 with proper escaping; PDF renders question text/responses legibly with red-risk highlights
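The checksummed CSV output described above can be sketched with the standard library; the function name and column choices are illustrative assumptions. Hashing the exact bytes that are delivered lets a reviewer later confirm an evidence package was not altered.

```python
import csv
import hashlib
import io

def build_export(rows, fieldnames):
    """Render rows as UTF-8 CSV and return (content_bytes, sha256_hex).

    The checksum is computed over the exact bytes delivered, so any
    post-export modification is detectable by re-hashing.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)  # handles escaping/quoting
    writer.writeheader()
    writer.writerows(rows)
    content = buf.getvalue().encode("utf-8")
    return content, hashlib.sha256(content).hexdigest()
```

The export header row (export_id, generated_at_utc, record_count) would be emitted alongside this checksum in the real system.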
Low‑Friction Patient UX & Accessibility
"As a patient, I want quick, accessible safety questions that are easy to answer so that I can start safe exercises without frustration."
Description

Optimizes the micro-screener experience to be fast and accessible: sub-300 ms load time target, large touch targets, screen reader compatibility, high-contrast mode, and multi-language support. Displays clear progress (1–3), concise privacy copy, and error handling. Provides offline behavior: if a required screener cannot be retrieved, informs the patient and safely blocks start with guidance to contact their clinician; caches previously assigned screeners where allowed.

Acceptance Criteria
P95 Sub‑300ms Micro‑Screener Load
Given a patient scans a valid SnapCode and an assigned micro‑screener is available online When the micro‑screener view is requested Then the first question becomes interactive within 300 ms at P95 across supported devices over at least 200 sessions And the initial render success rate is >= 99.5% without hard errors And performance telemetry records time‑to‑interactive and load outcome for each session
Accessible Touch Targets
Rule: All tappable controls (answer options, Next, Back, Submit, Close) have a minimum hit area of 48x48 dp (Android) or 44x44 pt (iOS) Rule: Adjacent tappable elements maintain >= 8 dp/pt spacing Rule: No control experiences overlap or clipping in portrait or landscape on the smallest supported viewport (≥320 pt width) Rule: Touch interactions provide visible pressed/focus states and are keyboard/focus accessible
Screen Reader & High‑Contrast Accessibility Compliance
Given a screen reader (VoiceOver/TalkBack) is enabled When the micro‑screener loads Then initial focus lands on the screen title and announces the context And all interactive elements expose correct accessible name, role, and state And swipe/focus order matches visual order without traps And inline errors and progress changes are announced via an ARIA live region Given device high‑contrast/increase contrast is enabled or in‑app high‑contrast mode is on Then text contrast is >= 4.5:1 (normal) and >= 3:1 (large text/icons) And interactive states and required indicators are distinguishable without color alone
Multi‑Language Support & Localization
Given a patient preferred language L is set by the clinic or device When the micro‑screener loads Then all UI strings (questions, options, buttons, privacy, errors) appear in L And if a translation is missing, the string falls back to English and a language switcher is available And text expansion up to +30% causes no truncation/overlap on the smallest supported screen And RTL languages render mirrored layout and correct reading order And locale‑appropriate number/date formatting is applied where shown
Progress Indicator & Privacy Copy
Given a micro‑screener contains N questions where 1 ≤ N ≤ 3 When viewing question i Then a progress label displays “Step i of N” and updates immediately on navigation And the progress label is exposed with a meaningful, concise announcement to assistive technologies And a concise privacy line is displayed with a link to the full policy And opening the privacy link does not lose current screener state on return
Offline Retrieval & Caching with Safe Block
Given the required micro‑screener cannot be retrieved (offline, timeout, or 5xx) When the patient attempts to start the protocol Then the app displays a clear message that the screener is required and start is blocked until resolved And guidance to contact the clinician and a Retry action are provided And the Start action remains disabled until retrieval succeeds Given a previously assigned screener is cached and reuse is permitted by policy (same version, within cache TTL) When offline Then the cached screener loads and can be completed, with responses queued for sync when online And if cache is invalid, the safe block behavior is shown
Friendly Error Handling & Validation
Given a required question is unanswered When the patient taps Next or Submit Then an inline, human‑readable error appears adjacent to the field and is announced to assistive tech And Next/Submit remains disabled until the error is resolved Given a submission error occurs (network/API) When the patient retries Then answers remain preserved, a non‑technical message is shown, and duplicate submissions are prevented And all error events are logged with non‑PII diagnostics; no crashes occur during simulated failures
Safety Analytics & Tuning
"As a clinic lead, I want analytics on safety screeners and the ability to tune them so that we reduce unnecessary pauses while catching true risks."
Description

Offers a dashboard with metrics such as screener presentation rate, completion rate, red-risk rate by protocol, time-to-clear, and override frequency (proxy for false positives). Enables controlled tuning of risk thresholds and question wording via versions or A/B tests, with guardrails to maintain 1–3 question length. Provides exports and filters by clinician, protocol, and timeframe to inform continuous improvement without increasing patient friction.

Acceptance Criteria
View Safety Metrics by Protocol and Timeframe
Given a logged-in user with access to Safety Analytics and a selected protocol P and timeframe T When the dashboard loads or filters change Then the cards display Screener Presentation Rate, Completion Rate, Red-Risk Rate (by protocol), Time-to-Clear (median and P95), and Override Frequency And each metric shows numerator and denominator on hover and a last-refreshed timestamp And values are calculated server-side and rounded (percentages to 1 decimal; time in minutes to 1 decimal) And when the denominator equals 0 the metric shows "No data" state And all charts and tables reflect the selected protocol P and timeframe T
Filter by Clinician/Protocol/Timeframe and Export CSV
Given the user selects clinician C, protocol P, and timeframe T filters When Apply is clicked Then all metrics, charts, and tables update to reflect C, P, and T within 2 seconds at P95 for datasets up to 100k screeners And applied filters are displayed as removable chips And when Export CSV is clicked a file downloads within 5 seconds containing only rows within C, P, and T And the CSV includes a header row with export_time_utc, filters_applied, and schema_version And timestamps are ISO 8601 UTC and numeric fields are unformatted
Compute Red-Risk, Time-to-Clear, and Presentation/Completion Rates
Given screener and session events exist for protocol P within timeframe T When metrics are computed Then Screener Presentation Rate = screeners_shown / eligible_sessions within T And Completion Rate = screeners_submitted / screeners_shown within T And Red-Risk Rate = red_risk_screeners / screeners_submitted within T And Time-to-Clear is measured from red-risk timestamp to clinician clear or override event; unresolved flags are excluded from median/P95 and counted separately as "Unresolved" And Override Frequency = overrides / red_risk_flags within T And metrics are computed after excluding test data and soft-deleted records
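The rate formulas above can be expressed directly; this is a minimal sketch with assumed parameter names, returning `None` where the denominator is zero to back the "No data" state.

```python
def safety_metrics(shown, submitted, red, eligible, overrides):
    """Compute the dashboard rates; None signals the 'No data' state."""
    def rate(num, den):
        # Percentages rounded to 1 decimal, per the display rules above.
        return round(100 * num / den, 1) if den else None
    return {
        "presentation_rate_pct": rate(shown, eligible),     # screeners_shown / eligible_sessions
        "completion_rate_pct": rate(submitted, shown),      # screeners_submitted / screeners_shown
        "red_risk_rate_pct": rate(red, submitted),          # red_risk_screeners / screeners_submitted
        "override_frequency_pct": rate(overrides, red),     # overrides / red_risk_flags
    }
```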
Versioned Risk Threshold Tuning with Guardrails
Given an Admin opens the Risk Threshold Tuning page for protocol P When the Admin creates a new version Then version_name and change_summary are required fields And guardrails prevent configurations that produce more than 3 or fewer than 1 questions And publishing applies the new version only to new sessions; in-flight sessions remain on their current version And an immutable audit entry is recorded with user_id, timestamp_utc, protocol_id, from_version, to_version, and diff And the Admin can roll back to a prior version; rollback creates a new version that references the restored version
A/B Test Screener Wording Within 1–3 Questions
Given an Admin configures an A/B test for screener wording on protocol P When variants A and B are saved Then each variant is validated to contain 1–3 questions; saves are blocked if limits are exceeded And randomization assigns patients to A or B 50/50 by patient_id and remains stable for 30 days And variant-level metrics (presentation rate, completion rate, red-risk rate, median/P95 time-to-clear) display only after each arm has ≥50 submitted screeners within T And the Admin can stop the test and must select the default variant upon stop And all test lifecycle actions (create, start, stop, default select) are recorded in the audit log
Override Frequency Tracking and Trend Visualization
Given red-risk flags and clinician override events exist within timeframe T When the user views Override Frequency for protocol P Then the dashboard displays the current rate and a 30-day trend line by protocol And selecting a data point opens a case list showing patient_id, session_id, protocol_id, flag_time_utc, override_time_utc, and override_reason if provided And the case list can be filtered by clinician and exported to CSV with the displayed columns
Role-Based Access and Audit Logging for Tuning
Given role-based access is configured with Clinician and Admin roles When a Clinician uses Safety Analytics Then they can view metrics and export CSV but cannot create or modify versions or A/B tests And when an Admin publishes, rolls back, or edits thresholds or starts/stops an A/B test Then an audit record is captured with user_id, action, protocol_id, entity_id, diff, timestamp_utc, and reason And the Audit Log view allows filtering by user, protocol, action, and timeframe and supports CSV export

Kiosk FastPass

Front‑desk mode that generates appointment‑bound QR codes for same‑day use. Patients scan once to auto-load the correct program, while staff avoid manual entry and misassignment—ideal for busy small clinics.

Requirements

Secure Same‑Day Appointment QR Tokens
"As a front‑desk coordinator, I want to generate a secure, same‑day QR for each appointment so that patients can quickly access the right program without exposing PHI or risking misuse."
Description

Generate cryptographically signed, appointment‑bound QR codes that expire at end‑of‑day (configurable window) and are scoped to clinic, patient, appointment, and prescribed program. Tokens contain no plaintext PHI, support one‑time or limited multi‑use scans, can be revoked, and are validated server‑side to prevent reuse and tampering. Integrates with MoveMate’s scheduler (and connected EHR calendars) to pull the correct appointment/program mapping and enforces timebox rules (e.g., 30‑minute early/late grace). Includes rate limiting, replay protection, and environment‑specific signing keys to maintain HIPAA‑aligned security and reliability.

Acceptance Criteria
Generate Signed, PHI‑Free, Same‑Day QR Token at Front Desk
Given a front‑desk staff member at clinic C selects today’s appointment A for patient P And system config sets token expiration to the end of clinic day (or a specified override) When the staff generates a Kiosk FastPass QR token Then the token payload includes only opaque identifiers (clinic_id, appointment_id, program_id, jti) and standard claims (iat, nbf, exp, aud) And the token contains no plaintext PHI (e.g., no name, email, phone, DOB, MRN) And the token is signed with the environment‑specific private key and includes a kid header And the QR encodes only the signed token or a short URL to it, with no additional PHI in the QR data
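A compact signed token of this shape can be sketched with the standard library. This sketch signs with HMAC-SHA256 for brevity; the spec calls for environment-specific asymmetric keys, so treat the signing scheme, claim set, and function names here as illustrative assumptions only. Note the claims carry only opaque identifiers, never PHI.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in compact token serialization."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def issue_token(secret: bytes, kid: str, clinic_id: str, appointment_id: str,
                program_id: str, jti: str, ttl_seconds: int) -> str:
    """Signed header.claims.signature token carrying only opaque IDs, no PHI."""
    now = int(time.time())
    header = {"alg": "HS256", "kid": kid}  # kid supports key rotation
    claims = {"clinic_id": clinic_id, "appointment_id": appointment_id,
              "program_id": program_id, "jti": jti,
              "iat": now, "nbf": now, "exp": now + ttl_seconds, "aud": "kiosk"}
    signing_input = (f"{b64url(json.dumps(header).encode())}"
                     f".{b64url(json.dumps(claims).encode())}")
    sig = hmac.new(secret, signing_input.encode("ascii"), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def verify_token(secret: bytes, token: str) -> bool:
    """Server-side signature check; constant-time compare resists timing attacks."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode("ascii"), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)
```

Claim validation (aud, exp, jti replay checks) would follow the signature check, all server-side.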
Timebox and Expiration Enforcement (Same‑Day + 30‑Minute Grace)
Given a token bound to appointment time T in clinic local time And a grace window of 30 minutes early and 30 minutes late When the token is scanned between T−30m and T+30m and before its exp Then validation succeeds When the token is scanned outside that window or after the same‑day exp Then validation fails with error code outside_timebox or expired And no program is loaded And server applies a clock‑skew tolerance of up to 2 minutes
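The timebox rule above reduces to a single interval check; this sketch uses assumed parameter names and applies the 2-minute skew tolerance at both edges of the window.

```python
def within_timebox(scan_time: float, appointment_time: float, exp: float,
                   grace_seconds: float = 30 * 60,
                   skew_seconds: float = 120) -> bool:
    """Accept scans within +/- grace of the appointment, before exp,
    widening both edges by the allowed clock skew."""
    earliest = appointment_time - grace_seconds - skew_seconds
    # The late edge is capped by the token's own expiry, whichever is sooner.
    latest = min(appointment_time + grace_seconds, exp) + skew_seconds
    return earliest <= scan_time <= latest
```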
Server‑Side Validation, Tamper Detection, and Replay Protection
Given any presented token string When the signature does not validate against the active or grace‑period public keys for the environment Then the request is rejected with error invalid_signature and a 401 status When mandatory claims (aud, clinic_id, appointment_id, program_id, jti, iat, exp) are missing or malformed Then the request is rejected with error invalid_claims and a 400 status When a previously accepted jti is presented again beyond its allowed uses Then the request is rejected with error replay_detected and a 409 status And all validation occurs server‑side; the client is never trusted for acceptance decisions And failed validations are rate‑limited to a maximum of 5 invalid attempts per IP/device per minute; excess attempts receive 429 for 10 minutes
One‑Time vs Limited Multi‑Use Scan Controls
Given a token policy of one_time When the first valid scan occurs Then the token is marked consumed atomically and subsequent scans are rejected with error consumed Given a token policy of limited_multi_use with max_uses = N When scans occur within the validity window Then up to N scans are accepted and use_count is incremented atomically per scan; the (N+1)th scan is rejected with error use_limit_reached And counters persist across service restarts and are resilient to race conditions And the default policy for kiosk‑issued tokens is one_time unless explicitly overridden by clinic configuration
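The atomic consume-or-reject semantics above can be sketched with an in-process lock; in production the check-and-increment would be a single atomic operation against a shared store (e.g., a conditional database update), and the class name here is an assumption.

```python
import threading

class TokenUsage:
    """Thread-safe per-jti use counter enforcing one-time or limited multi-use."""

    def __init__(self):
        self._counts = {}
        self._lock = threading.Lock()

    def try_use(self, jti: str, max_uses: int = 1) -> bool:
        """Atomically consume one use; returns False once the limit is reached."""
        with self._lock:
            used = self._counts.get(jti, 0)
            if used >= max_uses:
                return False  # consumed / use_limit_reached
            self._counts[jti] = used + 1
            return True
```

Defaulting `max_uses` to 1 mirrors the one_time default for kiosk-issued tokens.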
Revocation by Staff and Immediate Propagation
Given an active token for appointment A When staff revokes the token from the Kiosk FastPass console or via EHR‑linked action Then any subsequent scan is rejected within 60 seconds with error revoked And the audit log records revoker identity, timestamp, token jti, appointment_id, and reason And revocation cannot be undone; a new token must be issued to restore access
Scheduler/EHR Mapping Loads Correct Program
Given appointment A in MoveMate’s scheduler (or connected EHR) is mapped to prescribed program G for patient P When a valid token for A is scanned Then the patient session auto‑loads program G under clinic C context without manual selection When multiple active programs exist for A Then the server selects the most recent active program per clinic rules and records the selection in the audit log When no program mapping exists or the mapping is inactive Then validation fails with error no_program and staff are notified via dashboard alert/webhook; no program is loaded
Environment and Clinic Scoping with Key Rotation
Given tokens include an aud/env claim and clinic_id claim When a token issued in environment E is presented to environment E′ ≠ E Then it is rejected with error wrong_environment When a token for clinic C is scanned at clinic C′ ≠ C Then it is rejected with error wrong_clinic And key rotation is supported: if kid references a previous key within the configured rotation grace window, validation passes; outside the window, validation fails with error key_not_active And no cross‑tenant data is exposed in any response body or error message
Front‑Desk Kiosk Mode Interface
"As a clinic receptionist, I want a simple kiosk screen to generate and manage QR passes so that I can move patients through check‑in quickly without manual data entry."
Description

Provide a dedicated, PIN‑protected front‑desk interface that lists today’s appointments, supports quick search, and enables one‑tap QR generation per appointment. The UI displays the QR on screen, offers print and digital send options, and shows token status (active/expired/revoked). Designed for tablets and desktops with large‑touch targets, idle timeout, and minimal PHI display. Supports multi‑location clinics, theming with clinic branding, and role‑based access so only authorized staff can issue or revoke codes. Integrates with the existing MoveMate staff dashboard and uses the same authentication/SSO flows.

Acceptance Criteria
PIN-Protected Kiosk Access and Session Control
Given I am an authenticated staff user via existing SSO And I have a configured 6-digit kiosk PIN When I launch Kiosk Mode from the staff dashboard Then I see a branded PIN lock screen with no PHI displayed And entering the correct PIN unlocks the kiosk within 1 second And after 120 seconds of inactivity the kiosk auto-locks and clears all on-screen data And after 5 consecutive incorrect PIN attempts the kiosk is locked for 5 minutes and requires SSO re-authentication And all lock/unlock and failed PIN attempts are audit-logged with timestamp and staff ID
Today’s Appointments List with Multi-Location Filter and Quick Search
Given the kiosk is unlocked When I open Today’s Appointments Then the list defaults to my assigned location for today’s date and loads within 2 seconds for up to 5,000 appointments And each entry shows only minimal PHI: FirstName + LastInitial, appointment time, provider initials, and location And no DOB, full last name, phone, or email is shown on the list view When I change the location filter Then the list updates to that location within 1 second When I search by patient name, appointment ID, or MRN (partial match, case-insensitive) Then results return within 500 ms for up to 5,000 records and “No results” is shown when none match And clearing search restores the full filtered list
One-Tap QR Generation per Appointment
Given an appointment is visible in the list and I have permission to issue codes When I tap Generate QR on that appointment Then a unique, appointment-bound token is created within 2 seconds And the token is valid only on the appointment date and expires automatically at 11:59 PM local time or upon manual revoke, whichever occurs first And there is at most one active token per appointment; issuing a new token automatically revokes any prior active token And an audit log entry is created with staff ID, appointment ID, token ID (hashed), and timestamp
On-Screen QR Display, Print, and Digital Send
Given a token has been generated for an appointment When the QR is displayed Then it renders at least 256x256 px with error-correction suitable for scanning at arm's length and passes contrast checks on the applied theme And a countdown or status label indicates Active until [expiry time] When I select Print Then the system print dialog opens with a template that includes clinic name/logo, QR code, and patient FirstName only, and excludes any additional PHI When I select Send via SMS or Email Then the dialog shows the verified contact(s) on file masked (e.g., ***-***-1234, j***@example.com) And sending succeeds or fails with an explicit status and retry option within 5 seconds And all send/print actions are audit-logged And a Hide QR action immediately clears the QR from the screen
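The masked-contact formats shown in the criterion above (***-***-1234, j***@example.com) can be produced with two small helpers; the function names are illustrative assumptions.

```python
def mask_phone(phone: str) -> str:
    """Keep only the last four digits: '555-867-1234' -> '***-***-1234'."""
    digits = [c for c in phone if c.isdigit()]
    return "***-***-" + "".join(digits[-4:])

def mask_email(email: str) -> str:
    """Keep the first character of the local part: 'jane@example.com' -> 'j***@example.com'."""
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}"
```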
Token Status Management: Active, Expired, Revoked
Given an appointment with a generated token Then the appointment row shows a status chip as Active, Expired, or Revoked When local time passes 11:59 PM on the appointment date Then the token auto-transitions to Expired and the status chip updates within 60 seconds without manual refresh When a staff member selects Revoke Then the token transitions to Revoked immediately, becomes unusable, and the action is audit-logged When a new token is issued for the same appointment Then any previous token is marked Revoked and only the latest token scans as valid
Role-Based Permissions for Issue and Revoke
Given I am signed in via SSO When my role is FrontDesk or Admin Then Issue QR and Revoke controls are enabled When my role is Clinician or any role without kiosk:issue scope Then Issue QR and Revoke controls are disabled with a tooltip explaining insufficient permissions And any attempt to invoke restricted actions via URL or API returns 403 and is audit-logged And roles and scopes are sourced from the existing MoveMate staff dashboard identity provider
Clinic Branding and Large-Touch UI Compliance
Given the clinic has provided a logo and brand colors in the staff dashboard When Kiosk Mode loads Then the header shows the clinic logo and primary color theme And all text meets WCAG AA contrast (≥4.5:1) and QR code contrast is not reduced by theming And primary interactive touch targets (buttons, rows, filters) are at least 48 logical pixels on the smallest supported device And visual focus states and pressed states are visible for all interactive elements And theming never introduces PHI beyond what the screen design allows
Scan‑to‑Program Auto‑Loader
"As a patient, I want to scan one code and land directly in my prescribed program so that I can start exercises immediately without navigating menus or logging in."
Description

Enable patients to scan the QR with their phone to deep‑link into MoveMate and auto‑load the exact exercise program for the bound appointment. If the app is not installed, open a responsive web experience with the same program. Perform lightweight verification (e.g., confirm initials and birth month) when risk signals are detected. Handle expired or revoked tokens gracefully with reissue prompts for staff. Support iOS Universal Links and Android App Links, camera permission handling, and a fallback short code entry. On success, prefetch media for the first exercises and present a one‑tap Start to reduce friction and increase adherence.

Acceptance Criteria
Installed App Deep Link Loads Correct Program
Given a valid same-day, appointment-bound QR token And MoveMate is installed on the device When the patient scans the QR and the OS resolves the Universal/App Link Then MoveMate opens directly to the Program Overview screen for the bound appointment And the displayed Program ID matches the token payload And exercise list order and counts match the clinician-assigned program And the patient identifier displayed matches the appointment patient And a One-Tap Start CTA is visible and enabled And media for the first 3 exercises begin prefetching within 1 second And time from scan to Program Overview is ≤ 2.5 seconds on a 4G connection And an analytics event "program_autoload_success" with token_id and platform is recorded
No App Installed – Web Experience Parity
Given a valid same-day, appointment-bound QR token And MoveMate is not installed on the device When the patient scans the QR Then a responsive web experience loads at the program route And exercise order, dosage, cues, and media match the native program exactly And a One-Tap Start CTA is visible and enabled And sign-in is not required to view or start the program And an "Install MoveMate" prompt is present but non-blocking And the session persists for at least 30 minutes of inactivity And time to first content render is ≤ 3 seconds on a 4G connection And an analytics event "program_web_autoload" with token_id and user_agent is recorded
Lightweight Patient Verification on Risk Signal
Given risk signals are detected for the QR token (e.g., token reuse on >1 device within 10 minutes, geo-distance >50 km from clinic, or staff-flagged high-risk) When the patient attempts to proceed to the Program Overview Then display a verification sheet requesting patient initials (2 letters) and birth month (MM or full month name) And inputs are case-insensitive and trimmed And the values must match the appointment record to proceed And allow a maximum of 3 attempts within 10 minutes And on 3 failed attempts, lock the token and display "Please see the front desk for a new code" And on success, continue to the Program Overview without re-prompting And log verification outcome and risk signal type without storing PII beyond session scope
Expired or Revoked Token Handling and Reissue Prompt
Given the QR token is expired (outside same-day window or TTL exceeded) or revoked by staff When the patient scans the QR Then do not display any program content And show a user-friendly message: "This code has expired. Please get a new code from the front desk." And present a short code entry field for a newly issued code And if a new valid short code is entered within 5 minutes, load the correct program as per platform (app or web) And if the short code is invalid or times out, remain on the message and allow retry And record an analytics event "token_invalid" with reason (expired|revoked|invalid)
QR Scan Fallback to Manual Short Code Entry
Given the patient cannot scan the QR (e.g., damaged code, camera unsupported, or OS does not recognize the link) When the patient navigates to the fallback short link and selects "Enter Code" Then the UI provides a 6-character short code input with auto-advance across fields And input is case-insensitive and accepts digits and uppercase letters only And on valid code submission, the correct appointment-bound program is loaded (app if installed, else web) And end-to-end resolution time from code submission to Program Overview is ≤ 5 seconds on 4G And after 5 invalid attempts within 5 minutes, rate-limit further attempts for 10 minutes with a clear message
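Normalizing short-code input (case-insensitive, digits and letters only, exactly 6 characters) can be sketched as a single validation step; the function name and separator handling are assumptions.

```python
import re
from typing import Optional

SHORT_CODE_RE = re.compile(r"^[A-Z0-9]{6}$")

def normalize_short_code(raw: str) -> Optional[str]:
    """Uppercase and strip whitespace/hyphens; None unless exactly 6 letters/digits remain."""
    cleaned = re.sub(r"[\s-]", "", raw).upper()
    return cleaned if SHORT_CODE_RE.fullmatch(cleaned) else None
```

A `None` result would drive the inline validation error; the rate limiter (5 invalid attempts per 5 minutes) would sit behind successful normalization.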
Camera Permission Handling Does Not Block Auto-Load
Given the program autoloads successfully When the patient taps "Start" to begin exercises requiring computer vision Then show an in-app rationale followed by the platform-native camera permission prompt And on grant, transition to the camera session within 1.5 seconds And on deny, return to the Program Overview with non-blocking guidance on enabling camera later And never request camera permission prior to the patient tapping "Start" And all permission outcomes are logged with no repeated prompts during the same session
Verified App/Universal Links With Automatic Fallback
Given the domain association files (apple-app-site-association and assetlinks.json) are published and correct When testing on iOS (15–18) and Android (11–14) across major browsers Then Universal Links/App Links open MoveMate directly without a chooser ≥95% of attempts in the test matrix And if link verification fails, the flow automatically falls back to the responsive web experience And the association files include current bundle IDs and SHA-256 fingerprints and are covered by automated link-health checks at least daily And a regression test verifies deep link routing to the Program Overview for both platforms on each release
Misassignment & Duplicate Prevention
"As a clinic manager, I want safeguards that prevent the wrong patient or time slot from using a QR so that programs aren’t misassigned and staff don’t have to fix errors later."
Description

Add server‑side validation to ensure the scanned token matches the intended clinic, patient, and appointment window, blocking mismatches and duplicates. Implement concurrency locks to prevent the same token from being used across multiple devices simultaneously, and surface clear, actionable error messages. Provide a controlled override flow for authorized admins to reassign a token if the appointment details changed, with reason capture and audit logging. This reduces erroneous program launches and clinician cleanup work.

Acceptance Criteria
Server-Side Token Validation Blocks Misassignment
Given a scanned FastPass token with token_id, clinic_id, patient_id, and appointment_id And the token is signed by the MoveMate server When the server validates the scan Then the program launches only if all IDs match the stored tuple and the signature is valid And no program launches if any ID mismatches, the token is revoked, or the signature is invalid And the response includes an error_code in {CLINIC_MISMATCH, PATIENT_MISMATCH, APPT_MISMATCH, TOKEN_INVALID_SIGNATURE, TOKEN_REVOKED} on failure And an audit entry is recorded with token_id, mismatch_type, request_device_id, and timestamp on every failure
Same-Day Appointment Window Enforcement
Given an appointment with date D in the clinic's timezone When a token bound to that appointment is scanned on date D Then access is permitted When the same token is scanned on any date ≠ D Then access is denied with error_code = APPT_OUT_OF_WINDOW And no session or assignment is created on denial And an audit entry records token_id, attempted_date, clinic_timezone, and outcome
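The same-day rule above hinges on evaluating "today" in the clinic's timezone, not the server's. A minimal sketch (function name and signature are illustrative):

```python
from datetime import date, datetime
from zoneinfo import ZoneInfo


def within_appointment_window(appointment_date: date, clinic_tz: str,
                              now_utc: datetime) -> bool:
    """Permit access only on the appointment's calendar date D,
    where D is interpreted in the clinic's timezone."""
    local_today = now_utc.astimezone(ZoneInfo(clinic_tz)).date()
    return local_today == appointment_date
```

A UTC timestamp late on the evening before (clinic-local) correctly denies access even though the UTC date already reads D.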
One-Time Redemption and Duplicate Scan Prevention
Given a token that has not been redeemed When the first program launch succeeds Then the token status is set to REDEEMED and linked to a session_id And subsequent scans of the same token return error_code = TOKEN_ALREADY_USED And no additional program launch, session, or assignment is created after redemption And all subsequent attempts are logged with token_id, previous_session_id, and timestamp
Cross-Device Concurrency Lock
Given a token that is not currently locked When Device A initiates validation Then the server acquires an atomic lock for token_id with TTL = 120 seconds And while the lock is held, any scan from Device B returns error_code = TOKEN_IN_USE and creates no session And when Device A ends the session or 120 seconds elapse without a heartbeat, the lock is released And lock acquisition and release events are audit-logged with token_id, device_id, and timestamps
Admin Override Reassignment with Reason and Audit
Given a token scan fails due to a tuple mismatch When an Admin with override permission and 2FA enabled initiates an override And provides a reason (minimum 10 characters) and selects the new clinic_id, patient_id, and appointment_id Then the token is rebound to the new tuple and the previous mismatch condition is resolved on the next scan And an immutable audit record captures admin_id, old_tuple, new_tuple, reason, timestamp, and originating_ip And the system records error_code = OVERRIDE_PERFORMED for the override event
Actionable Error Messaging and Structured Codes
Given any validation failure or lock condition When the server responds to the scanning client Then the response contains a human-readable message with cause and next step (e.g., regenerate FastPass or request admin override) And a machine-readable error_code mapped 1:1 to the failure reason is present And the message contains no PHI beyond first-name initial and appointment date And the client displays the message within 1 second and no program is launched on error
Multi‑Channel QR Delivery (Print, SMS, Email)
"As front‑desk staff, I want to print or send the QR in the patient’s preferred channel so that they can use it immediately regardless of device or connectivity constraints."
Description

Support multiple delivery paths for the generated QR: on‑screen display, thermal/standard printer output with high‑contrast codes, and digital delivery via SMS and email. Use existing communications providers with opt‑in consent, per‑region compliance settings, and rate limiting. Include branded templates, localization, and a shortened URL fallback that preserves token security. Show delivery status (sent, bounced, undelivered) to staff with simple retry actions, and store delivery metadata for troubleshooting.

Acceptance Criteria
On-Screen QR Display for Same-Day FastPass
- Given a staff user in Kiosk FastPass selects a same-day appointment When they click Generate QR Then the app renders a QR within 2 seconds, size ≥ 300x300 px, error correction level ≥ M, and passes built-in scan validation
- And the QR encodes a secure token bound to the selected appointment and clinic, with expiry at clinic-local 23:59 on the appointment date
- And a shortened HTTPS URL equivalent to the token is displayed with Copy action
- And no patient PII appears in the QR payload or URL
- And if token generation fails, an error banner is shown with a Retry control and no QR is displayed
High-Contrast Thermal and Standard Print Output
- Given a staff user has generated a QR for a same-day appointment When they choose Print Then the system offers profiles: Thermal 2" label and Standard A4/Letter
- And the printout includes: QR code of physical size ≥ 32 mm per side, black on white with contrast ratio ≥ 4.5:1, error correction ≥ M, localized instructions, clinic branding
- And the shortened URL is printed below the QR in 14 pt or larger
- And the layout localizes language and date format to patient preference or clinic default
- And if the selected printer is unavailable, the user is prompted to Save as PDF and the event is logged
SMS Delivery with Opt-In and Regional Compliance
- Given the patient has SMS contact on file and has explicitly opted in, and regional settings permit SMS for this purpose When staff selects Send via SMS for a generated QR Then an SMS is sent via the configured provider using the localized branded template containing only the shortened HTTPS URL and non-PHI copy
- And rate limits are enforced: ≤ 3 SMS per patient per 24 hours and ≤ 60 SMS per clinic per 5 minutes; attempts beyond limits are blocked with a clear message
- And the message includes the compliance-required footer (e.g., STOP/HELP) where mandated by region
- And delivery status updates within 10 seconds of provider callback to one of: Sent, Delivered, Undelivered, Bounced
- And metadata stored includes provider message ID, timestamps, channel, locale, template version, staff user ID, and status code
- And if opt-in is missing or the region disallows SMS, sending is blocked with a compliance error and no message is sent
Email Delivery with Templates and Localization
- Given the patient has a verified email and has explicitly opted in, and regional settings permit email for this purpose When staff selects Send via Email for a generated QR Then an email is sent via the configured provider using the localized branded template with clinic branding, no PHI, and the shortened HTTPS URL prominently placed
- And rate limits are enforced: ≤ 3 emails per patient per 24 hours and ≤ 300 emails per clinic per hour; attempts beyond limits are blocked with a clear message
- And delivery status updates as: Sent, Delivered, Bounced, or Undelivered, based on provider webhooks, within 60 seconds
- And metadata stored includes provider message ID, timestamps, channel, locale, template version, staff user ID, and SMTP/ESP status codes
- And if opt-in is missing or the region disallows email, sending is blocked with a compliance error and no email is sent
Secure Shortened URL Fallback for QR Tokens
- Given a QR token is generated When a shortened URL is created Then the URL is HTTPS, contains no PII, has path entropy ≥ 128 bits, and resolves to the same appointment-bound program as the QR
- And the token/URL expires at clinic-local 23:59 on the appointment date or upon manual revoke, after which the link shows an expiration page without revealing PHI
- And rate limiting on URL redemption prevents brute-force (e.g., ≥ 100 ms per attempt and IP-based throttling)
- And all accesses are logged with minimal data (timestamp, coarse IP, user agent) for security monitoring
- And the short domain supports localization where required (e.g., region-specific domains) without altering token security
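The 128-bit path-entropy requirement above is met directly by drawing 16 random bytes from the OS CSPRNG. A minimal sketch; the short domain shown is a placeholder, not a real MoveMate URL:

```python
import secrets


def make_short_url(base: str = "https://mm.example/r/") -> str:
    """Build an HTTPS short link whose path carries >= 128 bits of entropy.

    token_urlsafe(16) draws 16 random bytes (128 bits) from the OS CSPRNG
    and encodes them as ~22 URL-safe characters, with no PII in the path.
    """
    return base + secrets.token_urlsafe(16)
```

The server would store only a hash of the path token alongside the appointment binding and expiry, mirroring the QR token's lifecycle.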
Staff Delivery Status Panel with Retry Actions
- Given a staff user is viewing the appointment in Kiosk FastPass When they open Delivery Status Then per-channel entries display channel-appropriate states (On-Screen: Ready, Displayed, Expired; Print: Queued, Printed, Spool Failed; SMS/Email: Sent, Delivered, Bounced, Undelivered) with timestamp and channel icon
- And failed channels (Bounced/Undelivered/Spool Failed) present a Retry action that re-sends via the same channel or lets the user select an alternate channel
- And retries respect rate limits and record a new attempt with linkage to the prior attempt
- And status updates reflect provider callbacks within 5 seconds; manual Refresh updates immediately
- And only authorized staff roles can view delivery details; PII is minimized in the panel; a link to delivery metadata is available for troubleshooting
Audit Trails & Usage Analytics
"As a clinic owner, I want visibility into how often FastPass is used and where failures occur so that I can improve front‑desk throughput and patient adherence."
Description

Log end‑to‑end events for QR lifecycle (generation, delivery attempts, views, scans, validations, program loads, failures) with timestamps, staff IDs, and non‑sensitive device/network metadata. Expose an admin dashboard showing FastPass adoption, scan success rate, average check‑in time saved, and error breakdowns by reason. Support CSV export and secure API access for BI tools, apply retention policies, and trigger alerts on anomalous activity (e.g., repeated failures from one IP). Integrate with MoveMate’s existing analytics pipeline and ensure HIPAA‑aligned storage and access controls.

Acceptance Criteria
End-to-End QR Lifecycle Event Logging
Given a FastPass QR code is created for a same-day appointment When any lifecycle action occurs (generate, delivery_attempt, view, scan, validate, program_load, failure) Then an event is persisted with fields: event_id (UUID v4), event_type in {generate, delivery_attempt, view, scan, validate, program_load, failure}, qr_id, appointment_id, staff_user_id (nullable), timestamp_utc (ISO-8601, ms precision), device_metadata (device_model, os_version, app_version, network_type, truncated_ip_or_asn), correlation_id And PHI/PII fields (name, DOB, diagnosis, email/phone, full IP) are not stored in the event payload And 99.9% of lifecycle actions within a 30-day window have a corresponding event (measured by reconciliation job) And duplicate event writes are prevented (idempotency key = event_type+qr_id+timestamp_bucket) And p95 event write latency <= 200 ms and p99 <= 500 ms under 200 RPS And events are encrypted at rest (AES-256) and in transit (TLS 1.2+) And access is role-based (least privilege) and all reads/writes are audit-logged
Admin Dashboard KPIs for FastPass Adoption and Efficiency
Given an admin selects a date range (up to 90 days) and optional filters (clinic, location, staff) When the dashboard loads Then it displays:
- FastPass adoption rate = unique appointments with at least one successful scan / eligible appointments, with numerator/denominator counts
- Scan success rate = successful validations / total scans
- Average check-in time saved (minutes) = baseline_manual_avg - observed_fastpass_avg, with baseline source shown
- Error breakdown by reason (e.g., expired_qr, invalid_appointment, network_error, auth_failed) with counts and percent
And charts/tables respect filters and provide drill-down to event samples (<= 50 rows) And metrics reflect data freshness <= 5 minutes (timestamp displayed) And p95 dashboard query time <= 2.0 s And all dashboard accesses are logged with admin_user_id and filter parameters And no PHI is displayed; only aggregate and pseudonymous identifiers
CSV Export of Events and Metrics
Given an admin applies filters and selects Export CSV for Events or Metrics When export is requested Then a CSV file is generated containing only the selected dataset and filters with columns documented in the data dictionary And timestamps are ISO-8601 UTC; delimiters and quotes are RFC 4180 compliant; UTF-8 BOM not included And exports are limited to 100,000 rows per file; larger results are chunked and zipped And generation completes within 60 seconds for 100k rows p95 And the download link is single-use, expires in 24 hours, and requires re-authentication And PHI/PII is excluded; truncated_ip_or_asn is included instead of full IP And an export_audit event is created with admin_user_id, filter hash, row_count, and timestamp
Secure Analytics API for BI Tools
Given a BI client with an approved OAuth2 client-credentials grant and scope analytics.read When it calls GET /v1/analytics/events and GET /v1/analytics/metrics with clinic tenancy headers and filters Then responses are paginated (cursor), rate-limited to 600 requests/min per client, and only return the caller’s tenant data And all endpoints require TLS 1.2+; tokens are JWT with RS256 and 60-minute expiry; refresh via token endpoint And unauthorized requests return 401; insufficient scope/tenant return 403; throttled requests return 429 with Retry-After And response schemas are versioned (v1) and backward-compatible; ETag headers support caching And availability SLO >= 99.9% monthly; p95 response time <= 500 ms for pages <= 5k records And all API access is audit-logged (client_id, tenant_id, route, status, bytes_sent) and PHI is excluded
Data Retention and Purge Compliance
Given retention policies are configured When nightly retention jobs run Then raw event data older than 13 months is purged; aggregated metrics older than 36 months are purged And purge operations cascade to replicas and object storage; associated backups are pruned in the next backup cycle (<= 7 days) And legal holds can be applied per tenant to pause purge, with audit trail And a purge report is generated daily with counts deleted by table/partition and stored 90 days And sampled verification confirms <= 0.5% variance between expected and deleted counts And purged data is non-recoverable through standard access paths; access attempts are denied and logged
Anomaly Detection and Alerting
Given live event ingestion is operating When any of the following occurs:
- >= 5 failures with the same truncated_ip_or_asn or device_id within 10 minutes
- Scan success rate drops below 92% over a 15-minute rolling window with > 100 scans
- Average check-in time saved decreases by > 30% day-over-day
Then an alert is sent within 2 minutes to the configured channels (email and Slack) with tenant, timeframe, metric, thresholds, and top 5 contributing reasons And alerts are de-duplicated per rule for 15 minutes; acknowledgements pause repeats for 60 minutes And all alerts create an alert event with resolution status and timestamp And runbooks are linked in the alert payload
Analytics Pipeline Integration and Data Quality
Given the existing MoveMate analytics pipeline is available When FastPass events are published Then events are emitted to the canonical topic/stream with schema version fastpass.events.v1 and correlation_id enabling join to appointment context And producer retries with exponential backoff up to 5 attempts; on failure, events are queued locally and drained within 10 minutes of recovery And end-to-end p95 latency from event to dashboard availability <= 120 seconds; data completeness within 30 minutes >= 99.5% And nightly reconciliation compares source-of-truth counters vs pipeline aggregates; variance <= 0.5% or alerts raised And schema changes follow backward-compatible evolution with contract tests and data lineage updated

Ephemeral SnapCodes

Time‑boxed, single‑use codes with instant revoke and audit trails. Prevents code reuse and misroutes when links are forwarded, protecting privacy while maintaining the under‑60‑second onboarding flow.

Requirements

Cryptographically Secure SnapCode Generation
"As a clinician, I want to generate secure, single-use access codes quickly so that my patients can onboard without risking unauthorized access."
Description

Provide a service that creates high-entropy, single-use SnapCodes using a cryptographically secure random source. Codes must be short enough for SMS/email and human entry if needed, avoid visually ambiguous characters, and be URL-safe. Store only hashed representations server-side. Include environment scoping (dev/stage/prod isolation), configurable code length, and metadata (issuer, delivery channel, TTL) for validation. Integrate with MoveMate’s clinician workflow to generate a code in under two clicks and attach it to a magic link for the patient.

Acceptance Criteria
CSPRNG High-Entropy SnapCode Generation
Given the SnapCode generator is invoked with default settings When a new code is requested Then the generator MUST use a cryptographically secure RNG provided by the host platform (verified by dependency and code scan) And the default alphabet and length MUST yield ≥ 40 bits of entropy And p95 generation latency MUST be ≤ 200 ms under 50 RPS in staging And across 1,000,000 generated codes in staging, no duplicate active codes MUST be issued (enforced by a server-side uniqueness check that regenerates on collision, since chance birthday collisions are expected in a 40-bit space at this volume; verified with a collision test)
Human-Friendly, URL-Safe SnapCode Format
Rule: Codes MUST consist only of URL-safe, non-ambiguous characters (Crockford Base32 without I, L, O, U) Rule: Redemption MUST be case-insensitive; storage/validation normalizes to uppercase Rule: Default length is 8; configurable length MUST be enforceable between 6 and 12 characters Rule: Codes MUST contain no whitespace or characters requiring URL encoding Rule: The magic link with embedded code MUST remain ≤ 200 characters total (using default domain)
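The format rules above, generation from the OS CSPRNG plus case-insensitive normalization, can be sketched as follows. Function names are illustrative; the confusable-glyph mapping (I, L → 1 and O → 0) follows the usual Crockford Base32 decoding convention:

```python
import secrets

# Crockford Base32: digits plus uppercase letters, excluding I, L, O, U.
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"   # 32 symbols


def generate_snapcode(length: int = 8) -> str:
    """Each character drawn from the OS CSPRNG; at the default length,
    8 * log2(32) = 40 bits of entropy."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))


def normalize(code: str) -> str:
    """Case-insensitive redemption: uppercase, then map the commonly
    confused glyphs onto their canonical digits before validation."""
    return code.strip().upper().translate(str.maketrans("ILO", "110"))
```

All 32 symbols are URL-safe, so the code embeds directly in a magic link without URL encoding.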
Single-Use Validation and Instant Revoke
Given a valid, unused code within TTL When the code is redeemed Then redemption MUST succeed once and atomically mark the code as consumed And any subsequent redemption attempts MUST fail with 410 Gone (AlreadyRedeemed) And a race test with 10 concurrent redemption attempts MUST result in exactly 1 success and 9 failures Given a valid, unused code When a clinician revokes it Then revocation MUST take effect globally within ≤ 1 second and further attempts MUST fail with 410 Gone (Revoked) And all outcomes (issued, redeemed, revoked, expired) MUST be recorded in the audit trail with timestamp and actor
Hashed Server-Side Storage with Metadata
Rule: Plaintext codes MUST never be persisted or logged; logs MUST redact code values Rule: Only a hashed representation (HMAC-SHA256 using a KMS-managed secret) MAY be stored; verification MUST use constant-time comparison Rule: Stored metadata MUST include issuer_user_id, delivery_channel (sms|email|qr), created_at, ttl_expires_at, environment, and status Rule: Access control MUST prevent any user from retrieving plaintext codes; only metadata is retrievable per authorization policy Rule: Rotating the HMAC secret MUST not expose historical plaintext codes; existing hashes remain verifiable via key ring until retired
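The hashed-storage and constant-time-verification rules above map onto Python's `hmac` module directly. A minimal sketch; in production the secret would come from a KMS-managed key ring rather than a literal:

```python
import hashlib
import hmac


def hash_code(code: str, secret: bytes) -> str:
    """Digest stored server-side; the plaintext SnapCode is never persisted."""
    return hmac.new(secret, code.encode(), hashlib.sha256).hexdigest()


def verify_code(candidate: str, stored_digest: str, secret: bytes) -> bool:
    """Constant-time comparison avoids timing side channels."""
    return hmac.compare_digest(hash_code(candidate, secret), stored_digest)
```

For key rotation, verification would try each active key in the ring (newest first) so that historical hashes remain checkable until their key is retired.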
Environment-Scoped Codes (dev/stage/prod Isolation)
Given a code generated in environment=staging When redemption is attempted in environment=production Then the attempt MUST fail with 404 Not Found (EnvironmentMismatch) and no side effects Rule: Each environment MUST use distinct secrets and storage namespaces to prevent cross-environment lookup Rule: Audit events MUST include the environment field for issuance and redemption
Configurable Code Length and Entropy Policy Enforcement
Given an admin sets code_length to N via configuration/API When N ∈ [6,12] and the resulting entropy satisfies the rule below Then the setting MUST be accepted and applied to subsequent generations When N is outside [6,12] Then the request MUST be rejected with 400 Bad Request and a validation message Rule: Effective entropy (N × log2(alphabet_size) bits) MUST be ≥ 40 bits; configurations that fall below this floor MUST be rejected even when N is within range
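The entropy floor can be computed directly from the alphabet size. A minimal sketch, assuming the 32-symbol alphabet; note that 5 bits per character means only lengths of 8 or more clear the 40-bit floor:

```python
import math

MIN_ENTROPY_BITS = 40


def validate_code_length(alphabet_size: int, n: int) -> None:
    """Reject configurations outside [6, 12] or below the entropy floor."""
    if not 6 <= n <= 12:
        raise ValueError("code_length must be between 6 and 12")
    bits = n * math.log2(alphabet_size)   # effective entropy in bits
    if bits < MIN_ENTROPY_BITS:
        raise ValueError(
            f"{bits:.0f} bits of entropy is below the {MIN_ENTROPY_BITS}-bit floor")
```

With a 32-symbol alphabet, N = 7 yields 35 bits and is rejected by the entropy rule even though it sits inside the raw [6, 12] range.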
Clinician Workflow: Generate and Attach SnapCode in <2 Clicks
Given a clinician is on a patient profile or Invite Patient modal When the clinician initiates SnapCode generation Then the code and magic link MUST be produced and attached in ≤ 2 clicks total And p95 time from initial click to link copy-available MUST be ≤ 1.0 second in staging And the flow MUST be fully operable via keyboard only and include accessible labels for screen readers And the generated link MUST include the SnapCode and environment parameters
Time-boxed Expiration and Policy Controls
"As a clinic admin, I want to set how long SnapCodes stay valid so that we balance patient convenience with security requirements."
Description

Implement server-enforced TTLs for SnapCodes with default and clinic-level configurable durations. Expired codes must be rejected with clear UX, and countdown indicators should be shown on patient and clinician views. Support clock-skew tolerance, grace periods configurable by admin, and automatic cleanup of expired records. Expose policy settings in admin UI and via API, and surface warnings when policies may impact the under-60-second onboarding goal.

Acceptance Criteria
Default and Clinic-Level TTL Enforcement
Given the system default SnapCode TTL is 5 minutes And Clinic A has no custom TTL policy When a clinician at Clinic A generates a SnapCode at T0 Then the server sets expires_at = T0 + 5 minutes And any redemption after expires_at is rejected And Clinic B sets a custom TTL of 2 minutes When a clinician at Clinic B generates a SnapCode at T1 Then the server sets expires_at = T1 + 2 minutes
Expired Code Rejection and Clear UX
Given a SnapCode is expired (server time > expires_at + configured grace) When a patient attempts to redeem it Then the server responds with error_code = 'SNAPCODE_EXPIRED' And the patient UI shows 'Your SnapCode expired. Request a new code.' with a 'Request New Code' button And the clinician dashboard shows the invitation status = 'Expired' within 2 seconds of the failed attempt And an audit event 'snapcode.expired_redeem_attempt' is recorded with code_id, user_or_device_id, timestamp, and client_app_version
Countdown Indicators on Patient and Clinician Views
Given a SnapCode issued at T0 with TTL = 2 minutes When the patient opens the app before redemption Then a countdown is displayed in mm:ss starting at 2:00, updating every 1 second And when server time reaches expires_at Then the label switches to 'Expired' within 1 second and the redeem action is disabled And the clinician dashboard displays the same remaining time with discrepancy <= 2 seconds versus the patient view
Server Grace Period After Expiration
Given the grace period policy is set to 15 seconds And a SnapCode with expires_at = Texp When a redemption request arrives at server time Texp + 10 seconds Then the server accepts the redemption and marks redeemed_in_grace = true And when a redemption request arrives at server time Texp + 16 seconds Then the server rejects the redemption with error_code = 'SNAPCODE_EXPIRED'
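The TTL-plus-grace boundary above is easy to get off by one; a minimal sketch of the classification (function and state names are illustrative):

```python
from datetime import datetime, timedelta, timezone


def redemption_state(now: datetime, expires_at: datetime,
                     grace_seconds: int = 15) -> str:
    """Classify a redemption attempt against TTL plus the configured grace."""
    if now <= expires_at:
        return "valid"
    if now <= expires_at + timedelta(seconds=grace_seconds):
        return "redeemed_in_grace"       # accepted, flagged for audit
    return "SNAPCODE_EXPIRED"            # rejected with error_code
```

All comparisons use server time; client clocks only affect the displayed countdown, never the accept/reject decision.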
Clock-Skew Tolerance in Time Displays
Given the clock-skew tolerance policy is set to 30 seconds And the patient device clock is fast by 25 seconds When the app displays the SnapCode countdown Then it uses server-synchronized time and shows remaining time accurate to ±2 seconds relative to server time And the patient is not blocked from redemption before server time reaches expires_at
Automatic Cleanup of Expired Records with Audit Retention
Given the expiration retention window is set to 24 hours And the cleanup job runs every 10 minutes And a SnapCode expired more than 24 hours ago and is not redeemed When the cleanup job executes Then the SnapCode record is deleted from the active table And its immutable audit log entries remain accessible for compliance And API/list views no longer return the deleted SnapCode, while audit APIs still return its events
Admin and API Policy Configuration with Warnings
Given an admin with permission 'Clinic Policy: Edit' opens the SnapCode policy settings When they set TTL to 45 seconds and attempt to save Then the UI shows a warning 'TTL below 60s may impact under‑60‑second onboarding' and requires acknowledgement to proceed When they acknowledge and save Then the new TTL, grace_seconds, and clock_skew_seconds are persisted and versioned, and an audit event 'policy.snapcode.updated' is recorded And when querying GET /api/v1/policies/snapcode Then the response includes ttl_seconds, grace_seconds, clock_skew_seconds, and warning flag risk_under_60_onboarding = true when ttl_seconds < 60 And a non-admin receives 403 Forbidden when attempting to update via API
Single-use Redemption with Race-safe Idempotency
"As a patient, I want my SnapCode link to work instantly and only once so that I can start my therapy without confusion or errors."
Description

Create a redemption endpoint that atomically validates TTL, channel binding, and unused state, then consumes the SnapCode exactly once. Handle concurrent redemption attempts with transactional guarantees and idempotency keys. On success, deep link the patient into MoveMate’s onboarding with pre-filled context; on failure (used/expired/invalid), show precise, localized messages and next steps. Log all outcomes and map the first successful redemption to the intended patient record.

Acceptance Criteria
Happy Path: Single-use Redemption and Deep Link
Given a valid, unexpired SnapCode bound to channel=SMS and intendedPatientId=P123 with unused state And a unique X-Idempotency-Key not seen in the last 24h When POST /v1/snapcodes/redeem is called with {code, channel=SMS, locale=es-ES} within the code TTL Then the service atomically validates TTL, channel binding, and unused state and marks the code consumed in a single transaction And returns HTTP 200 with a deep link that opens MoveMate onboarding and includes pre-filled context: clinicId, therapistId, intendedPatientId, exercisePlanId, locale=es-ES And the first successful redemption is mapped to patient P123 And an audit event is recorded with outcome=success, codeId, requestId, idempotencyKey, patientId, timestamp, channel, locale And the end-to-end response time is <= 400 ms at p95 under nominal load
Expired Code Handling
Given a SnapCode whose TTL has elapsed When POST /v1/snapcodes/redeem is called with that code Then the service returns HTTP 410 Gone And the response body includes i18n key error.codeExpired with localized text for the requested locale or en-US fallback And includes a nextStep of type requestNewCode And the code remains unused and unmapped to any patient And an audit event is recorded with outcome=expired and reason=ttl_expired
Already-Used Code Handling
Given a SnapCode already marked as consumed When a redemption request is received for that code Then the service returns HTTP 409 Conflict And the response includes i18n key error.codeAlreadyUsed and nextStep contactClinic And the consumed state and existing patient mapping remain unchanged And an audit event is recorded with outcome=already_used and reason=consumed_at_exists
Concurrent Redemption Race Safety
Given two or more redemption requests for the same SnapCode arrive within 100 ms When the system processes these requests Then exactly one request commits the consume action and returns HTTP 200 with the deep link And all other requests deterministically return HTTP 409 Conflict with error.codeAlreadyUsed And there is exactly one audit event with outcome=success and N-1 audit events with outcome=already_used_race And the patient mapping exists exactly once and references the intended patient And no duplicate deep links or duplicate side effects are produced
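The exactly-one-winner guarantee above reduces to an atomic compare-and-set on the code's state. This in-memory sketch uses a mutex to illustrate the semantics; a real deployment would rely on a transactional `UPDATE ... WHERE status = 'unused'` (or equivalent) so the guarantee holds across app servers. Names are illustrative:

```python
import threading


class SnapCodeStore:
    """Atomic consume: of N concurrent redeemers, exactly one succeeds."""

    def __init__(self):
        self._status = {}            # code_id -> "unused" | "consumed"
        self._mutex = threading.Lock()

    def issue(self, code_id: str) -> None:
        self._status[code_id] = "unused"

    def consume(self, code_id: str) -> bool:
        with self._mutex:            # check-and-set as one atomic step
            if self._status.get(code_id) != "unused":
                return False         # maps to 409 Conflict (already used)
            self._status[code_id] = "consumed"
            return True
```

The single winner proceeds to mint the deep link and patient mapping; every loser takes the already-used path and emits its own audit event.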
Idempotency Key Semantics
Given a redemption request with X-Idempotency-Key=K and payload P When the same request is retried within 24h with the same key and identical payload Then the service returns the original status and body with header Idempotency-Cache: hit and performs no additional side effects And audit records exactly one side-effect success tied to K, with any subsequent attempts marked idempotent_replay When the same key K is reused with a different payload Then the service returns HTTP 422 Unprocessable Entity with error.idempotencyKeyMismatch and performs no side effects
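The key semantics above, replay the cached response for an identical retry, reject key reuse with a different payload, can be sketched with a fingerprinted cache. Names and response shapes are illustrative, not the real API:

```python
class IdempotencyCache:
    """Replay a prior response for a repeated key; 422 on payload mismatch."""

    def __init__(self):
        self._seen = {}              # key -> (payload_fingerprint, response)

    def handle(self, key: str, payload: dict, process):
        fingerprint = repr(sorted(payload.items()))
        if key in self._seen:
            stored_fp, response = self._seen[key]
            if stored_fp != fingerprint:
                # Same key, different payload: refuse, no side effects.
                return {"status": 422, "error": "idempotencyKeyMismatch"}
            # Identical retry: replay the cached body, no side effects.
            return dict(response, idempotency_cache="hit")
        response = process(payload)  # the one side-effecting call
        self._seen[key] = (fingerprint, response)
        return response
```

Production entries would also carry a 24-hour TTL and live in shared storage so retries land correctly on any app server.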
Channel Binding Enforcement
Given a SnapCode bound to channel=Email for address A When redemption is attempted via channel=SMS or via Email for a different address B Then the service returns HTTP 403 Forbidden with error.channelBindingFailed (localized) and does not consume the code And an audit event is recorded with outcome=channel_mismatch including attemptedChannel and attemptedAddress (redacted/hashed) When redemption is attempted via the correct bound channel/address Then processing proceeds per the happy path criteria
Audit Trail Completeness and Privacy
Given any redemption attempt (success or failure) When the request completes Then an immutable audit entry is stored within 200 ms containing: codeId, outcome, reason, requestId, idempotencyKey, channel, locale, timestamp (UTC ISO-8601), clientIpHash, userAgent, patientId (on success), actor=patient And audit entries redact PII (no raw phone/email), are write-once, and retained for 365 days And authorized admins can retrieve audit entries by codeId or requestId and filter by outcome with p95 query latency <= 1 s
Instant Revoke Controls (API and Clinician UI)
"As a clinician, I want to revoke a SnapCode immediately if it was sent to the wrong person so that I prevent unauthorized access to patient data."
Description

Provide a clinician-facing control and secure API to revoke any active SnapCode immediately. Revocation must propagate in real time across caches and edge nodes, invalidating links within seconds. Display revocation status, timestamp, and issuer, and require a reason code for auditability. Support bulk revoke for a patient or campaign and permission checks aligned with clinic roles.

Acceptance Criteria
Single SnapCode Instant Revoke via Clinician UI
Given a clinician with revoke_snapcode permission views an active SnapCode in the patient record, When they click Revoke, select a mandatory reason code from the configured list, and confirm, Then the SnapCode status updates to Revoked in the UI within 1 second, shows revoked_at (UTC), issuer (name and ID), and reason_code, and the action cannot be undone. Given the same SnapCode link is opened on any device or browser, When accessed after revocation, Then onboarding is blocked with a revoked state and the API returns HTTP 410 Gone with error_code=SNAPCODE_REVOKED. Given concurrent revoke attempts against the same SnapCode, When multiple requests are submitted within 1 second, Then the operation is idempotent with a single final state=revoked and a single audit entry (duplicates deduplicated).
Bulk Revoke for Patient or Campaign
Given a user with bulk_revoke permission selects a patient and chooses Bulk Revoke, When they confirm with a mandatory reason code, Then all active SnapCodes for that patient are revoked and the UI shows success_count, failure_count, and a downloadable CSV of failures with error reasons. Given a user selects a campaign and chooses Bulk Revoke, When executed, Then 95% of up to 1,000 active codes are revoked within 30 seconds and 99% within 60 seconds, retries are applied for transient failures, and expired codes remain unchanged. Given a bulk revoke job is running, When the user navigates away and returns, Then job progress (percent, counts) and final results are persisted and visible, and each affected code’s audit record includes job_id and scope (patient or campaign).
API Revoke Endpoint Security and Idempotency
Given an authorized client with scope revoke:snapcode calls POST /v1/snapcodes/{id}/revoke with JSON body {reason_code}, When the SnapCode is active, Then the API returns 200 with {status:"revoked", revoked_at, issuer, reason_code, request_id} and a subsequent GET shows status=revoked.
Given the same revoke call is retried, When the Idempotency-Key header is reused or the code is already revoked, Then the API returns 200 with idempotent=true and no duplicate audit entries are created.
Given a missing reason_code or malformed payload, Then the API returns 400 with validation details; given insufficient permission, Then 403; given a code not found in the tenant, Then 404; given an expired code, Then 200 with {status:"expired"} and an audit record type=revocation_attempt without state change.
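The endpoint behavior above amounts to a small state machine. A minimal sketch follows; the in-memory store, function names, and response shapes are illustrative stand-ins for a real database and web framework, not a prescribed implementation:

```python
from datetime import datetime, timezone

class SnapCodeStore:
    """Illustrative in-memory store; a real deployment would use a database
    with a unique constraint on (code_id, idempotency_key)."""
    def __init__(self):
        self.codes = {}          # code_id -> {"status": ..., "revoked_at": ..., ...}
        self.audit = []          # append-only audit entries
        self.seen_keys = set()   # idempotency keys already processed

def revoke(store, code_id, reason_code, issuer, idempotency_key=None):
    """Revoke an active SnapCode; idempotent on retries and on already-revoked codes."""
    code = store.codes.get(code_id)
    if code is None:
        return 404, {"error_code": "NOT_FOUND"}
    if not reason_code:
        return 400, {"error_code": "REASON_CODE_REQUIRED"}
    # Retried request or already-revoked code: succeed without a second audit entry.
    if (idempotency_key and idempotency_key in store.seen_keys) or code["status"] == "revoked":
        return 200, {"status": code["status"], "idempotent": True}
    if code["status"] == "expired":
        store.audit.append({"type": "revocation_attempt", "code_id": code_id})
        return 200, {"status": "expired"}
    code["status"] = "revoked"
    code["revoked_at"] = datetime.now(timezone.utc).isoformat()
    code["reason_code"] = reason_code
    if idempotency_key:
        store.seen_keys.add(idempotency_key)
    store.audit.append({"type": "revoked", "code_id": code_id,
                        "issuer": issuer, "reason_code": reason_code})
    return 200, {"status": "revoked", "revoked_at": code["revoked_at"],
                 "issuer": issuer, "reason_code": reason_code}
```

The key property is that a retried call (same idempotency key, or a code already in the revoked state) returns success without writing a second audit entry.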
Real-Time Revocation Propagation Across Edge and Caches
Given a SnapCode is revoked, When probed from at least 10 geographically distributed edge locations, Then the 95th percentile time to rejection is ≤ 5 seconds and the 99th percentile is ≤ 10 seconds after the revoke command, with zero successful redemptions after rejection starts.
Given CDN or application caches, When revocation occurs, Then related cache entries are purged or bypassed and no stale acceptance occurs beyond 10 seconds.
Given an offline client has preloaded a SnapCode screen, When it regains connectivity, Then it revalidates on resume and blocks within 1 second if the code is revoked.
Revocation Status and Metadata Display in Clinician UI
Given a clinician views the SnapCodes list, When a code is revoked, Then the list shows a Revoked badge, revoked_at (clinic timezone), issuer, and reason_code; filter and sort by status and revoked_at are available and correct.
Given the code detail panel is open during revocation, When the revoke completes, Then status and metadata update in real time without a full page refresh.
Given the user exports the codes table, Then the CSV contains code_id, status, revoked_at (UTC), issuer_id, issuer_name, reason_code, and scope (patient/campaign) with values matching the UI.
Audit Trail Completeness and Integrity
Given any revoke performed via UI or API, When the action completes, Then an immutable audit record is written with fields: tenant_id, patient_id (if applicable), campaign_id (if applicable), code_id, previous_status, new_status, issuer_id, issuer_role, source (ui/api), reason_code, revoked_at (UTC), request_id, actor_ip, and idempotency_key (if provided).
Given an admin queries the audit trail with filters (date range, issuer, patient, campaign, reason_code), Then results for up to 10,000 records return within 2 seconds and can be exported to CSV.
Given a retention policy of at least 7 years, When audit records are stored, Then they are tamper-evident (e.g., hash-chained or WORM) and read operations indicate verification_status=true.
Role-Based Permission Checks and Tenant Isolation
Given a user without revoke_snapcode permission attempts to revoke via UI or API, Then the action is blocked (UI controls disabled or an error shown), the API returns 403 with error_code=INSUFFICIENT_PERMISSIONS, and a denied audit entry is recorded.
Given a user from Clinic A attempts to revoke a SnapCode belonging to Clinic B, Then the API returns 404 and the UI does not expose the code, preventing information leakage across tenants.
Given a user’s role changes, When permissions are updated, Then revoke controls appear/disappear within 60 seconds and enforcement applies immediately on both UI and API.
Audit Trail and Compliance Logging
"As a compliance officer, I want a complete audit trail of SnapCode activity so that we can satisfy regulatory reviews and investigate incidents quickly."
Description

Record immutable events for SnapCode lifecycle actions: generation, delivery channel, redemption attempts (success/failure reasons), revocations, expirations, and policy changes. Include actor, patient context linkage, timestamps, and request metadata. Store logs in append-only storage with retention controls, provide search and export (CSV/JSON), and integrate with alerting for anomalous activity spikes. Ensure logs avoid storing the raw code or PHI beyond necessary references.

Acceptance Criteria
Lifecycle Event Logging Completeness
- On SnapCode generation, an event with fields {event_type=code_generated, snapcode_ref, patient_ref, actor_id, actor_type, timestamp (ISO-8601 UTC), request_id, source_ip, user_agent} is appended.
- On SnapCode delivery (SMS/email/QR), an event with fields {event_type=code_delivered, channel, provider_message_id, snapcode_ref, patient_ref, actor_id, actor_type, timestamp, request_id} is appended.
- On a redemption attempt, an event with fields {event_type=code_redeem_attempt, outcome=success|failure, failure_reason enum if failure, snapcode_ref, patient_ref (if known), actor_id, actor_type, timestamp, request_id, source_ip, user_agent} is appended.
- On revocation, an event with fields {event_type=code_revoked, reason enum, snapcode_ref, patient_ref, actor_id, actor_type, timestamp, request_id} is appended.
- On expiration, an event with fields {event_type=code_expired, snapcode_ref, patient_ref, ttl_seconds, timestamp} is appended.
- On a policy change (e.g., retention_days, code_ttl, alert_thresholds), an event with fields {event_type=policy_changed, setting_name, old_value, new_value, actor_id, actor_type, timestamp, request_id} is appended.
- Every event has a unique event_id and becomes queryable within 5 seconds of the triggering action; writes are durable before API success is returned.
Immutable Append-Only Storage and Integrity
- Log datastore enforces append-only semantics: create is allowed; update and delete operations are rejected prior to retention expiry.
- Any attempt to mutate or delete a log record returns 403 and creates {event_type=log_mutation_blocked, target_event_id, actor_id, actor_type, timestamp, reason}.
- Each log partition maintains a monotonically increasing sequence_number; sequence gaps are prevented and monitored.
- Each event stores content_hash and previous_hash (per partition); periodic integrity verification reports zero hash-chain breaks in a healthy state.
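One way to satisfy the hash-chain criterion is a per-partition chain of SHA-256 digests over canonically serialized events. This is a minimal sketch under the field names given above (sequence_number, previous_hash, content_hash); the production datastore would enforce the same invariants server-side:

```python
import hashlib
import json

def _digest(record):
    """Canonical SHA-256 over all fields except the hash itself."""
    body = {k: v for k, v in record.items() if k != "content_hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_event(log, event):
    """Append an event to a hash-chained, append-only log partition."""
    previous_hash = log[-1]["content_hash"] if log else "0" * 64
    record = dict(event, sequence_number=len(log), previous_hash=previous_hash)
    record["content_hash"] = _digest(record)
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every link; True only if no record was altered, reordered, or removed."""
    previous_hash = "0" * 64
    for i, record in enumerate(log):
        if record["sequence_number"] != i or record["previous_hash"] != previous_hash:
            return False
        if record["content_hash"] != _digest(record):
            return False
        previous_hash = record["content_hash"]
    return True
```

Because each record commits to its predecessor's digest, tampering with any stored event (or deleting one mid-chain) breaks verification for that record and every record after it.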
Sensitive Data Minimization in Logs
- Raw SnapCode values are never stored; only snapcode_ref (opaque UUID/token_id) is present in events.
- No PHI is stored beyond necessary references: only patient_ref (opaque ID) is logged; fields such as name, DOB, diagnosis, exercise details, and message bodies are excluded.
- Delivery metadata excludes full contact details; channel and provider_message_id are logged, but phone numbers/emails are not persisted.
- Event ingestion validates against an allowlist schema; events containing prohibited fields are rejected with reason=field_not_permitted and are not written.
- Corpus scan using defined regexes (SnapCode format and PHI markers) over a representative sample returns zero matches in stored events.
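The allowlist check is the simplest of these controls to illustrate: ingestion compares an event's keys against the permitted schema and rejects anything extra before it is written. The field set below is assembled from the event schemas in this section and is illustrative, not exhaustive:

```python
# Union of fields permitted across the event schemas above (illustrative subset).
ALLOWED_FIELDS = {
    "event_id", "event_type", "snapcode_ref", "patient_ref", "actor_id",
    "actor_type", "timestamp", "request_id", "source_ip", "user_agent",
    "channel", "provider_message_id", "outcome", "failure_reason",
    "reason", "ttl_seconds", "setting_name", "old_value", "new_value",
}

def validate_event(event):
    """Reject events carrying fields outside the allowlist (e.g. raw codes or PHI).
    Returns (ok, error); the event is only written when ok is True."""
    prohibited = set(event) - ALLOWED_FIELDS
    if prohibited:
        return False, {"reason": "field_not_permitted", "fields": sorted(prohibited)}
    return True, None
```

Rejecting at ingestion (rather than scrubbing later) means a bug upstream that attaches a raw code or a patient name produces a loud failure instead of silently persisting PHI.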
Log Search and Filtering
- API supports filters: date_range, event_type, patient_ref, snapcode_ref, actor_id, actor_type, outcome, failure_reason, delivery_channel, request_id.
- Results support pagination (limit up to 1000, cursor/next_token) and sorting by timestamp asc|desc.
- 95th percentile query latency is <= 2 seconds for up to 1,000,000 events within the filtered scope.
- Each returned record includes {event_id, timestamp, event_type, actor_id, actor_type, patient_ref?, snapcode_ref?, request_id, metadata fields per event type} and adheres to the documented schema.
Export to CSV and JSON
- Users can export any search result set to CSV and JSON; exported record count exactly equals matched results for the same filter.
- For exports > 100,000 records, the system runs an asynchronous job and provides a pre-signed download URL that expires within 24 hours.
- CSV includes a header row with stable column names; JSON export is NDJSON with one event per line.
- Export bundles include a SHA-256 checksum and a metadata file containing {filters, generated_at, record_count, schema_version}.
- Exports exclude raw SnapCodes and PHI; fields match the same allowlist used by search.
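The NDJSON-plus-checksum bundle can be sketched in a few lines; function name and metadata layout are illustrative, but the format matches the criteria (one event per line, SHA-256 over the payload, metadata with filters, generated_at, record_count, and schema_version):

```python
import hashlib
import json
from datetime import datetime, timezone

def export_ndjson(events, filters, schema_version="1.0"):
    """Serialize events as NDJSON and return (payload, checksum, metadata)."""
    # NDJSON: one JSON object per line; sort_keys gives a stable byte stream.
    ndjson = "".join(json.dumps(e, sort_keys=True) + "\n" for e in events)
    checksum = hashlib.sha256(ndjson.encode()).hexdigest()
    metadata = {
        "filters": filters,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "record_count": len(events),
        "schema_version": schema_version,
    }
    return ndjson, checksum, metadata
```

A consumer re-hashes the downloaded payload and compares it to the bundled checksum, so truncated or corrupted downloads are detectable without trusting the transport.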
Anomalous Activity Spike Alerting
- An alert is generated when redemption failures for a tenant exceed both 30 events in a 5-minute window and 3x the tenant’s 7-day moving average.
- An alert is generated when SnapCode generations exceed either 200 in 10 minutes or 5x the tenant’s 7-day moving average, whichever is higher.
- An alert is generated when revocations exceed 10 in a 10-minute window.
- Alerts are delivered to configured channels (email and webhook) within 60 seconds and include {tenant_id, metric, window, threshold, observed_count, top_failure_reasons, sample_request_ids}.
- Alert deduplication ensures at most one alert per metric per tenant per 10-minute window; all emitted alerts are logged as {event_type=alert_emitted}.
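The dual-threshold rule (absolute floor AND multiple of the moving average) and per-tenant dedup window can be sketched as follows; defaults model the redemption-failure rule, and the assumption is that the 7-day moving average has already been normalized to the same window length as the observed count:

```python
def should_alert(window_count, moving_avg_per_window, absolute_floor=30, ratio=3.0):
    """Fire only when the count clears BOTH the absolute floor and the
    configured multiple of the tenant's moving average for an equal window."""
    return window_count > absolute_floor and window_count > ratio * moving_avg_per_window

class AlertDeduper:
    """At most one alert per (tenant, metric) per dedup window, in seconds."""
    def __init__(self, window_s=600):
        self.window_s = window_s
        self.last_emitted = {}   # (tenant_id, metric) -> last emit time (seconds)

    def allow(self, tenant_id, metric, now_s):
        key = (tenant_id, metric)
        last = self.last_emitted.get(key)
        if last is not None and now_s - last < self.window_s:
            return False         # suppressed: still inside the dedup window
        self.last_emitted[key] = now_s
        return True
```

Requiring both conditions prevents noisy alerts for small tenants (where 3x a tiny baseline is a handful of events) while still catching genuine spikes at larger ones.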
Retention Policy and Purge Enforcement
- Retention period is configurable per tenant within 90–1095 days; changes are enforced and logged via {event_type=policy_changed} with old_value and new_value.
- A daily purge process deletes events older than the configured retention; a {event_type=purge_summary} is recorded with {tenant_id, from_ts, to_ts, purged_count}.
- Post-purge, searches and exports return zero events older than the effective retention window.
- Attempts to purge events newer than the retention window are blocked and logged as {event_type=purge_blocked} with reason=younger_than_retention.
- Retention changes take effect within 24 hours of update and are reflected in purge behavior.
Channel-bound Delivery and Misroute Protection
"As a clinician, I want SnapCodes to only work for the intended recipient so that forwarded links don’t grant access to the wrong person."
Description

Bind SnapCodes to the intended delivery channel (e.g., specific email or phone) by storing a hashed identifier and validating it at redemption. If a code is opened from a different channel or suspicious context, require lightweight verification or block with clinician override. Detect likely forwards via device and network signals, surface clear guidance to the user, and provide safe recovery paths that don’t expose patient data.

Acceptance Criteria
Redeeming SnapCode from bound channel succeeds
Given a SnapCode is bound to a specific delivery channel via a stored hashed identifier and the code is unexpired and unused And the user initiates redemption from the same channel whose identifier matches the stored hash When the redemption request is submitted Then the system validates the channel match and approves redemption And the code is marked used with a timestamp And an audit entry is written with outcome "Channel Match" including device fingerprint, network IP, channel type, and code ID And onboarding access is granted without additional verification And P95 time from link open to access granted is <= 60 seconds
Mismatched channel triggers lightweight verification
Given a SnapCode is bound to delivery channel A via a stored hashed identifier And a redemption attempt originates from a channel whose identifier does not match the stored hash When the system detects the mismatch Then the user is prompted to complete a lightweight verification by entering a one-time code sent to channel A And upon successful OTP verification within 5 minutes and <= 3 attempts, access is granted and the code is marked used And upon failed verification (3 failed attempts or timeout), the redemption is blocked and the code remains unused And user-facing messages remain generic and do not reveal the intended recipient’s identity or channel details And all outcomes (challenge issued, success, failure) are recorded in the audit trail with timestamps and reason "Channel Mismatch"
Suspicious forward/context detection prompts verification
Given a SnapCode is opened from a context exhibiting suspicious signals (e.g., geolocation country differs from delivery country, device fingerprint change within 10 minutes of send, >2 distinct IPs attempt redemption within 5 minutes, or user-agent class change) When any configured suspicion rule is met Then the system requires lightweight verification (OTP to the bound channel) before granting access And if verification succeeds within 5 minutes and <= 3 attempts, access is granted and the code is marked used And if verification fails or times out, access is blocked and the code remains unused And an audit entry is recorded with the triggered signals and outcome tagged "Suspicious Context" And P95 added time for verified users due to this check is <= 30 seconds
Clinician override for blocked redemption
Given a redemption is blocked due to channel mismatch or suspicious context And a clinician with appropriate permissions is authenticated in the dashboard When the clinician initiates an override Then the system requires re-authentication and a justification note (min 15 characters) And the system revokes the original code and issues a new, single-use bypass token bound to the original channel And the bypass token expires in <= 30 minutes if unused And an audit entry records clinician ID, reason, patient context, old/new token IDs, and timestamps And no PHI is displayed to the end user during the override flow
Safe recovery path without PHI exposure
Given a user cannot access the bound channel to complete verification When the user selects "Need help" on the verification screen Then the system presents recovery options that do not reveal patient name, clinician name, or channel details And the system allows the user to request clinic assistance, generating a ticket with code ID and anonymized context only And if a pre-verified alternate channel exists, the system may offer OTP to that channel without displaying the channel value; selection reveals no PII And no UI or API response indicates whether a specific email/phone is on file And all recovery actions are logged in the audit trail with outcome "Recovery Initiated" or "Recovery Completed"
Instant revoke invalidates codes immediately
Given a clinician clicks "Revoke" on an active SnapCode When the revoke action is confirmed Then the code becomes invalid for redemption within 2 seconds at P95 And any subsequent redemption attempt returns a generic "Code invalid or expired" message without PHI And an audit entry records the revoke action with clinician ID, timestamp, and reason And revocation does not affect already-completed redemptions And revocation status propagates to all edge caches within 10 seconds
Hashed channel identifiers are non-reversible and not logged
Given a SnapCode is created for a recipient channel When storing the channel binding Then the system stores only an HMAC-SHA256 of the canonicalized channel identifier using a tenant-scoped secret key And raw identifiers are never persisted or written to logs And unit/integration tests verify that redaction occurs for all log events containing channel fields And secret keys are rotated at least every 90 days with zero downtime and existing hashes remain verifiable via key versioning And attempts to retrieve raw identifiers via API return redacted values
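The binding these criteria describe stores a keyed digest plus the key version used to create it, so rotation never invalidates existing hashes. A minimal sketch follows; the canonicalization rules here (lowercased email, digits-only phone) are simplifying assumptions, not full RFC 5321 / E.164 normalization:

```python
import hashlib
import hmac

def canonicalize(channel_type, identifier):
    """Normalize before hashing so trivially different inputs match.
    Illustrative only: real phone handling would do proper E.164 parsing."""
    if channel_type == "email":
        return identifier.strip().lower()
    return "+" + "".join(ch for ch in identifier if ch.isdigit())

def bind_channel(channel_type, identifier, keys, active_version):
    """Persist only (key_version, HMAC-SHA256 digest) — never the raw identifier."""
    digest = hmac.new(keys[active_version],
                      canonicalize(channel_type, identifier).encode(),
                      hashlib.sha256).hexdigest()
    return {"key_version": active_version, "hmac": digest}

def matches(binding, channel_type, identifier, keys):
    """Verify against the key version recorded at bind time, so rotating the
    active key leaves old bindings verifiable. Constant-time comparison."""
    candidate = hmac.new(keys[binding["key_version"]],
                         canonicalize(channel_type, identifier).encode(),
                         hashlib.sha256).hexdigest()
    return hmac.compare_digest(candidate, binding["hmac"])
```

Using an HMAC with a tenant-scoped secret (rather than a plain hash) means an attacker who obtains the stored digests still cannot confirm guesses about phone numbers or emails without the key.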
Seamless 60-second Onboarding UX with Fallbacks
"As a patient, I want a fast and simple onboarding flow from the SnapCode so that I can start my prescribed exercises without delay."
Description

Design a streamlined path from SnapCode link to account verification and first exercise within 60 seconds on modern devices. Support universal links/app links, deep linking into native apps, and a responsive web fallback. Provide clear timers and error states for expired/used codes, one-tap regenerate/resend requests back to the clinician, and accessibility/localization coverage. Minimize data entry, pre-fill known fields, and instrument analytics to monitor completion times.

Acceptance Criteria
Universal/App Link Deep Link to Native App
Given the device has MoveMate installed and the user taps a valid, unexpired SnapCode universal/app link from SMS or email When the OS resolves the link Then the MoveMate app opens to the Verify screen via deep link within 2 seconds of app foregrounding And the SnapCode is pre-applied with no manual entry required And the code value and any PII are not exposed in visible URLs, intermediate web pages, system toasts, or client logs And an onboarding_start analytics event with fields {link_source, os, app_version, deep_link=true} is emitted within 2 seconds of foregrounding
Responsive Web Fallback Onboarding
Given the device does not have MoveMate installed or the user chooses to continue in browser When a valid, unexpired SnapCode link is opened Then a responsive web Verify screen loads first interactive paint within 2 seconds on reference Wi‑Fi (>=50 Mbps, <100 ms RTT) devices (iPhone 13 iOS 16+, Pixel 6 Android 12+) And the user can complete verification and start the first prescribed exercise within 60 seconds end‑to‑end (tap link to exercise started) And returning patients require 0 typing; new patients require ≤10 keystrokes and ≤2 screens to proceed And known fields (name, clinic, contact) are prefilled when available and editable
Expired/Used SnapCode Handling with One‑Tap Regenerate
Given a SnapCode is expired or already used When the link is opened in app or web Then the user sees a clear state with reason {expired|already used} and the remaining/elapsed time displayed as mm:ss (localized) And the code is not accepted and cannot proceed to verification And a one‑tap “Request new code” action is available, shows immediate visual feedback, and returns success/failure within 3 seconds And regenerate requests are rate‑limited to ≤3 per hour per patient and create an auditable event {type: regenerate_requested, reason, channel} And no PHI about the intended patient is shown beyond the recipient’s own contact channel
Single‑Use, Time‑Boxed Code Enforcement and Instant Revoke
Given a valid, unexpired, unused SnapCode When it is redeemed during verification Then the system atomically marks the SnapCode as consumed before granting access, ensuring single‑use semantics And any subsequent attempt to use the same SnapCode returns a 410/invalid state with a safe, generic message And if a clinician revokes the SnapCode, revocation propagates globally within 5 seconds and further uses are blocked And all state transitions (issued, redeemed, expired, revoked, regenerate_requested) are written to an immutable audit log with timestamp, actor, hashed_patient_id, and reason
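The "atomically marks the SnapCode as consumed before granting access" criterion is the crux of single-use semantics. The sketch below uses a process-local lock as a stand-in for what a real system would do with a conditional database update (e.g. `UPDATE ... SET status='consumed' WHERE id=? AND status='issued'`); names are illustrative:

```python
import threading

class CodeRedeemer:
    """Consume-before-grant: the status flip happens under the lock,
    so at most one concurrent redeemer can win."""
    def __init__(self):
        self.status = {}              # code_id -> issued | consumed | revoked | expired
        self.lock = threading.Lock()

    def redeem(self, code_id):
        with self.lock:
            if self.status.get(code_id) != "issued":
                # Safe, generic failure for used/revoked/expired/unknown codes.
                return False, "CODE_INVALID"
            self.status[code_id] = "consumed"   # mark consumed BEFORE granting access
            return True, "ACCESS_GRANTED"
```

Checking the state and flipping it in one critical section is what makes a double-tap, a forwarded link, or a raced retry resolve to exactly one successful redemption.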
Forwarded Link/Misroute Protection
Given a SnapCode is assigned to Patient A with bound identifiers (e.g., phone/email hash) When the link is opened by a user whose identifiers do not match the bound identifiers Then the flow is halted before any patient data is shown And the user sees a neutral “This link isn’t for this account” message with no details about Patient A And a one‑tap “Send me my code” action is offered that notifies the clinician or sends a fresh code to the requester’s verified channel (rate‑limited) And an auditable misroute_detected event is recorded with mismatch type and no PHI leakage
Analytics Instrumentation and 60‑Second Completion Monitoring
Given onboarding events are instrumented When a user taps a SnapCode link Then timestamps are captured for {link_open, verify_success, first_exercise_started} and sent to analytics within 5 seconds of first_exercise_started And the completion time (first_exercise_started − link_open) is computed and available in dashboards segmented by platform and link_source And on reference devices and network, the 95th percentile completion time is ≤60 seconds during test runs And drop‑off events include standardized reasons {install_block, expired_code, misroute, network_timeout, user_cancel}
Accessibility and Localization Coverage
Given onboarding flows in app and web When tested with VoiceOver (iOS) and TalkBack (Android) Then screen readers correctly announce control labels, countdown timers, and error states (role=alert) And focus order follows a logical sequence with no traps; all actionable elements have ≥44×44 dp touch targets And text and key UI elements meet WCAG 2.1 AA contrast (≥4.5:1), support Dynamic Type/Font Scaling, and maintain layout without overlap And all strings (including timers and error messages) are localized for en, es, fr with locale‑appropriate time formats; the fallback is English if a translation is missing And no onboarding step requires vision or hearing alone; alternatives (visible text with SR labels and haptics where applicable) are provided

Move‑Ready Check

A 20‑second camera and environment setup right after program load. Verifies permissions, lighting, and framing with a quick test rep and micro‑cues, ensuring rep counting works on the first try and reducing support pings.

Requirements

Permission Gate & Recovery Flow
"As a first-time patient user, I want MoveMate to check and guide me to enable camera and motion permissions so that my reps can be detected without confusion."
Description

Checks and acquires camera and motion permissions at program load, surfaces OS-specific prompts, and provides guided recovery if access is blocked. Detects common denial states (permanently denied, restricted by MDM, no hardware) and offers clear next steps, including in-app retries and deep links to system settings. Ensures the flow completes within the 20-second setup window, logs non-PII outcomes for analytics, and gracefully falls back with messaging if permissions cannot be obtained.

Acceptance Criteria
iOS first-time camera permission grant within 20 seconds
Given an iOS device with Camera permission = Not Determined at program load When Move‑Ready Check starts Then an in-app explainer is shown and the iOS camera system prompt appears within 2 seconds of load And if the user taps Allow, the app confirms camera access and advances to the test-rep step with total elapsed time from program load <= 20 seconds And if the user taps Don’t Allow, the app immediately displays in-app guidance with an Open Settings deep link and does not proceed to test-rep And the outcome is logged as outcome_code="camera_granted_ios" or "camera_denied_ios" with time_to_result_ms, os_version, and app_version; no PII is logged
Android permanently denied camera permission recovery
Given an Android device where the camera permission is permanently denied ("Don't ask again") When Move‑Ready Check starts Then the app detects permanent denial within 500 ms and displays a recovery screen with an Open Settings deep link to App details and step-by-step guidance And tapping Open Settings launches the system App Info screen And upon returning to the app, the state is rechecked automatically within 1 second; if permission is granted, proceed to test-rep, else keep recovery screen visible And the recovery screen appears within 20 seconds of program load and the outcome is logged with outcome_code="camera_denied_permanent_android"; no PII is logged
MDM-restricted camera access detection and guidance
Given a device where camera access is restricted by MDM or system policy for the app When Move‑Ready Check starts Then the app detects the restricted state via OS APIs without presenting a system permission prompt And shows guidance indicating organization policy restriction with actions: Copy Support Info and Dismiss, and no Open Settings link And the fallback path prevents entering the test-rep step and is shown within 20 seconds of program load And outcome_code="camera_restricted_mdm" with policy_detected=true is logged; no PII is logged
No camera hardware detected fallback messaging
Given a device with no usable camera hardware (none present or not enumerated) When Move‑Ready Check starts Then the app detects lack of camera capability within 1 second and displays a clear “No camera detected” message with a Continue Without Camera option And selecting Continue Without Camera skips the counting feature and marks camera-dependent checks as unavailable while allowing program navigation And the fallback message is shown within 20 seconds of program load and outcome_code="no_camera_hardware" is logged; no PII is logged
Motion permission acquisition (iOS Motion & Fitness / Android Activity Recognition) within 20 seconds
Given a device where motion/activity recognition permission is Not Determined at program load When Move‑Ready Check runs after camera permission is granted Then the app triggers the OS-specific motion permission prompt (iOS Motion & Fitness / Android Activity Recognition) within 1 second of entering the motion step And if the user grants permission, the check completes and proceeds to test-rep with total elapsed time from program load <= 20 seconds And if the user denies, the app shows recovery with an Open Settings deep link (Android) or a Settings deep link (iOS), does not proceed to test-rep, and logs outcome_code="motion_denied" And no overlapping system prompts occur; prompts are strictly sequential
In-app retry after enabling permissions via system settings
Given the user initially denied camera and/or motion permission and then enables it in system settings When the user returns to the app foreground Then the app automatically re-detects the updated permission state within 1 second without requiring app relaunch And the Retry button remains available and triggers a recheck if auto-detection fails; tapping Retry updates the state within 1 second And on success, the flow advances to test-rep immediately; on failure, recovery messaging persists And outcome_code reflects the latest state and includes previous_outcome_code for correlation; no PII is logged
Analytics logging and timeout behavior for permission outcomes
Given any permission check outcome (granted, denied_once, denied_permanent, restricted_mdm, no_hardware, timeout) When the outcome is determined Then a non-PII analytics event is queued within 500 ms containing fields: outcome_code, permission_type, platform, time_to_result_ms, within_20s=true/false And if online, the event is sent within 2 seconds; if offline, it is stored and retried within 24 hours And no device identifiers, user names, or camera frames are logged; logs pass static schema validation And a timeout outcome is emitted if neither success nor a fallback screen is shown within 20 seconds of program load
Lighting Assessment with Actionable Feedback
"As a patient setting up at home, I want the app to tell me if my lighting is good enough and how to fix it so that rep counting is accurate."
Description

Analyzes preview frames in real time to evaluate luminance, contrast, and backlighting, presenting a simple green/amber/red indicator with one-line, plain-language fixes (e.g., face the window, avoid strong backlight). Applies skin-tone–inclusive exposure heuristics, avoids storing frames, and completes assessment and guidance within the 20-second window. Emits pass/fail with reason codes to improve first-try success without impacting privacy.

Acceptance Criteria
Ambient Luminance Threshold Check
Given camera preview is active at >=15 fps for >=2 seconds And a face or upper-body ROI is detected When the median luma Y' of the ROI is >=0.35 and <=0.70 And <=1% of pixels are clipped at black (<=0.02) or white (>=0.98) Then the lighting indicator displays green within 1 second And when ROI median is >=0.25 and <0.35, or >0.70 and <=0.80 Then the lighting indicator displays amber within 1 second And when ROI median is <0.25 or >0.80 Then the lighting indicator displays red within 1 second
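The luma thresholds above map directly to a small classifier. One assumption is made where the criteria are silent: when the ROI median sits in the green band but more than 1% of pixels are clipped, the sketch downgrades to amber rather than green (the spec only gates the green state on clipping):

```python
def classify_lighting(roi_median_luma, clipped_fraction):
    """Map ROI median luma (0..1) and clipped-pixel fraction to green/amber/red
    per the thresholds above. Clipping >1% blocks green (amber is an assumption)."""
    if clipped_fraction > 0.01 and 0.35 <= roi_median_luma <= 0.70:
        return "amber"
    if 0.35 <= roi_median_luma <= 0.70:
        return "green"
    if 0.25 <= roi_median_luma < 0.35 or 0.70 < roi_median_luma <= 0.80:
        return "amber"
    return "red"
```

Keeping the bands non-overlapping (red below 0.25 and above 0.80, amber in the two borderline bands, green in the middle) guarantees every frame resolves to exactly one indicator color.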
Backlighting Detection
Given a subject ROI and a background region behind the subject are computed When (background median luma / ROI median luma) is >=1.75 and <2.25 Then emit reason code BACKLIGHT_STRONG with severity amber When the ratio is >=2.25 Then emit reason code BACKLIGHT_SEVERE with severity red When the ratio is <1.75 Then do not emit a backlight-related reason code
Contrast and Subject Separation
Given a subject ROI and immediate background region are computed When absolute difference in median luma between ROI and background is <0.05 Then emit reason code CONTRAST_LOW with severity red When the difference is >=0.05 and <0.08 Then emit reason code CONTRAST_BORDERLINE with severity amber When the difference is >=0.08 Then do not emit a contrast-related reason code
Skin-Tone–Inclusive Exposure Heuristics Parity
Given a validation set of >=30 videos per Fitzpatrick group I–II, III–IV, V–VI under identical in-range lighting When the heuristics evaluate the set Then the green classification rate per group is >=90% And the maximum difference in green rate between any two groups is <=5 percentage points And the false-red rate per group is <=5% And an automated unit test confirms that exposure metering weights detected skin pixels at >=60% when computing the ROI exposure target
Indicator, Fix Text, and Outcome Emission
Given one or more lighting reason codes may be emitted When multiple reason codes are present Then the final indicator color equals the highest-severity reason (red > amber > green) And pass/fail is mapped as green=Pass, amber=Fail, red=Fail And a one-line fix is displayed for the highest-severity reason And the fix text length is <=90 characters, contains no jargon, and begins with an actionable verb And the green state displays "Lighting looks good" And an event is emitted with fields {color, passFail, reasonCode, message, timestamp} and no pixel data or frame identifiers
Privacy: No Frame Storage or Transmission
Given the assessment runs for up to 20 seconds When it executes on-device Then no video frames or images are written to persistent storage And no pixel data or thumbnails are sent over the network And analytics/events exclude pixel payloads and include only non-PII reason codes and timings And in-memory frame buffers are released within 1 second after assessment completion And a privacy unit test verifies zero reads/writes to media storage APIs during the assessment
Timing: 20-Second Completion and First Message Latency
Given the Move‑Ready Check starts When the lighting assessment begins Then the first indicator color and fix (if needed) appear within 3 seconds on P50 devices and 5 seconds on P95 devices And the final pass/fail with reason code is emitted no later than T+20 seconds on P95 devices And the indicator updates in under 500 ms after a lighting change of >=10% luma is detected
Framing & Distance Guidance Overlay
"As a patient, I want clear visual guidance to position myself and my phone so that my whole movement is captured."
Description

Displays a live on-screen silhouette and bounding box to guide user positioning, verifying that required joints or full body are visible based on the assigned exercise. Estimates distance and camera angle, recommending landscape/portrait orientation, step-back cues, and device propping suggestions. Enforces a pass criterion before session start while accommodating small spaces, with responsive feedback that updates as the user moves.

Acceptance Criteria
Overlay Initialization and Exercise Mapping
Given an assigned exercise with defined required joints and framing targets When Move-Ready Check loads Then render the live silhouette and bounding box within 500 ms over the camera feed And display markers for the configured required joints only And lock the overlay visible until the Ready state is achieved And load exercise-specific guidance text for the active exercise
Responsive Feedback Latency
Given the camera preview is active When the user changes position by ≥10 cm or rotates yaw by ≥10° Then update on-screen guidance indicators and text within 200 ms at the 95th percentile and 350 ms max And maintain an average UI frame rate ≥24 fps during the check
Distance and Orientation Guidance
Given the exercise framing metadata defines a target body area range and preferred orientation When the detected person bounding box area is outside the target range Then show "Step back" if area > upper bound and "Step forward" if area < lower bound with directional arrows And clear the cue after the user remains within range for ≥2 s When the device orientation mismatches the exercise preference by >20° Then show "Rotate device to [portrait|landscape]" prompt And clear the prompt after the device orientation is within ±10° for ≥1 s
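The step-back/step-forward decision reduces to comparing the detected person box (as a fraction of frame area) against the exercise's target range. The sketch below also folds in the Small-Space Mode relaxation defined later in this section (lower bound reduced by 20%); the function name and the fraction-of-frame representation are assumptions:

```python
def distance_cue(bbox_area_fraction, lower, upper, small_space=False):
    """Return the framing cue for a detected person bounding box, given the
    exercise's target area range. Small-Space Mode relaxes the lower bound by 20%."""
    effective_lower = lower * 0.8 if small_space else lower
    if bbox_area_fraction > upper:
        return "Step back"       # subject fills too much of the frame
    if bbox_area_fraction < effective_lower:
        return "Step forward"    # subject is too small / too far away
    return None                  # in range: clear any active cue
```

Returning `None` when in range pairs naturally with the "clear the cue after the user remains within range for ≥2 s" requirement: the caller debounces the transition rather than the classifier.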
Angle and Stability Correction
Given device pitch or roll exceeds ±15° When sustained for ≥1 s Then show "Level your device" with propping suggestion And clear after pitch and roll are within ±10° for ≥2 s Given camera shake RMS >2 px for ≥1 s Then show "Stabilize device" cue And clear after shake falls below threshold for ≥2 s
Small-Space Accommodation
Given the step-back cue persists for ≥5 s and the person bounding box touches any two opposing screen edges in ≥80% of frames over that interval When this condition is met Then engage Small-Space Mode And raise the upper bound of the target area range by 20% for the exercise And allow partial-body framing provided all critical joints are visible And display "Alternate framing accepted" And exit Small-Space Mode after the user meets standard targets for ≥2 s
Required Joint Visibility Pass Criteria
Given the exercise lists required joints When evaluating the last rolling 2 s of frames Then each required joint is detected in ≥90% of frames with confidence ≥0.6 And each required joint remains ≥5% inside screen margins And no more than 1 consecutive frame per joint is missing Else display targeted micro-cues naming the missing joint and suggested adjustment
Ready Gate and Start Control
Given distance/orientation (or Small-Space Mode) is satisfied, device angle/stability is within tolerance, and required joint visibility passes When all conditions are simultaneously true for ≥3 s Then display a green Ready indicator and enable the Start button When any condition fails before Start Then disable Start within 100 ms and show the relevant cue When Start is pressed while Ready Then begin the exercise session within 300 ms
Test Rep Calibration & Validation Gate
"As a patient about to start exercises, I want to do one test rep that confirms tracking works so that I don’t waste time on uncounted reps."
Description

Prompts the user to perform a single sample rep that is processed through the production rep-counting pipeline to verify model confidence and tracking quality. On success, marks the session calibrated and proceeds; on failure, presents targeted micro-cues (e.g., adjust angle, increase light) and allows a limited number of quick retries, all within ~20 seconds. Caches per-exercise calibration hints on-device to speed future setups and blocks session start if minimum confidence is not met.
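The pass/retry/block gate implied above can be sketched as follows. The thresholds come from the acceptance criteria in this section; the function name and return values are illustrative assumptions.

```python
def evaluate_sample_rep(confidence, continuity, visibility, attempt,
                        max_attempts=3):
    """Gate one sample rep against the calibration thresholds
    (model confidence >= 0.85, pose-tracking continuity >= 80%,
    keypoint visibility >= 75%). Sketch only."""
    if confidence >= 0.85 and continuity >= 0.80 and visibility >= 0.75:
        return "calibrated"   # mark session calibrated and proceed
    if attempt >= max_attempts:
        return "blocked"      # session start stays disabled
    return "retry"            # show a targeted micro-cue, allow retry
```

A first attempt at confidence 0.90 calibrates immediately; a third failing attempt blocks the session, matching the three-attempt limit in the criteria.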

Acceptance Criteria
First‑Attempt Calibration Success
Given Move‑Ready Check is initiated for a selected exercise with camera permission granted When the user performs one sample rep within 10 seconds of the prompt Then the rep‑counting pipeline returns model confidence >= 0.85, pose‑tracking continuity >= 80% of rep duration, and required keypoint visibility >= 75% of frames And the session state for that exercise is marked Calibrated And the user is advanced to the exercise session in <= 500 ms without additional prompts And an audit event "calibration_success" with metrics (confidence, continuity, visibility, attempt=1, duration_ms) is logged
Targeted Micro‑Cues and Quick Retries on Low Confidence
Given the sample rep is processed and any metric is below threshold When a failure reason is detected (insufficient lighting, incorrect framing, excessive camera angle, occlusion, motion outside ROI) Then a targeted micro‑cue (text + icon) specific to the top failure reason is displayed within 500 ms And the user can retry immediately without navigation or reload And up to 2 retries are allowed, for a total of 3 attempts And each retry re‑evaluates the same thresholds and logs a "calibration_retry" event with reason and attempt number
Retry Limit Reached Blocks Session Start
Given the user has completed 3 attempts within the Move‑Ready Check When the last attempt still fails to meet the thresholds Then session start remains disabled and a blocking message indicates calibration not achieved And the user is offered options: Adjust setup and try again, Switch device, or Exit session And a "calibration_block" event is logged with final reasons and total elapsed time
Prerequisite Validation: Permissions, Lighting, and Framing
Given Move‑Ready Check begins When camera permission is not granted Then the system requests permission and does not start the calibration timer until granted or explicitly denied And if denied, session start remains disabled and guidance to enable permission is shown When permission is granted, a pre‑check evaluates lighting score >= 0.60 (0–1 scale) and subject framing (person bounding box area 35–70% of frame; required joints visible) Then if any pre‑check fails, a corresponding micro‑cue is shown and the sample rep prompt is deferred until pre‑check passes
Per‑Exercise Calibration Hints Cache
Given an exercise has completed a Move‑Ready Check (success or failure) When the user returns to the same exercise on the same device and orientation within 14 days Then the most effective prior hints and last known good ranges (distance, angle category, lighting category) are loaded from on‑device storage in <= 300 ms And the cached hints are shown as prioritized pre‑cues before the rep prompt And no personal video frames are stored; only hint metadata is cached; cache entries expire after 14 days or when the exercise changes
End‑to‑End Time Budget
Given Move‑Ready Check starts with permissions granted When the user follows prompts and performs up to the allowed number of attempts Then the flow completes in <= 12 seconds on first‑attempt success and <= 20 seconds when retries are used (95th percentile across supported devices) And all UI responses to user actions render within 200 ms, and model inference per attempt completes in <= 800 ms on supported devices And no network dependency is required to pass calibration; if the network is unavailable, the flow still functions
Micro-Cues with Multimodal Accessibility
"As a user with varying needs, I want brief visual and audio cues during setup so that I can quickly fix issues without reading long instructions."
Description

Provides brief, unobtrusive setup cues via on-screen tips, optional voice prompts, and haptic feedback. Supports localization, captions, adjustable voice volume, and respects system accessibility settings and Do Not Disturb. Allows skip/replay of cues, uses lay language, and prioritizes minimal cognitive load to keep the setup under 20 seconds.
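The channel-selection rules above (voice falls back to captions under Do Not Disturb or mute; haptics respect system settings) could be sketched like this. All parameter names are illustrative, and system-level haptics settings are not modeled here.

```python
def cue_channels(voice_on, haptics_on, captions_on, dnd, muted,
                 system_captions):
    """Decide which channels deliver a micro-cue, per the rules in
    the feature description. Illustrative sketch."""
    voice = voice_on and not dnd and not muted
    # Captions show when toggled on, when system captions are enabled,
    # or as a fallback whenever voice was requested but suppressed.
    captions = captions_on or system_captions or (voice_on and not voice)
    haptics = haptics_on and not dnd
    return {"voice": voice, "captions": captions, "haptics": haptics}
```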

Acceptance Criteria
On-Screen Micro-Cues Display and Brevity
Given the Move‑Ready Check has loaded When on-screen micro-cues are displayed Then the number of cues is between 1 and 3 And each cue text is ≤ 60 characters And each cue’s Flesch-Kincaid Grade Level (FKGL) is ≤ 6.0 And only one cue is shown at a time And cue overlays do not occlude the user’s tracked body ROI And cue text contrast ratio is ≥ 4.5:1 against its background
Optional Voice Prompts with Captions and Volume Control
Given voice prompts are enabled by the user When any micro-cue is shown Then synchronized TTS plays within 250 ms of cue display And a volume control adjusts using system media volume APIs And captions are available and can be toggled on/off And captions auto-enable if system captions/subtitles accessibility is on And caption text matches spoken prompt ≥ 99% word accuracy And if Do Not Disturb or device mute is active, no voice plays and captions display instead
Haptic Feedback Configuration and System Respect
Given haptic feedback is enabled by the user When a micro-cue begins or the test rep succeeds Then a gentle haptic pulse (10–30 ms) is emitted And haptic intensity follows system haptics settings And if system haptics are disabled, no vibration occurs And if Do Not Disturb is active, no vibration occurs And the user can toggle haptics on/off within the Move‑Ready Check
Skip and Replay Controls Accessibility
Given the Move‑Ready Check micro-cues are active When the user needs control over cue playback Then a Skip control is visible, labeled, and one‑tap actionable (≥44×44 pt) And a Replay Last Cue control is visible, labeled, and one‑tap actionable (≥44×44 pt) And both controls are operable with screen readers (proper role, name, focus order) And activating Skip immediately advances to the test rep without additional dialogs And activating Replay repeats the last cue within 500 ms
Localization and Lay Language Coverage
Given the device locale is supported (en, es, fr, de at minimum) When micro-cues are displayed or spoken Then the language matches the device locale with 100% string coverage And if the locale is unsupported, English is used as a fallback And RTL locales render correctly with mirrored layout and correct reading order And micro-cue text avoids medical jargon and uses plain language (FKGL ≤ 6.0) And terminology is consistent across on-screen, voice, and captions for each locale
Setup Flow Duration ≤ 20 Seconds
Given a first-time or returning user starts Move‑Ready Check When they complete the micro-cues and test rep Then instrumented telemetry shows median completion time ≤ 15 s And P90 completion time ≤ 20 s across a sample of ≥ 50 sessions per platform And enabling/disabling voice or haptics does not increase P90 beyond 20 s And no cue waits on user input longer than 5 s without offering Skip
Screen Reader and Accessibility Settings Compatibility
Given system screen reader (VoiceOver/TalkBack) is enabled When micro-cues are presented Then each cue is announced via system TTS with correct semantic focus And announcements do not overlap; next cue waits until current announcement ends And interactive elements support Dynamic Type up to 200% without truncation And Reduce Motion setting removes non-essential animations during cues And all controls have accessible names, roles, and logical focus order
Clinician Controls & Bypass Rules
"As a clinician, I want to configure how strict the Move‑Ready Check is for each program so that my patients aren’t blocked unnecessarily while maintaining data quality."
Description

Enables clinicians to configure Move‑Ready Check strictness per program (e.g., required joints, minimum lighting thresholds, retry limits) and define bypass rules for trusted patients or in-clinic sessions. Provides safe default presets, audit logs of overrides, and remote updates via the clinic portal with versioned configurations to ensure consistent behavior across devices.
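Configuration resolution might look like the sketch below, using the Safe Default values from the acceptance criteria in this section. The resolution function itself, and the rule that an in-progress check keeps the config it started with, are taken from those criteria; the names are illustrative.

```python
SAFE_DEFAULT = {
    "version": "safe-default",
    "requiredJoints": ["shoulders", "hips", "knees"],
    "lightingScoreMin": 0.50,
    "confidenceMin": 0.75,
    "retryLimit": 2,
    "movementTest": True,
}

def resolve_config(clinic_config=None, in_progress_config=None):
    """Choose the Move-Ready configuration for a session: a check
    already in progress keeps its config; otherwise the clinic preset
    wins, falling back to the Safe Default preset. Sketch only."""
    if in_progress_config is not None:
        return in_progress_config
    return clinic_config if clinic_config is not None else SAFE_DEFAULT
```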

Acceptance Criteria
Safe Default Preset Fallback
Given no clinic-defined Move‑Ready configuration exists for the program, When the patient launches the Move‑Ready Check, Then the app loads the Safe Default preset with requiredJoints=["shoulders","hips","knees"], lightingScoreMin=0.50, confidenceMin=0.75, retryLimit=2, and movementTest enabled. Given the Safe Default preset is applied, When the check runs, Then rep counting is permitted only if all default thresholds are met and the default preset version is recorded in session telemetry. Given connectivity is restored and a clinic preset becomes available mid-session, When the current Move‑Ready Check is in progress, Then the app completes with the defaults and applies the clinic preset only on the next program load.
Required Joints Detection Enforcement
Given requiredJoints=["shoulders","knees"] and confidenceMin=0.80 configured for the program, When the framing test runs, Then all required joints must be detected with average confidence >=0.80 over any continuous 2-second window to pass. Given at least one required joint is below threshold, When the check evaluates, Then the check fails and identifies the missing joint(s) in the micro-cue. Given the user adjusts position and the next 2-second window meets threshold, When reevaluated, Then the check passes without additional user input.
Lighting Threshold Enforcement with Retry Limits
Given lightingScoreMin=0.55 and retryLimit=2 for the program, When lightingScore<0.55, Then the app blocks progression, displays lighting guidance, and sets attemptCount=1. Given lightingScore remains <0.55 on retry, When attemptCount reaches 2, Then the app hard-stops the Move‑Ready Check for this session unless a bypass rule applies and logs the failure. Given lightingScore>=0.55 on a retry before reaching the limit, When re-evaluated, Then the lighting step passes and remaining retries are not further decremented. Given a hard-stop is triggered, Then an event is logged with patientId, programId, lightingScore, threshold, attemptCount, and timestampUTC.
Trusted Patient Bypass Rule
Given patient.trusted=true and bypassTypes=["lighting","framing"] with bypassPerSession=1 configured, When the Move‑Ready Check fails for a covered type, Then the app offers a Proceed Anyway control and displays remaining bypass count. Given the patient selects Proceed Anyway with remaining bypasses>0, When the session starts, Then the app logs a bypass with type, reason, configVersion, actor=patient, and timestampUTC, and proceeds to rep counting. Given a bypass has already been used in the session, When another covered failure occurs, Then Proceed Anyway is not offered and standard failure handling applies. Given a failure type not listed in bypassTypes (e.g., camera permission denied), When the check fails, Then bypass is not offered.
In‑Clinic Session Bypass Rule
Given session.context="in_clinic" and inClinicPolicy="skip_quality_checks" configured, When the Move‑Ready Check starts, Then lighting and framing checks are skipped but camera permission checks still run. Given inClinicPolicy="relax_thresholds" with lightingScoreMin=0.40 and confidenceMin=0.70, When the check runs, Then the relaxed thresholds are applied and enforced. Given any in-clinic bypass or relaxation occurs, Then an audit record is created with clinicId, programId, patientId, policy, thresholds, deviceId, and timestampUTC.
Remote Update Propagation and Version Consistency with Rollback
Given a clinician publishes configuration version 1.2 for Program X, When a connected device loads Program X, Then it fetches and applies v1.2 within 30 seconds and shows v1.2 in diagnostics. Given the same patient loads Program X on a second device within 10 minutes, Then both devices report and enforce the same version v1.2. Given a device is offline, When the program loads, Then the last cached version is used and a using-cached-config event is logged; upon reconnection, the device fetches the latest version within 5 minutes. Given the clinician rolls back to v1.1, When the next session loads on any device, Then v1.1 is applied within 30 seconds and supersedes v1.2.
Comprehensive Audit Logging for Config Changes and Bypasses
Given a clinician creates, edits, publishes, or rolls back a configuration, When the action is saved, Then an immutable audit entry is created with id, actorId, role, action, programId, patientScope (if any), oldVersion, newVersion, diff, timestampUTC, and origin (portal or device). Given a patient or clinician uses a bypass, When the session starts, Then an audit entry records bypassType, reason, policy, actor, timestampUTC, configVersion, and outcome (proceeded/blocked). Given audit entries exist, When queried in the clinic portal by program and date range, Then results return within 3 seconds for up to 10,000 records and can be exported as CSV. Given an audit entry is displayed via UI or API, Then no fields are editable; modification attempts return HTTP 403 and are themselves logged as security events.
Outcome Telemetry & Support Artifacts
"As a product or clinic admin, I want aggregated outcomes of the Move‑Ready Check so that we can troubleshoot issues and improve first-try success rates."
Description

Captures structured, privacy-preserving events for each check (permissions, lighting, framing, test rep) with timestamps, pass/fail, and reason codes, and surfaces them in clinician dashboards and product analytics. Supports proactive patient nudges when checks fail, adheres to retention policies, and requires explicit consent for any optional media capture. Enables funnel analysis to reduce support contacts and improve first-try success rates.
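A privacy-preserving step event could be built as below: a pseudonymous patient id via HMAC-SHA256, a UUIDv4 idempotency key, and a millisecond ISO-8601 timestamp, with no media payload. Field names follow the acceptance criteria in this section; the function itself is an illustrative sketch.

```python
import hashlib
import hmac
import uuid
from datetime import datetime, timezone

def build_step_event(step, outcome, reason_code, patient_id, tenant_id, secret):
    """Build a 'move_ready_check_step' telemetry event. The HMAC key
    (`secret`) would be a server-managed pepper; names are illustrative."""
    return {
        "event": "move_ready_check_step",
        "step": step,                      # permissions | lighting | framing | test_rep
        "outcome": outcome,                # pass | fail
        "reason_code": reason_code or "none",
        "timestamp": datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
        "patient_pid": hmac.new(secret, patient_id.encode(), hashlib.sha256).hexdigest(),
        "tenant_id": tenant_id,
        "idempotency_key": str(uuid.uuid4()),  # dedupe key for exactly-once delivery
    }
```

Because the idempotency key is unique per event, the server can suppress duplicates when offline queues are replayed, as the delivery criteria require.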

Acceptance Criteria
Telemetry Event Schema & Logging for Check Steps
Given Move-Ready Check is initiated When the permissions check completes Then emit a 'move_ready_check_step' event with fields: step='permissions', outcome in {'pass','fail'}, reason_code or 'none', timestamp ISO-8601 with ms, duration_ms, session_id (UUIDv4), patient_pid (HMAC-SHA256), tenant_id, app_version, device_os, device_model, idempotency_key (UUIDv4) Given the lighting check completes When outcome is computed Then emit the same event with step='lighting' and populate fields as above and no media payload present Given the framing check completes When outcome is computed Then emit the same event with step='framing' and populate fields as above and no media payload present Given the test rep check completes When outcome is computed Then emit the same event with step='test_rep' and populate fields as above, including rep_detected boolean and latency_ms, and no media payload present Given events are produced When network is online Then p50 delivery latency <= 5s and p95 <= 30s to server acknowledgement Given events are produced offline When connectivity returns within 72h Then all queued events are delivered exactly-once using idempotency_key and duplicates are suppressed server-side
Consent-Gated Optional Media Capture
Given optional media capture is disabled by default When patient has not granted consent Then no media (images, video, audio) is persisted, transmitted, or included in telemetry or artifacts Given a consent prompt is shown When patient explicitly accepts Then record consent with fields: consent_id, version, scopes={'optional_media'}, timestamp, actor=patient, and enable media capture only within the declared scopes Given patient revokes consent When revocation is confirmed Then disable further media capture immediately and record a 'consent_revoked' event and delete unprocessed optional media within 24h Given no consent exists When support artifacts or analytics are generated Then only derived, non-identifying metrics are available and no frames or keypoints are stored beyond session memory
Retention & Purge Policy Compliance
Given production retention_days=90 When server time crosses the daily purge window Then telemetry older than 90 days is hard-deleted from primary stores and their indexes Given a manual deletion request for a patient is received When the request is authorized Then associated telemetry is deleted within 7 days and an audit log entry is created Given deletion is executed When analytics aggregates exist Then pre-aggregated, fully de-identified metrics remain while row-level source events are removed Given retention_days is changed via configuration When the new value is deployed Then the purge job enforces the updated window starting next run and logs the applied value
Clinician Dashboard Surfacing of Move-Ready Check Outcomes
Given a clinician with role 'PT' opens a patient profile When the Move-Ready Check widget loads Then show the latest attempt within 7 days with per-step status, timestamps, and reason_codes Given multiple attempts exist When the clinician selects a date range Then list attempts in reverse chronological order with pagination and total count Given an attempt is viewed When the clinician clicks 'Troubleshooting' Then display reason-specific guidance linked to each failed step Given access control is enforced When a user lacks 'view_patient_events' permission Then the widget and data are not visible and a 403 is returned Given normal load conditions When the widget queries data Then p95 server response time <= 2s for up to 50 attempts
Proactive Patient Nudges on Failed Checks
Given a step fails during Move-Ready Check When outcome='fail' is emitted Then send a reason-specific nudge to the patient within 1 minute and log 'nudge_sent' with correlation_id to the failing event Given a nudge was sent for a reason within the last 12 hours When the same reason fails again Then do not send a duplicate nudge and log 'nudge_suppressed' with suppression_reason='cooldown' Given the patient opted out of notifications When a step fails Then no nudge is sent and 'nudge_suppressed' is logged with suppression_reason='opt_out' Given a nudge is delivered When the next attempt passes all steps Then send a single success reinforcement message and log 'nudge_followup_sent'
Product Analytics & Funnel Reporting Enablement
Given raw events are ingested When hourly ETL runs Then events are available in the analytics warehouse with schema fields: session_id, attempt_id, step, outcome, reason_code, event_timestamp, device_os, app_version, tenant_id Given analytics dashboards are refreshed When a product analyst opens the Move-Ready Funnel report Then show metrics: first_try_success_rate, step drop-off rates, reason_code distribution, and device/app segmentation Given backfill from the last 30 days is executed When validation queries run Then event counts between source and warehouse match within 1% per day Given data is updated When freshness is measured Then max end-to-end latency from event to dashboard is <= 60 minutes
Support Artifact Generation & Access Control
Given a Move-Ready Check attempt completes When support artifact generation is triggered Then create an artifact containing attempt_id, per-step outcomes, timestamps, reason_codes, app_version, device_os, and exclude any media unless consent scope includes 'optional_media' Given a support user with role 'Support' requests an artifact When access is granted Then return the artifact within 2 seconds and log an audit record with user_id, timestamp, and purpose Given an unauthorized user requests an artifact When access is evaluated Then deny with 403 and log the attempt Given a shareable link is created When 24 hours elapse Then the link expires and further access is blocked

Role Blueprints

Prebuilt, clinic-tunable roles for PTs, PTAs, caregivers, and payers with the right defaults from day one. Assign in seconds, avoid permission sprawl, and ensure each person sees only what they need to act on, nothing more.

Requirements

Blueprint Catalog & Defaults
"As a clinic admin, I want to choose from prebuilt role blueprints so that staff and external collaborators get appropriate access from day one."
Description

Provide a curated catalog of prebuilt role blueprints (PT, PTA, Caregiver, Payer, Clinic Admin) with least-privilege defaults aligned to MoveMate workflows. Each blueprint defines permissions (e.g., view PHI, edit care plans, create exercises, view rep counts and form flags, send nudges, export billing), default dashboard widgets, notification settings, and data scopes. Integrate the catalog into clinic onboarding, invite flows, and user management so assignments take seconds and new users see the right views immediately.
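A blueprint could be modeled as a small record of permissions and default widgets, checked at request time. The entries below paraphrase the defaults in this section's acceptance criteria and are not an exhaustive catalog; all identifiers are illustrative.

```python
BLUEPRINTS = {
    "PT": {
        "permissions": {"view_phi", "edit_care_plans", "create_exercises",
                        "view_rep_counts", "send_nudges"},
        "widgets": ["Today's Patients", "Adherence & Flags", "Nudge Queue"],
    },
    "Caregiver": {
        "permissions": {"view_assigned_patient", "acknowledge_nudges"},
        "widgets": ["Today's Exercises", "Form Tips"],
    },
    "Payer": {
        "permissions": {"export_billing", "view_deidentified_aggregates"},
        "widgets": ["Billing Export", "Adherence Overview"],
    },
}

def can(role, permission):
    """Least-privilege check: allowed only if the blueprint grants it;
    anything else would surface as a 403 and a security log entry."""
    return permission in BLUEPRINTS.get(role, {}).get("permissions", set())
```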

Acceptance Criteria
Catalog appears in Onboarding with Prebuilt Blueprints
Given a new clinic reaches the Role Setup step during onboarding When the catalog loads Then it displays exactly five prebuilt blueprints: PT, PTA, Caregiver, Payer, Clinic Admin And each blueprint card shows name, one-line summary, and a “View Details” control And “View Details” reveals permissions, default dashboard widgets, default notification settings, and data scope for that blueprint And the catalog content loads in ≤2 seconds on a 3G Fast network And all interactive elements are keyboard accessible and have accessible names And selecting a blueprint and clicking Continue advances to the next step with the selection persisted
Assign Blueprint During User Invite
Given a Clinic Admin opens the Invite User dialog When they enter the user’s email and select a blueprint Then the invite can be completed in ≤3 clicks after email entry and blueprint selection And the server responds with 2xx and the assigned blueprint is stored on the pending user record And the invite email includes the role name and a one-sentence capability summary And the operation completes in ≤5 seconds end-to-end (request to response) And an audit log event USER_ROLE_ASSIGNED is recorded with actor, target email, role, timestamp, and clinic ID
Least-Privilege Permission Matrix by Role
Rules:
- PT: May view PHI for patients in assigned clinics; may create/edit exercises and care plans for assigned patients; may view rep counts and form flags; may send nudges to assigned patients; may not export billing by default; may not manage clinic-wide settings.
- PTA: May view PHI for patients in assigned clinics; may create exercises and edit care plans as delegated; may view rep counts and form flags; may send nudges to assigned patients; may not export billing; may not manage clinic-wide settings.
- Caregiver: May view only their assigned patient’s exercises, rep counts, and form flags; may acknowledge/respond to nudges; may not view other patients; may not edit care plans or create exercises; may not export billing; PHI access restricted to their assigned patient only.
- Payer: May export billing reports limited to billing fields and required patient identifiers; may view de-identified aggregate adherence/outcomes; may not view full patient PHI; may not send nudges; may not edit care plans or create exercises.
- Clinic Admin: May manage users, role assignments, and clinic settings; may not edit clinical care plans by default; PHI access is disabled by default unless explicitly granted via added scope.
- All roles: Denied actions return 403 and are security-logged with actor, action, resource, timestamp.
Default Dashboards Applied on First Login
Given a newly invited user accepts the invite and logs in for the first time When their home dashboard renders Then the default widgets for their assigned blueprint are displayed And initial dashboard render completes in ≤2 seconds on a 3G Fast network And PT/PTA widgets include: Today’s Patients, Adherence & Flags, Nudge Queue And Caregiver widgets include: Today’s Exercises, Form Tips And Payer widgets include: Billing Export, Adherence Overview (de-identified) And Clinic Admin widgets include: User Management, Role Assignments, Integration Status And any subsequent user customizations persist without altering the blueprint defaults for future assignees
Default Notification Settings Per Role
Given a user is created with a selected blueprint When the user record is saved Then notification preferences are initialized to the blueprint defaults And PT/PTA defaults include: daily adherence digest on weekdays; immediate critical form error alerts And Caregiver defaults include: exercise reminders per prescription schedule; immediate nudge replies And Payer defaults include: weekly billing export availability summary And Clinic Admin defaults include: role changes and failed export alerts And recipients can later opt out or modify channels without changing the blueprint template And notifications fire within ≤60 seconds of qualifying triggers and are logged with delivery status
Data Scope Enforcement
Given a user with an assigned blueprint and clinic/patient scopes When they request data via UI or API Then results are limited to the defined clinic/location/patient scopes And attempts to access resources outside scope return 403 and create a security log And row-level filters are enforced server-side for all list and detail endpoints And changing a user’s clinic assignment or role updates effective scope within ≤60 seconds And direct URL access to out-of-scope patient IDs is blocked and logged
Role Reassignment via User Management
Given an existing user is selected in User Management When a Clinic Admin changes their assigned blueprint and saves Then the update returns a 2xx response and persists to the user record And active sessions reflect new permissions within ≤60 seconds or on next request And the user is prompted to refresh if their current view becomes unauthorized And default dashboard and notification settings switch to the new blueprint while preserving user customizations unless the admin selects “Reset to Defaults” And an audit log USER_ROLE_CHANGED is recorded with previous role, new role, actor, timestamp, and clinic ID
Clinic-Tunable Overrides with Guardrails
"As a clinic owner, I want to tailor role permissions to my workflows without risking HIPAA violations so that my team has exactly what they need and nothing more."
Description

Allow clinics to tailor any blueprint via a point-and-click permission editor with scoped toggles (patient cohort, clinic, read/write), without editing the global template. Provide safety guardrails: risk indicators for PHI exposure, dependency validation (e.g., 'Edit Plan' requires 'View Plan'), policy hints, and hard stops for non-compliant combinations. Store overrides as a per-clinic layer that survives global blueprint updates and expose a diff viewer to compare clinic overrides vs. defaults.
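The dependency-validation guardrail ("Edit Plan" requires "View Plan") amounts to checking each enabled permission's prerequisites before save; any unmet dependency blocks the save. A minimal sketch, with illustrative permission names:

```python
# Illustrative dependency graph: permission -> prerequisites it requires.
DEPENDENCIES = {"edit_treatment_plan": {"view_treatment_plan"}}

def unmet_dependencies(enabled):
    """Return a map of enabled permissions to their missing
    prerequisites; a non-empty result blocks the save."""
    missing = {}
    for perm in enabled:
        needed = DEPENDENCIES.get(perm, set()) - set(enabled)
        if needed:
            missing[perm] = needed
    return missing
```

The editor's auto-resolution flow would simply offer to add the missing prerequisites before retrying the save.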

Acceptance Criteria
Point-and-Click Scoped Permission Toggles
Given I am a Clinic Admin on the Role Blueprints page And I open the permission editor for a role When I set Permission "Patient Notes" to Read at patient cohort scope "Post-Op Knees" And I set the same permission to Write at clinic scope "Downtown Clinic" Then the editor displays Read at "Post-Op Knees" and Write at "Downtown Clinic" for that permission And saving applies these overrides to the clinic's effective permissions
Clinic-Level Overrides Do Not Modify Global Template
Given two clinics (Clinic A and Clinic B) share the same default role blueprint When Clinic A saves an override for Permission "Export Patient Data" to Disabled Then the global template's permissions remain unchanged And Clinic B's effective permissions remain at the default And an override record exists for Clinic A for that permission
Risk Indicators and Policy Hints for PHI Exposure
Given a permission classified with PHI risk metadata exists When I enable a high-risk action (e.g., Export) for that permission Then a visible risk indicator with severity label "High" appears next to the toggle And a policy hint is displayed with a one-line rationale and a link to the full policy And the indicator and hint disappear when the permission is reverted to a compliant state
Dependency Validation and Auto-Resolution
Given "Edit Treatment Plan" depends on "View Treatment Plan" When I enable "Edit Treatment Plan" while "View Treatment Plan" is disabled Then I am prompted to also enable "View Treatment Plan" And if I confirm, both permissions are enabled And if I decline, "Edit Treatment Plan" remains disabled When I attempt to save with any unmet dependencies Then the save is blocked and a list of missing prerequisites is shown
Hard Stops for Non-Compliant Combinations
Given one or more selected permissions violate defined compliance rules When I attempt to save the changes Then the save is prevented And each violating permission is listed with the specific rule breached And no partial changes are applied until all violations are resolved
Overrides Persist Across Global Blueprint Updates
Given Clinic A has saved overrides for a role And the global blueprint for that role is updated When Clinic A opens the role again Then Clinic A's overrides remain intact and effective And a banner indicates the global template changed with a link to review differences
Diff Viewer of Clinic Overrides vs Defaults
Given Clinic A has at least one override on a role When I open the diff viewer Then the viewer highlights Added, Removed, and Changed permissions by scope And I can filter the diff by scope (cohort, clinic) and permission category And I can export the diff as JSON and PDF
Fast Assignment & Bulk Apply
"As a clinic admin, I want to assign roles in bulk during onboarding so that I can get my whole team set up quickly."
Description

Enable single- and bulk-assignment of blueprints during user invite, CSV import, and directory sync. Provide smart suggestions based on job title, email domain, or NPI, with one-click apply and undo. Support bulk updates for existing users, cohort-based assignment (e.g., assign caregiver access to a patient group), and API endpoints/Webhooks to automate assignments from external HR or EHR systems.
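The smart-suggestion logic could be sketched as a cascade over the available signals. The priority order shown here (job title, then NPI taxonomy, then email domain) is an assumption consistent with, but not explicitly stated by, the acceptance criteria; the mapping tables are illustrative and would be clinic-configurable.

```python
# Illustrative mapping tables, not a shipped rule set.
TITLE_MAP = {"physical therapist": "PT", "physical therapy assistant": "PTA"}
TAXONOMY_MAP = {"physical therapy assistant": "PTA"}

def suggest_blueprint(job_title=None, npi_taxonomy=None,
                      email=None, domain_map=None):
    """Suggest a role blueprint from the strongest available signal;
    return None when nothing matches (no suggestion shown)."""
    if job_title and job_title.lower() in TITLE_MAP:
        return TITLE_MAP[job_title.lower()]
    if npi_taxonomy and npi_taxonomy.lower() in TAXONOMY_MAP:
        return TAXONOMY_MAP[npi_taxonomy.lower()]
    if email and domain_map:
        domain = email.rsplit("@", 1)[-1].lower()
        return domain_map.get(domain)
    return None
```

Only deterministic matches like these would qualify for the "high-confidence" bulk-apply path in the CSV import flow.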

Acceptance Criteria
Single Invite: Smart Suggestion + One-Click Apply/Undo
Given an admin invites a new user with job_title = "Physical Therapist"; When the invite form loads; Then the "PT" blueprint is preselected and labeled "Suggested". Given the job_title does not match any mapping; When the email domain matches a clinic domain mapped to "PTA"; Then "PTA" is suggested as the top option. Given a valid NPI resolves taxonomy "Physical Therapy Assistant"; When suggestions are generated; Then "PTA" appears as the top suggestion. When the admin clicks "Apply"; Then the blueprint is assigned and persisted within 1 second and a success notification is displayed. When the admin clicks "Undo" within 5 minutes of Apply; Then the assignment is reverted and the original state is restored without residual permissions; and both actions are recorded in the audit log.
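The suggestion priority above (exact job title, then email domain, then NPI taxonomy) can be sketched as a simple resolver. The mapping tables, field names, and blueprint identifiers below are illustrative placeholders, not MoveMate's actual schema:

```python
# Hypothetical mapping tables; real deployments would load these per clinic.
TITLE_MAP = {"physical therapist": "PT", "physical therapy assistant": "PTA"}
DOMAIN_MAP = {"clinicA.com": "PTA"}
TAXONOMY_MAP = {"Physical Therapy Assistant": "PTA"}

def suggest_blueprint(job_title=None, email_domain=None, npi_taxonomy=None):
    """Resolve a suggested blueprint in priority order: exact title match,
    then email-domain mapping, then NPI taxonomy. None means no suggestion."""
    if job_title and job_title.lower() in TITLE_MAP:
        return TITLE_MAP[job_title.lower()]
    if email_domain and email_domain in DOMAIN_MAP:
        return DOMAIN_MAP[email_domain]
    if npi_taxonomy and npi_taxonomy in TAXONOMY_MAP:
        return TAXONOMY_MAP[npi_taxonomy]
    return None
```

A deterministic priority order like this is what makes a suggestion "high-confidence" and therefore safe to bulk-apply.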
CSV Import: Bulk Assign with Preview & Errors
Given a CSV containing columns email, job_title, npi; When the admin maps columns and uploads; Then a preview shows per-row suggested blueprint and any validation errors. When the admin selects "Apply to all rows with High-confidence suggestions" and clicks Apply; Then only rows with deterministic mapping rules (exact title/domain/NPI match) are assigned. When Apply runs on 10,000 valid rows; Then processing completes at >= 1,000 assignments/min with 0 duplicate users and 0 duplicate assignments. Then rows with errors are skipped, surfaced with row numbers and reasons, and an error CSV is downloadable. Given the same CSV is re-uploaded with the same idempotency key; When Apply is executed again; Then no additional users or assignments are created (idempotent).
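The idempotent re-upload behavior can be sketched by keying each row's work on (idempotency key, row identity). The storage shape and field names here are assumptions for illustration; a real system would persist the applied set transactionally:

```python
def apply_csv(rows, idempotency_key, applied):
    """applied: set of (idempotency_key, email) tuples already processed.
    Returns the list of emails assigned in this run; re-running with the
    same key and rows assigns nothing new (idempotent)."""
    assigned = []
    for row in rows:
        marker = (idempotency_key, row["email"])
        if marker in applied:
            continue  # already applied under this key: skip, no duplicates
        applied.add(marker)
        assigned.append(row["email"])
    return assigned
```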
Directory Sync: Auto-Assign via Attribute Rules
Given SCIM provisioning delivers a user with title = "PT" and email domain mapped to "clinicA.com"; When the next sync cycle occurs; Then the "PT" blueprint is assigned within 10 minutes. Given a user is removed from a "Caregivers" directory group that maps to the "Caregiver" blueprint; When sync runs; Then the "Caregiver" blueprint is removed unless explicitly retained by another rule. When the same attribute payload is received repeatedly; Then assignments remain unchanged (idempotent) and no duplicate audit entries are created. Then every assignment/removal from sync includes correlation_id, source = "SCIM", and appears in the audit log.
Bulk Update: Filtered Replace and Conflict Handling
Given an admin filters users where job_title contains "assistant" and location = "Austin"; When "Select all" and "Replace blueprint with PTA" are chosen; Then the selected users’ blueprints are replaced and prior elevated permissions are revoked. Before execution; Then the UI displays counts: total selected, to be changed, conflicts (e.g., suspended users), and provides a downloadable conflict report. When executing a batch of up to 5,000 users; Then completion occurs within 5 minutes; transient failures are retried up to 3 times; no user ends in a partially updated state. After completion; Then a success summary shows changed count, skipped count, and a link to undo.
Cohort Assignment: Caregiver Access to Patient Group
Given a patient cohort "Post-Op Knee 2025" exists and caregivers A and B are selected; When assigning the "Caregiver" blueprint scoped to that cohort; Then caregivers only see patients in that cohort and cannot access other patient charts. When cohort membership changes (e.g., a patient is removed); Then caregiver access to that patient is revoked within 2 minutes across web and mobile. When the cohort-scoped assignment is removed; Then the caregivers lose access within 2 minutes and audit logs record patient IDs removed. Then no PHI beyond minimum necessary (view-only unless the blueprint includes edit) is accessible to caregivers.
API & Webhooks: Automated Assignments
Given a client with OAuth2 scope roles.assign and an HMAC webhook secret configured; When POST /api/v1/role-assignments with {user_id, blueprint_id, idempotency_key} is sent; Then response is 201 with assignment_id; and a repeated POST with the same idempotency_key returns 200 with no duplicate assignment. When an assignment is created, updated, or removed; Then a webhook event role.assignment.changed is delivered within 30 seconds; failures are retried with exponential backoff up to 24 hours; each event carries a valid signature header. When rate limit 600 requests/min is exceeded; Then the API returns 429 with a Retry-After header; subsequent requests after the window succeed. Then API error responses include stable error codes and fields (code, message, request_id) for cases such as user_not_found, invalid_blueprint, and forbidden.
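The "valid signature header" requirement above implies HMAC signing of the webhook body with the configured secret. A minimal sketch, assuming HMAC-SHA256 over the raw payload (header name and encoding are not specified by the criteria):

```python
import hashlib
import hmac

def sign_event(secret: bytes, body: bytes) -> str:
    """Compute the hex HMAC-SHA256 signature the sender attaches to the event."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_event(secret: bytes, body: bytes, signature: str) -> bool:
    """Receiver-side check; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign_event(secret, body), signature)
```

Receivers should verify against the raw request bytes before parsing JSON, since re-serialization can change byte order and break the signature.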
Audit, Undo, and Access Safety Nets
For every assignment operation (invite, CSV, sync, API); Then an immutable audit record captures actor (or system), timestamp, source, before/after roles, scope, and batch_id, and is queryable by time range and user. Given a bulk operation completed in the last 24 hours; When the admin clicks "Undo"; Then all changes from that operation are rolled back atomically; failures are retried and surfaced in a detailed report. After any assignment; Then the user's effective permissions exactly match the target blueprint and scope; any extra permissions are removed; a verification job runs and logs success per user. Given a non-Role Admin attempts any assignment action; When the action is attempted; Then the request is denied with 403 and an audit entry is created.
Permission Matrix & Scope Controls
"As a security-conscious admin, I want a clear permission matrix and scoping options so that each role only sees the patients and data they should."
Description

Deliver a human-readable permission matrix mapping capabilities to each blueprint (e.g., View Patient Profile, View CV Rep Counts, Edit Exercise Plan, Annotate Form Errors, Send Nudge, Access Billing, Export Reports). Support fine-grained scoping by clinic, therapist, patient cohort, and time window, enforcing minimum-necessary access with row-level security. Expose previews ('impersonate') to validate what a role sees across dashboards, exercise videos, and adherence nudges before rollout.

Acceptance Criteria
Default Permission Matrix for Core Blueprints
Given the system has default blueprints PT, PTA, Caregiver, and Payer When an admin opens the Permission Matrix Then the matrix lists capabilities [View Patient Profile, View CV Rep Counts, Edit Exercise Plan, Annotate Form Errors, Send Nudge, Access Billing, Export Reports] with human-readable labels and per-blueprint Allowed/Denied indicators And the default mapping is:
PT [Allow: View Patient Profile, View CV Rep Counts, Edit Exercise Plan, Annotate Form Errors, Send Nudge; Deny: Access Billing, Export Reports]
PTA [Allow: View Patient Profile, View CV Rep Counts, Annotate Form Errors, Send Nudge; Deny: Edit Exercise Plan, Access Billing, Export Reports]
Caregiver [Allow: View Patient Profile (assigned patient only), View CV Rep Counts (assigned patient only); Deny: Edit Exercise Plan, Annotate Form Errors, Send Nudge, Access Billing, Export Reports]
Payer [Allow: Access Billing, Export Reports; Deny: View Patient Profile, View CV Rep Counts, Edit Exercise Plan, Annotate Form Errors, Send Nudge]
And each capability includes a tooltip that describes its effect and scope rules in plain language
Clinic and Therapist Scope Controls
Given the PT blueprint is scoped to Clinic A and to "own caseload" by default When a PT from Clinic A signs in Then they see only patients in Clinic A who are assigned to them And the option to expand scope to "Clinic A (all therapists)" is disabled unless explicitly granted in the role settings And attempting to access a patient from Clinic B via deep link returns 403 and displays "Access denied by scope"
Cohort and Time-Window Scoping
Given the user sets scope to patient cohort "Post-Op Knee" and time window "Last 30 days" When viewing dashboards, exercise videos, adherence nudges, and preparing an export Then all lists, aggregates, and media are limited to patients tagged "Post-Op Knee" within the last 30 days And data outside the selected cohort or time window is not displayed or included in exports And changing the time window to "Last 7 days" updates all visible results and export previews within 2 seconds
Row-Level Security Enforcement Across Surfaces and API
Given the user lacks access to patient P123 due to role scope When they attempt to view the patient profile via UI, open an exercise video URL directly, read a nudge thread by ID, or call GET /patients/P123 Then each attempt is blocked with HTTP 403 and no patient-identifiable data is returned And aggregate widgets and totals exclude P123's data consistently across dashboards and exports
Impersonation Preview Read-Only Validation
Given an admin selects "Preview as PTA" within Clinic A When impersonation mode starts Then a persistent banner shows "Impersonating PTA in Clinic A" with an explicit exit control And all pages render exactly what the PTA role's permissions and scope allow And write actions (edit exercise plan, annotate form errors, send nudge, access billing, export reports) are disabled in UI and rejected server-side And exiting preview restores the admin's permissions And an audit log records impersonation start/stop, actor, role, clinic, and timestamps
Role Assignment Speed and Anti-Sprawl Safeguards
Given a new user is created and assigned the PT blueprint via the assignment wizard When the admin completes assignment Then the role is applied with default permissions in 3 clicks or fewer and within 15 seconds excluding network latency And default scope is minimum-necessary (selected clinic, therapist scope = own caseload, no cohort/time window pre-expanded) And a "changes preview" displays the capability diff versus defaults prior to save And only capabilities explicitly toggled by the admin are granted; no additional capabilities are implicitly enabled
Billing and Export Access Restrictions with Scope
Given a PT is logged in When navigating to Billing or Export Reports Then both sections are hidden in navigation and direct URL access returns HTTP 403 Given a Payer is logged in with scope Clinic A and time window "Last Quarter" When opening Billing and exporting a report Then the export contains only rows for Clinic A within the selected window and excludes patient PII fields beyond permitted billing identifiers And export attempts outside the current scope or time window are disallowed
Audit Trails & Change History
"As a compliance officer, I want a full audit trail of role changes so that we can satisfy HIPAA audits and detect risky changes."
Description

Record immutable audit logs for blueprint creation, edits, assignments, scope changes, and deletions, including actor, timestamp, before/after values, reason note, and originating IP/device. Provide searchable filters, export to CSV/SIEM, and alerts for anomalous changes (e.g., sudden grant of export permissions). Surface per-user access history to support HIPAA audits and incident response.

Acceptance Criteria
Immutable Log Entry on Blueprint Changes
Given a permitted user creates, edits, assigns, changes scope of, or deletes a Role Blueprint When the operation is successfully committed Then exactly one audit record is appended and is immutable (no updates or deletes allowed) And the record contains: action_type, actor_id, actor_role, target_type=role_blueprint, target_id, target_name, timestamp (UTC ISO-8601), before_values, after_values, reason_note, ip_address (normalized), device_info/user_agent, request_id/correlation_id And attempts to modify or delete the audit record return 403 and are themselves audited And the record is queryable via UI and API within 5 seconds of commit
Search and Filter Audit Trail
Given audit records exist for Role Blueprint activities When a user with view permissions applies filters (date range, action_type, actor, target_id/name, field_changed, ip_address, device, has_reason_note, text search on reason_note/target_name) Then the results contain only records matching all filters And results are sortable by timestamp ascending/descending and paginated with stable cursors And for a filter returning ≤10,000 records, the first page loads in ≤2 seconds And exporting respects the same filters (see export criteria)
CSV and SIEM Export of Audit Logs
Given a user with audit_export permission requests an export for a specified filter and time range When CSV export is initiated Then a UTF-8 CSV is generated with headers: action_type, actor_id, actor_role, target_type, target_id, target_name, timestamp_utc, before_values, after_values, reason_note, ip_address, device_info, request_id And the CSV row count equals the number of records exported and timestamps are UTC ISO-8601 And the export event (who, when, filter, row_count) is itself audited And users without audit_export receive 403 and the denial is audited Given SIEM streaming is configured with endpoint and HMAC secret When an audit record is created Then it is delivered to the SIEM within 60 seconds with HMAC-SHA256 signature and retry with exponential backoff up to 24 hours on failure
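The "exponential backoff up to 24 hours" retry requirement can be expressed as a precomputed delay schedule. The 30-second base delay is an assumption (the criteria only fix the delivery SLA and the 24-hour ceiling):

```python
def retry_delays(base=30, cap_total=24 * 3600):
    """Return an exponential backoff schedule (in seconds) whose cumulative
    wait stays within the 24-hour retry window; each delay doubles the last."""
    delays, total, d = [], 0, base
    while total + d <= cap_total:
        delays.append(d)
        total += d
        d *= 2
    return delays
```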
Anomalous Permission Change Alerting
Given alerting is enabled and recipients are configured When a Role Blueprint change grants or widens permissions matching a high-risk rule (e.g., adds audit_export or expands patient_data_export or elevates scope from clinic to organization) Then an alert is generated within 60 seconds and delivered to all channels (email and SIEM webhook) with actor, target, before/after, timestamp, ip_address And duplicate alerts for the same actor-target-action within 5 minutes are deduplicated And alert emission and any acknowledgements are audited And disabling an alert rule requires a non-empty reason_note and is audited
Per-User Access History View
Given a compliance officer opens the Access History for a specific user When the timeline is requested for a date range Then the view lists all Role Blueprint assignments, removals, scope changes, and permission grants affecting that user with actor, timestamp (UTC), before/after, ip_address, device_info And results are filterable by action_type and exportable (subject to export permission) And users without access_history_view for others receive 403; users may view their own history And the first page (≤100 items) loads in ≤2 seconds; pagination is supported for larger histories
Reason Note Enforcement for High-Risk Changes
Given a user attempts a high-risk Role Blueprint change (granting export permissions or widening scope) When submitting the change via UI or API Then a non-empty reason_note of at least 10 characters is required; otherwise the change is rejected with 400 and validation message And the captured reason_note is stored in the audit record and is immutable And low-risk changes allow an optional reason_note
Tamper-Evidence and Integrity Verification
Given the audit store is append-only with chained content hashes When an integrity check is executed via admin API Then the API returns status OK with latest chain hash and coverage window if no tampering is detected And any gap or hash mismatch returns status FAIL and emits a critical alert And a daily automated integrity check runs and its result (OK/FAIL, timestamp) is audited
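A chained content hash ties each audit record to its predecessor, so any edit or gap breaks every subsequent link. A minimal sketch, with record shape and genesis value as illustrative assumptions:

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Hash of the previous link concatenated with a canonical record encoding."""
    payload = prev_hash.encode() + json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def verify_chain(records, hashes, genesis="0" * 64):
    """Recompute each link from its predecessor and compare with the stored
    hash; return (ok, first_bad_index)."""
    prev = genesis
    for i, (rec, stored) in enumerate(zip(records, hashes)):
        expected = chain_hash(prev, rec)
        if expected != stored:
            return False, i  # tamper or gap detected at position i
        prev = expected
    return True, None
```

The daily integrity job described above would run `verify_chain` over the coverage window and alert on any FAIL.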
Blueprint Versioning & Rollback
"As an admin, I want to version and rollback role blueprints so that I can safely iterate without disrupting care."
Description

Version every blueprint and clinic override with semantic versioning, change notes, and impact preview listing affected users and permissions. Allow staged rollouts, scheduled effective dates, and instant rollback to prior versions. Provide a migration assistant to reconcile conflicts between new defaults and clinic overrides, with safe defaults that preserve least-privilege.

Acceptance Criteria
Publish New Blueprint Version with Semantic Versioning
Given I have permission to manage role blueprints or clinic overrides When I publish a change and provide a version in the format MAJOR.MINOR.PATCH and change notes of at least 10 characters Then the system validates the version matches ^[0-9]+\.[0-9]+\.[0-9]+$ and is greater than the latest released version for that item And duplicate or out-of-order version numbers are rejected with an actionable error And the version, notes, author, and timestamp are saved immutably and appear in version history with status "Released"
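The version gate above (format check plus strictly-greater ordering) can be sketched directly from the stated regex. Note the comparison must be numeric per component, so "1.10.0" correctly sorts above "1.9.0":

```python
import re

SEMVER = re.compile(r"^[0-9]+\.[0-9]+\.[0-9]+$")  # pattern from the criteria

def accept_version(candidate: str, latest: str) -> bool:
    """Accept only a well-formed MAJOR.MINOR.PATCH that is strictly greater
    than the latest released version, compared numerically per component."""
    if not SEMVER.match(candidate):
        return False
    return tuple(map(int, candidate.split("."))) > tuple(map(int, latest.split(".")))
```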
Impact Preview Before Publishing
Given a pending publish for a role blueprint or clinic override When I open the Impact Preview Then the preview lists the total number of affected users, their identifiers, and the exact permission additions and removals by role And any permission escalation relative to current effective permissions is explicitly flagged And Publish remains disabled until I acknowledge the preview
Staged Rollout to Pilot Cohort
Given a new released version exists When I configure a staged rollout by selecting a pilot cohort (by user IDs or group tag) and a rollout start time Then only the selected cohort is migrated to the new version at the configured time And non-cohort users remain on their current effective version And rollout status displays migrated and remaining user counts, updating at least every 60 seconds And I can pause, expand, or stop the rollout without reverting already migrated users
Scheduled Effective Date Activation
Given a new version is ready to be applied When I schedule an effective date and time in the clinic’s local timezone Then the version becomes effective at that time, and targeted users’ permissions reflect the new version within 5 minutes And scheduling a time in the past is rejected with a clear error And admins can reschedule or cancel before the effective time with actions recorded in the audit log
Instant Rollback to Prior Version
Given a newer version is currently effective When I select a prior version and confirm Rollback Then the system reverts affected users to the selected prior version and removes any permissions introduced after that version within 2 minutes And a rollback audit entry is created with initiator, reason, target version, and affected user count And the version that was rolled back remains in version history and can be re-applied later
Migration Assistant Conflict Resolution
Given a new default blueprint version conflicts with one or more clinic overrides When I launch the Migration Assistant Then it lists each conflict with current effective permission, proposed change, and a safe default that preserves or reduces access (never escalates) And unresolved conflicts block publishing with a visible count of remaining items And the final resolution summary is stored with the version and included in the audit log
Version Diff and Change Notes Visibility
Given two versions of a blueprint or override When I view their diff Then I see a human-readable list with counts of added and removed permissions by role, and unchanged items are collapsed but expandable And change notes, author, and timestamp for each version are displayed and searchable And the diff view is accessible from version history and impact preview
External Access via Secure Invites
"As a physical therapist, I want to grant a caregiver read-only, patient-scoped access via a secure invite so that they can support home exercises without exposing other data."
Description

Provide secure, time-bound, patient-scoped access for caregivers and payers via magic-link invitations with optional 2FA and automatic expiry. Enforce masked PHI where appropriate, watermark sensitive screens, and limit actions to read-only or predefined tasks (e.g., attest adherence, view progress trends). Allow revocation, refresh, and audit of invite status, and ensure experiences are optimized for mobile web so external users can act without full app enrollment.

Acceptance Criteria
Magic-Link Invite Creation (Patient-Scoped, Time-Bound)
Given a clinician with PT or PTA Role Blueprint selects a single patient and an external role (Caregiver or Payer) When they create an invite with an expiry between 1 and 30 days and select allowed tasks (read-only, attest adherence, view progress trends) Then the system generates a single-use, patient-scoped magic link tied to the chosen role and expiry And stores only a hashed token server-side and sets invite status to "Pending" And delivers the invite via the chosen channel (email or SMS) within 60 seconds And the invite message contains no PHI beyond clinic name and masked patient identifier (e.g., "J. Doe") And the link cannot be used to access any patient other than the selected patient
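The "stores only a hashed token server-side" and single-use requirements can be sketched as follows; function names and the digest choice (SHA-256) are illustrative assumptions:

```python
import hashlib
import secrets

def issue_token():
    """Generate a raw token for the magic link and a digest for storage.
    The raw value is sent to the invitee; only the digest is persisted."""
    raw = secrets.token_urlsafe(32)  # ~256 bits of entropy, URL-safe
    digest = hashlib.sha256(raw.encode()).hexdigest()
    return raw, digest

def redeem(raw, stored_digests):
    """Single-use redemption: remove the digest on first success so a
    replayed link fails thereafter."""
    digest = hashlib.sha256(raw.encode()).hexdigest()
    if digest in stored_digests:
        stored_digests.remove(digest)
        return True
    return False
```

Storing only the digest means a database leak does not expose usable links, since the raw token cannot be recovered from its hash.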
Optional 2FA on Magic-Link Access (Mobile Web)
Given an invite is configured with 2FA required by policy or toggled on by the inviter When the invitee opens the magic link on mobile web Then the system sends a 6-digit OTP to the invitee's contact and prompts for entry And the OTP expires in 5 minutes; maximum 5 attempts before a 15-minute lockout And on successful OTP entry, a session scoped to the patient and role is established and the magic link becomes unusable And when 2FA is disabled, the session is established immediately upon link open with the same scoping And any attempt to reuse the link after first successful session or after expiry returns HTTP 401
PHI Masking and Watermarked Sensitive Views
Given an external user is authenticated via a valid magic link When viewing any screen that includes PHI Then the patient identity is masked per role: Caregiver sees first name + last initial; Payer sees initials only; DOB reduced to year And a persistent diagonal watermark with invitee identifier (email/phone) and timestamp is rendered on all sensitive screens and generated PDFs And data export, download, and share actions are disabled for external users And screenshots or print-to-PDF of sensitive screens include the watermark
Role-Based Read-Only and Task-Limited Permissions
Given Role Blueprints are applied to invites When a Caregiver accesses the session Then they have read-only access to the exercise plan, instructions, and progress trends and may submit adherence attestations only And they cannot edit plans, view clinician notes, or access other patients; disallowed actions return HTTP 403 and are logged When a Payer accesses the session Then they see de-identified progress and utilization trends only (no exercise videos, no clinician notes, no raw media) and cannot submit attestations And any API request including a different patient_id is rejected with HTTP 403
Invite Expiry, Revocation, and Refresh Controls
Given an invite exists When the expiry time elapses Then any session created from that invite is invalidated within 60 seconds and the invite status transitions to "Expired" When a clinician revokes an invite Then active sessions are terminated within 60 seconds, the link is invalidated immediately, and status transitions to "Revoked" When a clinician refreshes an invite Then a new invite record is created with status "Pending", the prior invite is set to "Revoked", and the prior link is invalidated immediately And the UI displays remaining time to expiry in the clinic time zone with minute-level accuracy
Invite and Access Audit Logging
Given audit logging is required for compliance When an invite is created, sent, opened, 2FA attempted, session established, permission denied, attestation submitted, revoked, expired, or refreshed Then an immutable audit entry is recorded with timestamp (UTC), actor (user id or invitee), invite id, patient id, role, event type, and metadata (excluding OTP value) And authorized clinic users can filter audit entries by date range, patient, invite status, and role and export results to CSV And audit events are queryable within 5 seconds of occurrence
External Tasks and Mobile Web Performance
Given an external user on iOS Safari or Android Chrome opens a valid session over a 4G connection (≥10 Mbps) When the external dashboard loads Then Largest Contentful Paint ≤ 2.5s and Time to Interactive ≤ 3.5s on a mid-tier device, and primary tap targets are ≥ 44px with accessible labels When a Caregiver submits an adherence attestation for a specified date range (max 14 days) with optional comments Then the submission validates required fields, prevents duplicate attestations for the same range, and updates the clinician dashboard within 60 seconds When a Payer views progress trends Then the chart displays last 30 days of adherence % and rep totals aggregated by week, with PHI masked per role, and no app enrollment is required

Scoped Share

Time-boxed, context-limited access links that expose only a specific program, date range, or metric set. Perfect for payers or temporary caregivers—easy to grant, auto-expires, and revocable with one tap to prevent lingering access.

Requirements

Scoped Link Creation & Preview
"As a physical therapist, I want to generate a link that only shows a patient’s ACL rehab program metrics for the last 30 days so that a payer can review progress without seeing unrelated data."
Description

Enable clinicians and admins to generate time-boxed share links restricted to a specific patient, program(s), date range, and metric set. Include presets (e.g., last 30 days adherence, Program “ACL Phase 2”) and custom scopes. Allow optional usage limits and download permissions (view-only, CSV export allowed/blocked). Provide a preview pane showing exactly what recipients will see before issuing the link. Links carry signed, non-guessable tokens and human-readable labels. Integrates with existing program and metrics models and respects patient consent flags.

Acceptance Criteria
Create Link with Preset: Last 30 Days Adherence
Given I am a clinician with access to patient P When I select the preset "Last 30 days adherence" for patient P and open Preview Then the scope is set to patient=P, dateRange=[today-30d,today], metrics=[Adherence], programs=all programs assigned to P overlapping the date range And the UI fields reflect these values And once issued, the link returns only adherence metrics within the date range for those programs in both UI and CSV (if allowed)
Custom Scoped Link: Patient + Program + Date Range + Metrics
Given patient P has programs A (e.g., "ACL Phase 2") and B When I select program=A, date range R, and metrics set M and issue the link Then recipients can access only data for patient P within program A and date range R, for metrics in M, via both UI and API And requests for any data outside that scope are filtered or denied with 403 And invalid selections (program not assigned to P, unknown metric, end before start) are rejected with inline validation and no link is created
Usage Limits and Download Permissions
Given I set a usage limit N=5 and Download permission="View-only (CSV blocked)" When the link is accessed 5 times total Then the 6th attempt is denied with message "Link usage limit reached" and returns HTTP 429 with code LINK_USAGE_LIMIT_REACHED And CSV export buttons are hidden for recipients and CSV endpoints return 403 if called directly And if Download permission="CSV allowed", recipients can download a CSV containing only scoped data and columns defined by the selected metrics
Preview Pane Parity Before Issuance
Given I have configured the scope and permissions for a link When I open the Preview pane Then the dataset and controls in Preview exactly match what recipients will see after issuance And changing any scope parameter updates the Preview within 500 ms and reflects the change And upon first recipient access, the payload (data + permissions) matches the Preview payload signature generated at issuance
Signed, Non-Guessable Token and Label
Given I click "Create Link" Then the generated URL contains a URL-safe token with at least 128 bits of entropy and a server-verifiable signature And any tampering with the token or scope parameters invalidates the link and returns 401/403 without disclosing scope details And a human-readable label (3–60 characters) is required and displayed in the share management list and Preview
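A token meeting the entropy and tamper-evidence requirements above can be sketched as a signed envelope: the scope is serialized with a random nonce, then HMAC-signed with a server secret so any modification invalidates it. The envelope format and field names are assumptions for illustration:

```python
import base64
import hashlib
import hmac
import json
import secrets

def make_link_token(secret: bytes, scope: dict) -> str:
    """Serialize scope + a 128-bit nonce, then append an HMAC signature."""
    nonce = secrets.token_urlsafe(16)  # 128 bits of entropy
    payload = json.dumps({"n": nonce, "s": scope}, sort_keys=True).encode()
    body = base64.urlsafe_b64encode(payload).decode()
    sig = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def parse_link_token(secret: bytes, token: str):
    """Return the scope if the signature verifies, else None (tampered/forged)."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None
    return json.loads(base64.urlsafe_b64decode(body))["s"]
```

Returning None on any mismatch, without detail, satisfies the requirement that failures not disclose scope information.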
Consent Flag Enforcement
Given patient P has an active consent restriction that disallows sharing When I attempt to create a scoped link for P Then the creation is blocked with a consent-related message and no link is created And any existing scoped links for P deny access within 60 seconds of the restriction being set, returning 403 with code CONSENT_BLOCKED
Expiration and Revocation Behavior
Given a link with end date/time T When current system time reaches or exceeds T Then recipient access is denied within 60 seconds with message "Link expired" and HTTP 410, and the link is marked Expired in management UI And when I tap "Revoke" on an active link, the token becomes invalid within 10 seconds; new requests are blocked immediately and open sessions lose access on next action or refresh
Server-side Scope Enforcement & Data Minimization
"As a compliance officer, I want shared links to strictly limit access to only the specified datasets so that we reduce PHI exposure and meet regulatory requirements."
Description

Implement a permission layer that enforces link scopes on every API and query path. Constrain data by patient, program, date range, and metric whitelist; redact identifiers not required (e.g., DOB, contact info) unless explicitly included. Tokens map to least-privilege, read-only roles and cannot be escalated by client hints. All exports, screenshot endpoints, and deep links respect the same constraints. Handle pagination and aggregations server-side to prevent overfetching. Ensure HIPAA compliance through PHI minimization, secure token storage, and rotated signing keys.

Acceptance Criteria
Enforce Patient and Program Scope on All Reads
Given a valid Scoped Share token with patient_id=P and program_id=R When the client calls any GET endpoint that returns patient data with any combination of missing, mismatched, or broadened patient_id/program_id filters Then the response contains only records where patient_id=P and program_id=R And requests explicitly targeting a different patient or program return 403 Forbidden And endpoints without explicit filter parameters still return only P and R scoped data And attempts to include multiple patients/programs in a single request return 403 Forbidden
Apply Server-Side Date Range Constraint
Given a Scoped Share token with start_date=S and end_date=E (inclusive) When the client requests time-series, sessions, events, or reports for any date window Then every returned record has a timestamp between S and E inclusive And requests specifying a broader date window are intersected to [S,E] without leaking out-of-range data And pagination beyond the last in-range record returns empty pages (200 with empty result) and never spills past E And aggregated endpoints (e.g., counts, averages) compute only over records within [S,E]
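The intersect-don't-reject behavior above (a broader requested window is clamped to the token's [S, E]) can be sketched as a small helper; the function name is illustrative:

```python
from datetime import date

def intersect_window(req_start: date, req_end: date,
                     scope_start: date, scope_end: date):
    """Clamp a requested window to the token's inclusive [S, E] scope.
    Returns None when there is no overlap, so no out-of-range data leaks."""
    start = max(req_start, scope_start)
    end = min(req_end, scope_end)
    return (start, end) if start <= end else None
```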
Enforce Metric Whitelist
Given a Scoped Share token with a metric whitelist W (e.g., [reps,form_error_count]) When the client fetches metrics, summaries, charts, or exports Then only metrics in W appear in payloads and exports; all other metric fields are omitted And derived metrics are included only if derivable solely from metrics in W; otherwise they are omitted And aggregation endpoints compute only over metrics in W And attempts to request non-whitelisted metrics explicitly return 403 Forbidden or omit them without leakage (implementation-defined, but no disallowed data is returned)
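Whitelist filtering on the response path can be sketched as a field-level projection. Passing through `patient_id` for data linkage follows the pseudonymous-ID allowance in the redaction criteria; treating it as the only pass-through field is an assumption:

```python
def filter_metrics(record: dict, whitelist: set) -> dict:
    """Drop every metric field not in the whitelist; the stable pseudonymous
    linkage ID ('patient_id', an assumption here) passes through."""
    keep = whitelist | {"patient_id"}
    return {k: v for k, v in record.items() if k in keep}
```

Applying this projection server-side, after query scoping, ensures non-whitelisted metrics never reach payloads, exports, or aggregates.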
PHI Redaction and Identifier Minimization
Given a Scoped Share token without explicit identifier allowance When the client accesses any endpoint returning patient or clinician fields Then protected identifiers (e.g., full name, DOB, phone, email, address, MRN, device IDs) are omitted or masked And media and export metadata contain no PHI (e.g., no EXIF with names or device serials) And if the token explicitly whitelists specific identifiers, only those enumerated fields are included; all others remain redacted And responses contain stable pseudonymous IDs required for data linkage (e.g., patient_id), but no unnecessary PHI fields
Read-Only Least-Privilege Token Behavior
Given a Scoped Share token mapped to a read-only role When the client attempts POST, PUT, PATCH, or DELETE to any endpoint Then the server returns 403 Forbidden and no mutation occurs And client-supplied hints (e.g., role=admin, scope=*, X-Privileged headers) do not change effective permissions And token exchange, escalation, or impersonation endpoints reject the token (400/403) and do not mint broader tokens And WebSocket or streaming endpoints (if any) are limited to read-only channels within the same scope
Exports, Screenshots, and Deep Links Respect Scope
Given a Scoped Share token When the client requests CSV/PDF exports, screenshot/image endpoints, or follows deep links to in-app views Then all generated artifacts include only data within the token's patient, program, date, and metric scope And file names and on-artifact labels contain no redacted identifiers by default And deep links to outside-scope resources return 403 Forbidden And revoking the share immediately invalidates further export/screenshot/deep-link requests (subsequent calls return 401/403)
Server-Side Pagination and Aggregations Prevent Overfetch
Given a Scoped Share token with defined scope When the client paginates with arbitrary page_size and page tokens Then the server enforces a max page_size and returns only in-scope records for each page And next/prev page tokens never yield out-of-scope records And total_count and aggregation summaries reflect only in-scope records and are consistent with enumerating all pages And attempts to use large page_size or scan parameters do not increase data returned beyond scope or limits
Secure Token Expiry, Revocation, and Signing Key Rotation
Given a Scoped Share token with exp and a currently active signing key When the token is expired or explicitly revoked Then all requests return 401 Unauthorized with no data leakage And key rotation introduces a new signing key; tokens signed by previous keys validate only until their exp while JWKS (or equivalent) advertises both keys during the grace period And newly issued tokens after rotation are signed by the new key (identifiable by kid or equivalent) And stored tokens (or refresh artifacts, if any) are not retrievable in plaintext via any admin or support API
Auto-Expiry and One-Tap Revocation
"As a therapist, I want to revoke a payer’s access with one tap so that viewing stops immediately when the review is over."
Description

All shared links require an expiration date/time and automatically become invalid when reached. Provide a single-tap revoke control from the patient dashboard and the share management list that immediately invalidates tokens and any active recipient sessions. Support optional short-lived session tokens (e.g., 15 minutes idle timeout) and enforce time restrictions across CDN and cache layers. Emit revocation events to cut off WebSocket/streaming sessions and future downloads.

Acceptance Criteria
Mandatory Expiration on Share Creation
Given a clinician creates a scoped share via UI or API When they attempt to submit without an expiration date/time Then the request is rejected with a validation error and the share is not created Given a clinician sets an expiration date/time in the past or equal to current server UTC When they submit Then the request is rejected with a validation error and the share is not created Given a clinician sets an expiration date/time in the future When they submit Then the share is created, the persisted record includes the exact expiration timestamp (UTC), and the link token is bound to that timestamp
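A sketch of the creation-time validation this criterion describes; the function name and error strings are illustrative:

```python
from datetime import datetime, timezone

def validate_expiry(expires_at, now=None):
    """Share-creation guard: an expiration is required and must be
    strictly later than the current server time, compared in UTC."""
    now = now or datetime.now(timezone.utc)
    if expires_at is None:
        raise ValueError("expiration_required")
    expires_utc = expires_at.astimezone(timezone.utc)
    if expires_utc <= now:
        raise ValueError("expiration_must_be_future")
    return expires_utc   # persisted in UTC and bound to the link token
```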
Auto-Expiry Invalidates Access Across All Layers
Given a shared link whose expiration timestamp has been reached When a recipient attempts to access any protected route, download, or stream using that link Then access is denied immediately (<=1s drift), API responses return 410 Gone, and no new WebSocket/stream connections are permitted Given CDN or intermediate caches previously served content for that link When the expiration time is reached Then subsequent requests through the CDN return 410 Gone and no cached payload is served (token validation prevents stale serves) Given an expired link When attempting to fetch via any previously copied signed URL Then the request fails with 410 Gone and is not served from CDN or browser cache
Patient Dashboard One-Tap Revoke
Given a valid shared link is visible in the patient dashboard When the owner taps the Revoke control once Then the link's token is invalidated server-side, its status updates to Revoked in the UI, and all new requests using that token are denied with 403 Forbidden Given active recipient sessions exist for that token When revocation occurs Then they are terminated within 5 seconds, WebSocket/stream connections are closed, and any in-flight downloads are aborted
Share Management List One-Tap Revoke
Given a valid shared link is visible in the share management list When the owner taps the Revoke control once Then the link's token is invalidated server-side, the item shows Revoked, and all new requests using that token are denied with 403 Forbidden Given the same link appears in multiple views (dashboard and list) When revocation occurs in one view Then the other view reflects the Revoked state within 2 seconds without manual refresh
Idle Timeout for Short-Lived Sessions
Given a recipient is authenticated via a shared link session with idle timeout enabled and set to 15 minutes When no API or WebSocket activity occurs for 15 consecutive minutes Then the session is invalidated and subsequent requests return 401 Unauthorized; existing WebSockets are closed on next heartbeat within 5 seconds Given the recipient performs activity before 15 minutes elapse When they make an API request or exchange a WebSocket message Then the idle timer resets and the session remains valid
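The idle-timeout rule above reduces to a small amount of session state: any activity within the window resets the timer, and a gap at or beyond the limit invalidates the session. The class is a hypothetical sketch using seconds-since-epoch timestamps:

```python
class ShareSession:
    """15-minute idle timeout for a shared-link session (sketch)."""
    IDLE_LIMIT = 15 * 60  # seconds

    def __init__(self, now):
        self.last_activity = now
        self.valid = True

    def touch(self, now):
        """Record an API call or WebSocket message at time `now`.
        Returns False once invalid; the caller responds 401 Unauthorized."""
        if not self.valid or now - self.last_activity >= self.IDLE_LIMIT:
            self.valid = False
            return False
        self.last_activity = now
        return True
```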
Revocation Event Emission and Enforcement
Given a link is revoked by the owner or expires automatically When the revocation is processed Then a revocation event including token identifier, reason (expired|revoked), and timestamp is emitted to the session management subsystem Given the revocation event is emitted When subscribers receive it Then all active sessions for that token are closed and future requests denied within 5 seconds, with success recorded in logs or metrics
Recipient Feedback on Expired or Revoked Link
Given a recipient opens an expired link When the system denies access Then the UI displays a clear non-sensitive message that the link has expired and the API returns 410 Gone with error code link_expired Given a recipient opens a revoked link When the system denies access Then the UI displays a clear non-sensitive message that access was revoked by the owner and the API returns 403 Forbidden with error code link_revoked
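The expired/revoked distinction in the criteria above maps cleanly to a single access check; a possible sketch, with hypothetical field names and revocation taking precedence over expiry:

```python
def check_access(share: dict, now: float):
    """Map link state to the (HTTP status, error_code) pairs used in
    this spec: revoked -> 403 link_revoked, expired -> 410 link_expired."""
    if share.get("revoked_at") is not None:
        return (403, "link_revoked")
    if now >= share["expires_at"]:
        return (410, "link_expired")
    return (200, None)
```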
Recipient Secure Access & Read-Only Viewer
"As a payer reviewer, I want to open a secure link and see only the relevant progress data so that I can make decisions quickly without extra setup."
Description

Provide a friction-light recipient experience that does not require a MoveMate account: open link, verify via email one-time code or sharer-defined passcode, then land on a responsive, read-only viewer. The viewer clearly displays a scope and expiry banner, supports drill-down within allowed programs/dates/metrics, and blocks edits, comments, and uploads. Watermarking and CSV-export disabling are configurable per link. Optimize for mobile and desktop, meet WCAG 2.1 AA accessibility, and hold performance budgets under 2s TTI on 4G.

Acceptance Criteria
Secure Link Verification (Email OTP or Passcode)
Given a recipient opens a valid Scoped Share link that requires verification When the link is configured for email one-time code Then the system sends a 6-digit code to the entered email within 30 seconds And the code is valid for 10 minutes And the recipient can request resend after 30 seconds, up to 3 resends per hour When the correct code is submitted within validity Then access is granted without requiring a MoveMate account When 5 incorrect codes are submitted within 15 minutes Then further attempts are blocked for 15 minutes with a non-enumerating error message And all error messages do not reveal whether an email is registered with MoveMate When the link is configured for sharer-defined passcode Then access is granted only upon exact passcode match When 5 incorrect passcode attempts occur within 15 minutes Then the link is temporarily locked for that client for 15 minutes with guidance to contact the sharer
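The lockout thresholds in this criterion (5 wrong attempts in 15 minutes, 15-minute block) can be sketched as a small guard object. This is illustrative only; a production version would also compare codes with `hmac.compare_digest` and persist state server-side:

```python
class OtpGuard:
    """Sliding-window attempt counter with a temporary lockout (sketch)."""
    MAX_ATTEMPTS, WINDOW, LOCKOUT = 5, 15 * 60, 15 * 60  # seconds

    def __init__(self):
        self.failures = []        # timestamps of recent wrong codes
        self.locked_until = 0.0

    def attempt(self, code, expected, now):
        if now < self.locked_until:
            return "locked"       # non-enumerating error shown to the user
        if code == expected:      # production: hmac.compare_digest
            self.failures.clear()
            return "ok"
        self.failures = [t for t in self.failures if now - t < self.WINDOW]
        self.failures.append(now)
        if len(self.failures) >= self.MAX_ATTEMPTS:
            self.locked_until = now + self.LOCKOUT
            return "locked"
        return "wrong"
```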
Read-Only Viewer and Scope/Expiry Banner
Given a verified recipient lands on the viewer Then no controls for edit, comment, or upload are rendered And write APIs (create/update/delete, comments, uploads) return 403 for the share session And the viewer header displays a non-dismissible banner labeled Read-only And the banner lists the scope summary (program names, date range, metric set) and the exact expiry date/time in the recipient’s local timezone And the banner remains visible on mobile and desktop viewports
Constrained Drill-Down Within Allowed Programs/Dates/Metrics
Given a verified recipient views shared content When navigating via links, filters, search, or deep links Then only items within the allowed programs, dates, and metric set are discoverable and openable And filters and date pickers are constrained to the allowed range And attempts to access items outside the scope show an Access limited by share scope message and do not reveal out-of-scope data And API requests for out-of-scope resources return 403
Export Disabled and Watermarking per Share Settings
Given a share link with CSV export disabled Then any export/download UI is hidden or disabled And calls to export endpoints return 403 Given a share link with watermarking enabled Then a semi-transparent, non-removable overlay watermark stating Shared via MoveMate — [Sharer] — [Link ID or Expiry] is rendered over charts, media, and report surfaces And the watermark persists across scroll, pagination, and zoom, and is present in screenshots and print to PDF Given watermarking is disabled Then no watermark is shown
Viewer Accessibility (WCAG 2.1 AA)
Given the viewer and verification screens are used with keyboard only Then all interactive components are reachable in a logical order with a visible focus indicator And no keyboard traps occur Given a screen reader user navigates the viewer Then landmarks, headings, buttons, links, and form fields have meaningful roles, names, and states and page titles are unique And status, errors, and success messages are announced via aria-live regions Given default color theme Then text and interactive elements meet contrast ratio of at least 4.5:1 (3:1 for large text/icons) Given a 320px wide viewport and 200% zoom Then content reflows without loss of information or functionality and without horizontal scrolling
Performance and Responsive Behavior on 4G
Given a cold load on a mid-range mobile device over 4G (400 ms RTT, 1.6 Mbps down, 750 Kbps up, 4x CPU throttle) Then Time to Interactive is ≤ 2.0 s at p90 across 20 test runs of the default share view And Largest Contentful Paint is ≤ 2.5 s and Total Blocking Time ≤ 200 ms at p90 And total JavaScript payload is ≤ 300 KB compressed and total transfer size is ≤ 1 MB for initial view Given desktop on fast connection Then layout adapts up to 1440 px with no overlapping or truncated UI Given mobile viewport at 320–414 px Then layout adapts with no horizontal scroll and touch targets ≥ 44x44 px
Auto-Expiry and One-Tap Revocation Enforcement
Given a share link has an expiry timestamp When the current time reaches the expiry Then new requests are denied with an Expired link page and HTTP 410, and active viewer sessions for that link are invalidated within 60 seconds And the banner countdown or expiry time matches the enforcement time to within 1 minute Given the sharer revokes the link Then access is revoked within 60 seconds globally; subsequent API/WebSocket calls return 403 and the viewer shows Link revoked And revoked links cannot be reactivated or re-verified
Share Management Console & Templates
"As a clinic admin, I want to manage and templatize shared links so that our staff can grant consistent, policy-compliant access efficiently."
Description

Add an admin view listing all active, scheduled, and expired shares per patient and organization with filters, search, and sorting. Show key metadata (recipient email/label, scope summary, expiry, last access). Enable actions: revoke, extend expiry, clone, edit scope (where safe), and export CSV of the list. Provide reusable templates (e.g., “30-day adherence for payer”) to speed creation and enforce org policy defaults (expiry, verification method, export permissions).

Acceptance Criteria
List, Filter, Search, and Sort Shares
Given there are at least 5,000 shares spanning Active, Scheduled, and Expired across multiple patients and organizations When the admin opens the Share Management Console Then the default view loads in under 2 seconds and displays shares with pagination applied When the admin applies filters (Status=Active, Patient="Jane Smith", Organization="Acme PT", Date Range=Last 30 days) Then only shares matching all selected filters are shown and the result count updates accordingly And clearing filters resets the list to the default view When the admin searches by recipient email or label with a partial, case-insensitive query (e.g., "payer@") Then only shares whose recipient email or label contains the query are returned When the admin sorts by Expiry ascending or Last Access descending Then the list is sorted accurately and deterministically across all pages And the applied sort and filters persist during pagination and when revisiting the console within the same session When no records match the current query Then an empty-state message is displayed with a "Clear filters" action
Share Metadata Visibility
Given the admin is viewing the share list Then each row displays: Recipient (label or email), Scope Summary, Expiry (date and time with timezone), Last Access (date and time or "Never"), and Status (Active, Scheduled, Expired, Revoked) And timestamps appear in the organization’s configured timezone and an ISO 8601 value is shown in a tooltip on hover or focus And truncated values reveal full content on hover or focus When a recipient successfully accesses a share link Then Last Access updates within 60 seconds and reflects the most recent access
Revoke Share
Given an Active or Scheduled share is selected When the admin clicks Revoke and confirms in a modal Then the share’s status changes to Revoked within 3 seconds and the link becomes unusable (HTTP 403 Forbidden, consistent with revoked-link handling elsewhere) And the action is logged with actor, timestamp, and optional reason And the row moves to the Expired/Revoked view and the Revoke action is no longer available
Extend Share Expiry
Given an Active or Scheduled share with an expiry date/time When the admin selects Extend Expiry and chooses a new future date/time that does not exceed the organization’s maximum share duration Then the new expiry is saved and displayed immediately and the link honors the new expiry And attempts to set a past date/time or exceed the maximum show a validation error and prevent save And an audit log entry records the previous and new expiry values
Edit Share Scope (Safe Edit)
Given an Active share with a defined scope (programs, metrics, date range) When the admin opens Edit Scope Then the UI allows narrowing scope (e.g., reduce date range, remove programs or metrics) and disallows broadening scope beyond the original (e.g., adding programs, widening date range) with disabled controls and explanatory tooltips And upon save, the updated scope is persisted and enforced on subsequent link access within 60 seconds And an audit log entry records the exact scope changes And if the edit would broaden scope, Save is disabled
Export CSV of Share List
Given the share list has an applied filter and sort When the admin clicks Export CSV Then a CSV file downloads within 5 seconds containing only the currently filtered result set (up to 10,000 rows) And the CSV includes a header row and columns: Recipient, Scope Summary, Status, Expiry (ISO 8601 UTC), Last Access (ISO 8601 UTC), Created By, Verification Method And the filename is {org-slug}_shares_{UTC-timestamp}.csv And if results exceed 10,000, the export contains the first 10,000 rows and the UI displays a notice to refine filters
Share Templates: Org Defaults and Fast Creation
Given an org admin has created a template named "30-day adherence for payer" with defaults (Expiry=30 days, Verification Method=Email OTP, Export Permission=Allowed) aligned to org policy When a clinician creates a share using this template Then the create form is pre-populated with the template defaults and only requires recipient and patient selection to proceed And the defaults cannot be relaxed below org policy (e.g., verification strength) nor extended beyond the org’s maximum expiry; validation prevents noncompliant values And the created share records the template identifier in the audit log And changing the template later affects only future shares; existing shares remain unchanged
Audit Logging & Compliance Export
"As a privacy officer, I want complete logs of what was shared and when so that we can demonstrate compliance during audits."
Description

Record immutable audit events for share lifecycle (create, update, revoke, auto-expire) and recipient activity (first open, subsequent views, exports, IP, user agent) tied to link ID and patient. Provide exportable logs (CSV/JSON) and a printable compliance summary that describes what was shared, to whom, when, and for how long. Support retention policies and secure storage with tamper-evident hashing. Surface audit summaries in the patient privacy tab.

Acceptance Criteria
Lifecycle Events Logged for Scoped Share Links
Given a clinician creates a Scoped Share link for a patient When the link is created Then an immutable audit event is recorded with fields: event_id, link_id, patient_id, actor_id, actor_role, event_type=create, timestamp (UTC ISO 8601), details, hash, prev_hash And the event is persisted within 2 seconds of action And the event is retrievable via audit API and visible in the patient's Privacy tab Given an existing Scoped Share link is updated (scope, date range, metrics) When the update is saved Then an audit event with event_type=update containing changed fields with previous and new values is recorded and chained to the prior event via prev_hash Given a Scoped Share link is revoked or auto-expires When the status changes to revoked or expires Then a corresponding audit event (event_type=revoke or auto_expire) is recorded and chained, including the reason (if provided)
Recipient Activity Captured Per Access
Given a recipient opens a Scoped Share link for the first time When the content loads successfully Then an audit event with event_type=first_open is recorded with link_id, patient_id, timestamp (UTC ISO 8601), ip, and user_agent, chained via prev_hash Given the same recipient views the link again in a new session When the content loads Then an audit event with event_type=view is recorded with timestamp, ip, and user_agent Given the recipient exports data from the link When an export action is performed Then an audit event with event_type=export is recorded including export_format (csv or json) and record_count And duplicate events for a single page load are prevented via an idempotency key
Exportable Audit Logs (CSV and JSON)
Given a user with audit-export permission opens the audit export for a patient or link When a date range and format (CSV or JSON) are selected and export is requested Then the system generates a file within 10 seconds containing all matching events And CSV exports include a header row and columns: event_id, link_id, patient_id, event_type, actor_role, actor_id, timestamp_utc, ip, user_agent, details, export_format, hash, prev_hash And JSON exports are an array of objects with the same keys in snake_case And records are sorted ascending by timestamp_utc and are complete with no gaps in prev_hash chain for the selected range And a SHA-256 checksum of the exported file is provided for verification
Printable Compliance Summary of Scoped Share
Given a user views the compliance summary for a specific Scoped Share link When the summary is generated Then it displays: patient identifier, link_id, recipient identifier (email or phone), what was shared (program/scope and metrics), created_at (UTC), expires_at (UTC), revoked_at (if any), total opens, last access timestamp, and number of exports And the summary includes the current chain tip hash for tamper-evidence And the layout provides a printer-friendly view and downloadable PDF that renders correctly on A4 and Letter And recipient identifiers are masked (e.g., first and last character visible) unless the user has elevated permissions
Tamper-Evident Hash Chain and Secure Storage
Given any audit event is written When the event is persisted Then the event payload (excluding hash) is hashed with SHA-256 and includes prev_hash to form a forward-linked chain And a verification operation can recompute hashes for a selected range and returns integrity=true when no alteration is detected And if any event is altered or missing in the range, verification returns integrity=false and the index of the first failing event And audit data is encrypted at rest, append-only at the application layer (no updates/deletes), and restricted to service roles; corrections are recorded as compensating events
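The forward-linked SHA-256 chain described above can be sketched as follows; the genesis `prev_hash` of 64 zeros and the helper names are conventions chosen for this sketch, not a specified format:

```python
import hashlib
import json

def _hash(event: dict, prev_hash: str) -> str:
    """SHA-256 over the canonicalized event payload plus prev_hash."""
    payload = json.dumps({**event, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_event(chain: list, event: dict) -> dict:
    """Append-only write: each entry commits to its predecessor's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {**event, "prev_hash": prev, "hash": _hash(event, prev)}
    chain.append(entry)
    return entry

def verify(chain: list):
    """Recompute the chain. Returns (True, None) when intact, or
    (False, index_of_first_failing_event) when tampered or broken."""
    prev = "0" * 64
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k not in ("hash", "prev_hash")}
        if entry["prev_hash"] != prev or entry["hash"] != _hash(body, prev):
            return (False, i)
        prev = entry["hash"]
    return (True, None)
```

Because each hash commits to the previous one, altering or deleting any earlier event invalidates every entry from that point forward, which is what makes the chain tamper-evident.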
Retention Policy Enforcement and Legal Hold
Given a tenant-level retention period is configured (e.g., 6 years) When an event exceeds the retention period and no legal hold is active Then the event is purged within 24 hours by a scheduled process And a purge-summary record (time window, event count, hash of purged batch) is appended to an immutable purge log And admins can preview upcoming purges at least 7 days in advance and export affected records before purge Given a legal hold is enabled for a patient or link When the retention period is exceeded Then no purge occurs until the hold is released, and this is reflected in the purge preview
Audit Summaries Visible in Patient Privacy Tab
Given a staff user with privacy permissions opens a patient's Privacy tab When the tab loads Then an audit summary card for Scoped Share is displayed showing: number of active shares, last lifecycle action (type and timestamp), last recipient access (type and timestamp), and a link to View all audit events And the summary loads within 2 seconds on a 10 Mbps connection And unauthorized users receive a 403 and no audit data is rendered on the client And selecting View all navigates to the full audit list filtered to the patient
Notifications & Expiry Reminders
"As a therapist, I want to be notified when a payer views the data and when access is about to expire so that I can follow up or extend access if needed."
Description

Send configurable notifications to the sharer when a recipient first accesses a link, approaches expiry (e.g., 24 hours before), or encounters a failed verification attempt. Allow optional recipient reminders before expiry with opt-in at share creation. Deliver via in-app, email, and push (where available), with quiet hours and digest options. Notifications link back to the share management console for action.

Acceptance Criteria
Sharer Notified on First Recipient Access
Given an active scoped share is created and delivered to a recipient And the sharer has notifications enabled for "First Access" When the recipient successfully passes verification and first accesses the link Then the sharer receives exactly one notification per share indicating first access And the notification is delivered via all enabled channels (in-app, push, email) with deduplication And the message includes the share name, recipient identifier (if provided), access timestamp, and a deep link to the share management console And subsequent accesses by any verified user for that share do not trigger additional "First Access" notifications
Sharer Expiry Reminder 24 Hours Before End
Given an active scoped share with an expiry timestamp at least 24 hours after creation And the sharer has expiry reminders enabled When the share is within 24 hours of its expiry time in the sharer’s timezone Then a reminder notification is scheduled and sent once at T minus 24 hours And if the share is extended, revoked, or expires before the scheduled send, the reminder is canceled or rescheduled accordingly And the notification includes the exact expiry time, a deep link to extend/revoke, and current access status And shares with a total duration under 24 hours trigger a reminder at T minus 1 hour instead
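The scheduling rule above (T minus 24 hours normally, T minus 1 hour for shares shorter than 24 hours) is a one-line decision; a sketch with epoch-seconds timestamps:

```python
def reminder_time(created_at: float, expires_at: float) -> float:
    """When to send the sharer's expiry reminder: 24h before expiry,
    or 1h before when the share's total duration is under 24 hours."""
    DAY, HOUR = 24 * 3600, 3600
    lead = HOUR if expires_at - created_at < DAY else DAY
    return expires_at - lead
```

If the share is later extended or revoked, the scheduled job would simply be recomputed from the new expiry or cancelled.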
Recipient Pre-Expiry Reminder (Opt-In at Share Creation)
Given the sharer opts in to send recipient reminders before expiry during share creation And the recipient has at least one delivery channel available (email or push) When the share reaches T minus 24 hours to expiry Then the recipient receives a reminder that the link will expire, including the expiry timestamp and link to access And if the recipient unsubscribes or opts out, no further reminders are sent And if the share is revoked or extended prior to sending, the reminder is canceled or updated accordingly
Failed Verification Attempt Alert to Sharer
Given a scoped share requires recipient verification And the sharer has failure alerts enabled When an access attempt fails verification (e.g., mismatched DOB or invalid code) Then the sharer receives a notification within 1 minute containing masked attempt details (attempt count, approximate location, timestamp) and a deep link to revoke the share And multiple failures within 15 minutes are rate-limited to one alert with a count summary And no personally identifiable recipient data beyond masked email/phone is included
Quiet Hours and Digest Delivery
Given the sharer has configured quiet hours and a digest delivery window When a notification would be generated during quiet hours Then push and email are suppressed and the event is added to the next digest And an in-app inbox entry is created immediately without push And the digest is delivered at the configured time window with a count summary and per-event deep links
Channel Preferences and Fallbacks
Given the sharer has set channel preferences (in-app, email, push) And the sharer’s device/platform supports push When a notification event occurs Then the system delivers via all enabled channels with consistent content And if a channel is unavailable (e.g., invalid push token or bounced email), the system retries and falls back to available channels and logs the failure And duplicate notifications are prevented across channels And all notifications include a deep link to the specific share management console view

PHI Redactor

Smart redaction rules that automatically hide sensitive fields (DOB, addresses, clinician notes) when sharing form flags or adherence summaries. Maintain privacy while still giving coaches and payers the data they require to help or authorize care.

Requirements

Role-based Redaction Profiles
"As a clinician, I want to share adherence summaries with a coach without exposing PHI so that my patient’s privacy is preserved and I remain compliant."
Description

Configurable sharing profiles that automatically apply the minimum necessary redaction based on recipient role (e.g., Coach, Payer, Patient, Clinic Admin), purpose of use, and jurisdictional policy (HIPAA, GDPR). The profile engine masks or removes sensitive data (DOB, street address, full name, contact info, clinician free-text notes, raw video, GPS) when generating shareable adherence summaries, form-flag snapshots, dashboards, PDFs, CSVs, and API payloads. Integrates with MoveMate’s sharing flows (link sharing, email, payer portals, FHIR/webhooks) and defaults to the most restrictive profile when ambiguity exists. Supports per-organization defaults, payer contract presets, and automatic aggregation of metrics (e.g., rep totals, adherence rates, form-error counts) to preserve utility without exposing identifiers.
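The profile engine described above amounts to selecting a profile from (role, purpose, jurisdiction) with a most-restrictive fallback, then projecting records onto that profile's allowlist. The profile identifiers below match those used in the acceptance criteria; the field sets themselves are illustrative assumptions:

```python
# Hypothetical profile registry; allowlisted field names are illustrative.
PROFILES = {
    "coach_min_hipaa": {"exercise", "date_range", "rep_total",
                        "adherence_rate", "form_error_count", "patient_alias"},
    "patient_selfcare_default": {"full_name", "dob", "exercise", "date_range",
                                 "rep_total", "adherence_rate", "form_error_count"},
    "most_restrictive": {"exercise", "date_range", "rep_total",
                         "adherence_rate", "form_error_count"},
}

def select_profile(role, purpose, jurisdiction):
    """Default to most_restrictive whenever any input is missing."""
    if not role or not purpose or not jurisdiction:
        return "most_restrictive"
    mapping = {
        ("Coach", "care_coordination", "HIPAA"): "coach_min_hipaa",
        ("Patient", "self_care", "HIPAA"): "patient_selfcare_default",
    }
    return mapping.get((role, purpose, jurisdiction), "most_restrictive")

def redact(record: dict, profile_id: str) -> dict:
    """Drop every field not on the profile's allowlist."""
    allowed = PROFILES[profile_id]
    return {k: v for k, v in record.items() if k in allowed}
```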

Acceptance Criteria
Coach Link Share - Adherence Summary Redaction
Given a patient record contains DOB, street address, full name, email, phone, clinician free-text notes, raw video, and GPS And the sender selects recipient role Coach with purpose-of-use Care Coordination under HIPAA (US) When a shareable adherence summary link and PDF are generated Then the outputs include only exercise names, date ranges, rep totals, adherence rates, and form-error counts with a pseudonymous patient alias (e.g., Patient #1234) And the outputs do not include DOB, street address, full name, email, phone, clinician free-text notes, GPS, or raw video And any form-flag snapshot removes embedded video and excludes location metadata And the applied profile identifier is coach_min_hipaa
Payer Portal Export - Contract Preset with GDPR Overlay
Given an organization has a payer contract preset configured (payer_x_min) And the recipient jurisdiction is EU (GDPR applies) When exporting adherence data to the payer portal as CSV and PDF Then the applied profile equals payer_x_min with GDPR overlay And only allowlisted fields appear: encounter/case ID, pseudonymous patient ID, date ranges, aggregate metrics (rep totals, adherence rates, form-error counts) And DOB is generalized to year-only if permitted by the preset; otherwise excluded And full name, contact info, street address, clinician free-text notes, GPS, and raw video are absent And CSV and PDF contain identical visible field sets as defined by the profile
Patient Share - Preserve Identity, Hide Clinician Notes
Given the recipient role is Patient with purpose-of-use Self-Care under HIPAA (US) When sending a shared adherence summary via email link and PDF attachment Then the patient’s own identifiers (full name, DOB) may be visible And clinician free-text notes are fully redacted And raw video is accessible only after authenticated sign-in; it is not embedded in the PDF or public link preview And GPS data is excluded from all artifacts And the applied profile identifier is patient_selfcare_default
Ambiguous Recipient or Purpose - Most Restrictive Default
Given a share is initiated without a recipient role or purpose-of-use or with conflicting jurisdiction When generating web link, PDF, CSV, and API artifacts Then the system applies the most_restrictive profile by default And outputs include only aggregate metrics (rep totals, adherence rates, form-error counts) and non-identifying exercise labels And no DOB, street address, full name, contact info, clinician free-text notes, raw video, or GPS appear in any artifact And the artifact metadata records profile_id most_restrictive
API Payload Redaction - FHIR/Webhooks Compliance
Given a webhook subscription and a FHIR endpoint are configured for a recipient When adherence summaries and form-flag snapshots are delivered via webhook JSON and FHIR resources Then restricted fields are omitted or masked according to the active profile’s allowlist/denylist And each payload includes redaction_profile_id, policy_basis (e.g., HIPAA, GDPR), purpose_of_use, and jurisdiction in metadata And sample payload validation confirms the absence of DOB, street address, full name, contact info, clinician free-text notes, raw video references, and GPS when prohibited And HTTP logs show the applied profile identifier for the delivery
Per-Organization Defaults and Payer Preset Mapping
Given an organization admin configures default profiles per recipient role and maps payer domains to contract presets When a staff user initiates a share without manually selecting a profile Then the organization’s default for the detected role auto-applies And if the recipient domain matches a mapped payer, the mapped contract preset overrides the role default And the selected profile is displayed in the share confirmation And generated artifacts reflect the selected profile consistently across formats
Cross-Format Redaction Consistency and Metadata Hygiene
Given a specific redaction profile is selected When generating adherence summaries, form-flag snapshots, dashboards, PDFs, CSVs, and API payloads Then the visible field set is consistent across all formats for that profile And no PHI appears in filenames, link URLs, PDF properties, CSV headers, image EXIF, or other metadata And form-flag images include no GPS or face-identifying features And spot checks across three representative patients show zero occurrences of blocked fields
Free-text PHI Detection and Redaction
"As a clinician, I want my notes automatically scrubbed of identifiers before sharing so that I don’t accidentally disclose PHI."
Description

Automated scrubbing of unstructured clinician notes and comments using a hybrid approach (regex patterns + ML/NLP entity recognition) to detect and redact identifiers (names, DOB, addresses, phone, email, MRN, insurance IDs, facility names) while preserving clinical meaning. Provides confidence thresholds, inline redaction tokens (e.g., [REDACTED-DOB]), and a review queue for low-confidence cases. Processes text in real time during share, with options for server-side only processing to avoid exposing PHI to third parties. Maintains original, unshared records securely; only redacted versions leave the boundary. Supports English at launch with locale-aware patterns and extensible dictionaries.
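The regex tier of the hybrid approach can be sketched as follows. This is a minimal illustration with assumed pattern shapes and only a few entity types, not the production pattern set; in the described design an ML/NLP entity-recognition pass runs alongside these patterns.

```python
import re

# Illustrative regex patterns for a few PHI entity types; real patterns
# would be locale-aware and far more exhaustive.
PHI_PATTERNS = {
    "DOB": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
    "PHONE": re.compile(r"\(\d{3}\)\s?\d{3}-\d{4}"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PHI span with an inline [REDACTED-<TYPE>] token."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text
```

Because the tokens contain no digits or `@` characters, re-running `redact` on already redacted text changes nothing, which matches the idempotency criterion below.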

Acceptance Criteria
Real-time share redacts PHI entities with target accuracy and latency
Given a clinician note containing at least one instance of each PHI entity type (name, DOB, address, phone, email, MRN, insurance ID, facility name) and English locale When the note is shared with the PHI Redactor enabled Then every detected PHI instance is replaced inline with the correct token label ([REDACTED-NAME], [REDACTED-DOB], [REDACTED-ADDRESS], [REDACTED-PHONE], [REDACTED-EMAIL], [REDACTED-MRN], [REDACTED-INSURANCE-ID], [REDACTED-FACILITY]) And no unredacted PHI string from the input appears in the outbound payload, UI preview, or share artifact And on the reference evaluation set of 1,000 annotated English notes, micro-averaged recall ≥ 0.90 and precision ≥ 0.95 across the specified entity set And on the same workload, 95th-percentile processing latency for notes up to 5,000 characters is ≤ 400 ms under 50 concurrent share requests And a curated list of clinical terms (e.g., exercise names, body parts, conditions) is preserved unredacted, with an over-redaction (false-positive) rate on this list of ≤ 5%
Inline token formatting preserves readability and structure
Given text containing PHI adjacent to punctuation, parentheses, and line breaks When redaction is applied Then tokens are all-caps, enclosed in square brackets, and use hyphenated entity labels exactly as specified And surrounding punctuation, spacing, and line breaks are preserved (no added or lost characters other than the replaced span) And multiple adjacent entities are replaced by separate tokens without merging And tokenization is idempotent (re-running redaction on already redacted text makes no further changes) And the net change in character count equals the sum of (token length − replaced span length) across all redactions
Configurable confidence thresholds and review queue
Given a global auto-redact confidence threshold defaulting to 0.85 and configurable per tenant between 0.50 and 0.99 When the model detects an entity with confidence ≥ threshold Then it is auto-redacted and no review task is created for that entity When the model detects an entity with confidence < threshold and ≥ 0.50 Then it is redacted and a review task is created capturing entity type, detected span, confidence, and 20 characters of left/right context When the model detects an entity with confidence < 0.50 Then it is redacted and a review task is created and the share artifact is tagged "Contains low-confidence redactions" And reviewer actions (approve redaction, restore text, change label) are audit-logged with timestamp, user ID, before/after values, and reason
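The threshold routing above can be sketched as a small decision function. The function name and returned fields are illustrative, and the 0.50 floor and 0.85 default come directly from the criteria:

```python
def route_detection(confidence: float, threshold: float = 0.85) -> dict:
    """Decide handling for one detected entity (illustrative sketch)."""
    if not 0.50 <= threshold <= 0.99:
        raise ValueError("tenant threshold must be between 0.50 and 0.99")
    # Every detection is redacted regardless of confidence (fail safe);
    # confidence only decides whether a review task and artifact tag are created.
    decision = {"redact": True, "review_task": False, "low_confidence_tag": False}
    if confidence >= threshold:
        return decision
    decision["review_task"] = True
    if confidence < 0.50:
        decision["low_confidence_tag"] = True  # share artifact gets the low-confidence tag
    return decision
```

Note the fail-safe stance: the entity is always redacted; the threshold only governs human review.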
Server-side only PHI processing prevents third-party exposure
Given tenant setting server_side_only=true When redaction runs during share Then no requests containing original or redacted text are made to domains outside the approved allowlist (*.movemate.internal) And only first-party models/rules are invoked, verified by egress logs showing zero calls to third-party ML/NLP services during processing And any attempt to call a disallowed endpoint is blocked and logged with severity=ERROR and correlation ID And the share completes successfully using server-side components only
Only redacted content leaves boundary; originals retained securely
Given a share request containing clinician notes with PHI When the share completes Then the outbound payload, previews, and persisted share artifacts contain only redacted text (no raw PHI) And the original unredacted note remains in the source record encrypted at rest and accessible only to roles with permission notes.view_original And audit logs capture source record ID, redaction version, and a checksum of the redacted output for traceability And an outbound DLP scan on the payload finds zero matches for PHI patterns
English locale-aware patterns for dates, phones, and addresses
Given locale=en-US When redaction runs on text containing "DOB 01/24/1990, phone (415) 555-0199, ZIP 94107" Then DOB, phone, and ZIP are redacted with the correct tokens and non-PHI text remains unchanged Given locale=en-GB When redaction runs on text containing "DOB 24/01/1990, phone 020 7946 0958, postcode SW1A 1AA" Then DOB, phone, and postcode are redacted with the correct tokens and non-PHI text remains unchanged And switching locales changes the pattern matching accordingly while maintaining the same token schema
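A minimal sketch of locale-keyed date patterns, assuming only the two slash-separated formats shown above; production patterns would cover many more formats, plus phones and postcodes:

```python
import re

# en-US reads MM/DD/YYYY, en-GB reads DD/MM/YYYY; both map to the same
# token schema, per the criteria.
LOCALE_DATE_PATTERNS = {
    "en-US": re.compile(r"\b(0[1-9]|1[0-2])/(0[1-9]|[12]\d|3[01])/\d{4}\b"),
    "en-GB": re.compile(r"\b(0[1-9]|[12]\d|3[01])/(0[1-9]|1[0-2])/\d{4}\b"),
}

def redact_dates(text: str, locale: str) -> str:
    """Apply the active locale's date pattern; token schema stays constant."""
    return LOCALE_DATE_PATTERNS[locale].sub("[REDACTED-DOB]", text)
```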
Extensible dictionaries for facility/insurer names with whitelist control
Given an admin uploads a CSV via API containing custom dictionary entries (entity_type, term) and whitelist terms When the upload is accepted Then new dictionary terms are available to the redactor within 5 minutes without service restart And dictionary matches are case-insensitive and whole-word by default, with optional per-term partial-match flags honored And whitelist terms are not redacted even if overlapping with dictionary entries And deleting a term takes effect within 5 minutes and is recorded in the audit log with actor and timestamp
Structured Field Redaction for Exports and APIs
"As a payer reviewer, I want to receive adherence metrics without patient identifiers so that I can authorize care based on evidence while remaining compliant."
Description

Deterministic masking of PHI in structured data models and generated artifacts across the platform, including patient profile fields, exercise session metadata, form-error events, and adherence timelines. Implements a schema-aware redaction layer for all export formats (PDF, CSV, JSON) and integrations (FHIR resources such as Patient and Observation, payer webhooks). Supports hashing or tokenizing patient identifiers, generalizing dates (e.g., month/year only), coarsening locations (city/state only), and aggregating per-period metrics. Ensures consistent policy application across UI previews and downstream data pipelines.

Acceptance Criteria
Deterministic Tokenization of Patient Identifiers
Given a patient with identifier fields (internalId, MRN, phone, email) When data is exported (PDF, CSV, JSON) or sent via API (FHIR resources, payer webhooks) Then each identifier is replaced with a deterministic, irreversible token consistent across all outputs in the same environment And Given the same input identifier When tokenization is executed via UI preview, batch export, and webhook delivery Then the produced token is identical across all pathways And Given two different identifiers When tokenized Then the resulting tokens are distinct (0 collisions across a test set of at least 10,000 identifiers) And Given different environments (e.g., staging vs production) When tokenization is performed Then the same input identifier yields different tokens across environments
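One common way to satisfy all four properties (deterministic, irreversible, collision-resistant, environment-separated) is a keyed HMAC with a per-environment secret. A minimal sketch; the `tok_` prefix and 32-hex-character truncation are illustrative choices, not part of the spec:

```python
import hashlib
import hmac

def tokenize(identifier: str, env_key: bytes) -> str:
    """Deterministic, irreversible token; env_key differs per environment,
    so staging and production yield different tokens for the same input."""
    digest = hmac.new(env_key, identifier.encode("utf-8"), hashlib.sha256)
    return "tok_" + digest.hexdigest()[:32]
```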
Date Generalization to Month/Year
Given datetime fields in patient profiles, exercise session metadata, form-error events, and adherence timelines When exported or shared via API/webhook Then values are generalized to month/year precision (YYYY-MM) with day and time removed And Given FHIR resources requiring date or dateTime When generalized Then the serialized values conform to FHIR partial date/dateTime rules (e.g., YYYY-MM) and pass schema validation And Given timezone or offset components When generalized Then they are omitted from outputs And Given UI previews of exports When compared to the final artifacts Then the generalized dates match exactly
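Generalizing a dateTime to FHIR partial-date precision can be sketched as follows; parsing via ISO 8601 is an assumption about the stored format:

```python
from datetime import datetime

def generalize_date(value: str) -> str:
    """Reduce an ISO 8601 date/dateTime to YYYY-MM, dropping day, time,
    and any timezone offset, per the month/year generalization rule."""
    dt = datetime.fromisoformat(value)
    return f"{dt.year:04d}-{dt.month:02d}"
```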
Location Coarsening to City/State Only
Given address and location fields in patient profiles, session metadata, and events When data is exported or shared Then only city and state/province are retained; street lines, unit, full postal/ZIP code, and geocoordinates are omitted And Given FHIR Patient.address or Observation.location elements When coarsened Then resulting resources remain valid and pass FHIR validation And Given UI previews When compared to generated artifacts Then coarsened location values match exactly
Aggregated Per-Period Adherence Metrics
Given adherence timelines and session events When exporting metrics for external consumers Then only per-period aggregates (daily/weekly/monthly as configured) are included; raw event timestamps and per-rep details are excluded And Given an aggregation period is selected When totals are computed Then counts and durations equal the sum of underlying events for that period (tolerance = 0) And Given FHIR Observation summaries are produced When representing period totals Then values are encoded using appropriate Observation.value[x] types and the effective date is generalized to the period granularity
Cross-Format Policy Consistency and Coverage
Given a redaction policy version V When applied to patient profiles, exercise session metadata, form-error events, and adherence timelines Then identical redaction outcomes occur across PDF, CSV, JSON, FHIR resources, and payer webhooks And Given the same source record rendered in UI preview and delivered via export/API When compared field-by-field Then masked values and field presence/absence are identical And Given the policy is updated from V to V+1 When exports are regenerated Then outputs reflect new rules and include the applied policy version in export metadata
FHIR Resource Redaction Validity
Given FHIR Patient and Observation resources produced by the platform When redaction rules are applied Then the resulting resources validate against the targeted FHIR version and pass integration-specific validation And Given required-but-sensitive elements (e.g., Patient.name, birthDate) When redacted Then permitted FHIR patterns are used (e.g., removal of elements or use of data-absent-reason extensions) while maintaining overall validity And Given payer webhook deliveries expecting FHIR payloads When validated Then no PHI outside the allowlist is present and all payloads pass schema checks
Fail-Closed Handling of Unknown PHI Fields
Given new or unrecognized schema fields classified as PHI When encountered during export or API delivery Then the system redacts these fields by default and records a machine-readable redaction event And Given a missing redaction mapping or schema version mismatch When detected Then the job fails closed or replaces values with a redacted placeholder without emitting raw data, and returns a clear error code And Given downstream pipelines consume redacted datasets When inspected Then no out-of-policy PHI fields are serialized beyond the defined allowlist
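The fail-closed behavior amounts to an allowlist-driven serializer: unknown fields are never passed through raw. A minimal sketch with an illustrative allowlist:

```python
# Hypothetical allowlist; the real field set is policy-defined.
ALLOWLIST = {"patient_token", "period", "rep_total", "adherence_rate"}

def serialize_fail_closed(record: dict) -> tuple[dict, list[str]]:
    """Emit only allowlisted fields; replace anything unrecognized with a
    redacted placeholder and return the redaction events for logging."""
    out, redaction_events = {}, []
    for field, value in record.items():
        if field in ALLOWLIST:
            out[field] = value
        else:
            out[field] = "[REDACTED-UNKNOWN-FIELD]"
            redaction_events.append(field)  # machine-readable redaction event
    return out, redaction_events
```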
Redaction Preview and Just-in-time Overrides
"As a clinic admin, I want to preview and selectively unredact fields when permitted so that I can satisfy specific payer requests without oversharing."
Description

An interactive preview that shows before/after redaction for any artifact to be shared, highlighting removed or masked fields and entities. Authorized users can request per-share unredaction of specific fields with reason codes, two-factor verification, time-bound scope, and automatic re-redaction for subsequent shares. The workflow enforces policy constraints (e.g., payer exceptions) and logs all overrides. Includes guardrails such as “block send” when overrides violate org or regulatory rules and provides safe alternatives (aggregated or anonymized replacements).

Acceptance Criteria
Before/After Redaction Preview Rendering
Given an artifact with PHI and non-PHI fields When an authorized user opens the Share Preview Then the UI displays side-by-side Before and After views within 2 seconds p95 And each redacted field/entity is highlighted in the After view with a tooltip showing rule ID and category And a diff summary shows counts by field type (e.g., identifiers, notes) And the preview includes ARIA labels for redacted items for screen readers And an audit event "preview_viewed" is recorded with user ID, artifact ID, and timestamp
Per-Share Unredaction Request with 2FA and Reason Codes
Given the user has override permission and 2FA configured When the user selects a masked field and requests unredaction Then the user must select a reason code from a controlled list And must provide a justification of at least 20 characters And must complete 2FA within 90 seconds And upon success the field becomes visible only in the current share draft And the override is time-bound (default 24h, cannot exceed the policy max) And subsequent shares after expiry re-apply redaction automatically
Policy Enforcement and Block Send on Violations
Given an override conflicts with organizational or regulatory policy When the user attempts to send the share Then the send action is blocked before transmission And the UI shows the specific violated rule and policy reference And safe alternatives are presented: "Send Aggregated Summary" and "Send De-identified" And no PHI beyond allowed policy leaves the system And an audit event "send_blocked" is recorded with details of the violation
Payer Exception Handling for Allowed Fields
Given a payer exception policy allows specific fields for a defined purpose When the user selects that payer and purpose and requests those fields Then the system auto-approves only the allowed fields without admin intervention And marks the share with the applied exception policy ID And includes the allowed fields while preserving all other redactions And records the exception application in the audit trail
Comprehensive Override Logging and Traceability
Given any override lifecycle event (requested, approved, denied, expired, revoked, sent) When the event occurs Then the system logs timestamp, user ID, artifact ID, field identifiers, reason code, policy ID, 2FA outcome, IP address, and share ID And logs are immutable and retained per retention policy And logs are available in an admin view with filters (date range, user, field, outcome) And logs can be exported as CSV and JSON And new log entries appear within 5 seconds of the event p95
Accessibility, Performance, and Error Handling in Preview
Given an artifact up to 50 fields and 10 media items When loading the preview Then time-to-interactive is ≤ 2 seconds p95 on a 10 Mbps connection And keyboard-only navigation operates all controls and tooltips And color contrast meets WCAG 2.1 AA When the redaction engine errors Then the preview shows a masked fallback with an actionable error code and a retry option And no unredacted PHI is exposed
Safe Alternatives Substitution for Blocked Data
Given a required field is blocked by policy When the user chooses "Send De-identified" or "Send Aggregated Summary" Then the system substitutes values per approved mappings (e.g., age band for DOB, city for full address, sentiment/score for full note) And the confirmation modal lists each substitution applied And the receiving payload passes schema validation And the audit trail records the substitutions used
Audit Trails and Compliance Reporting
"As a compliance officer, I want detailed redaction logs and exportable reports so that I can demonstrate regulatory compliance and investigate incidents."
Description

Immutable, searchable logs of every redaction decision and share event, capturing who shared what, with which profile, which fields/entities were redacted or unredacted, timestamps, recipient, purpose-of-use, and IP/device metadata. Provides exportable reports (CSV/PDF) for audits, breach investigations, and payer attestations; includes retention policies and access controls. Supports rule effectiveness dashboards (e.g., top redacted entities, override rates) and alerts for anomalous behavior (frequent overrides, attempts to share raw video). Integrates with SIEM via webhook or syslog.

Acceptance Criteria
Event Logging Completeness and Field Coverage
Given a user shares a form flag or adherence summary, when the redaction engine evaluates and applies rules, then a single audit entry is appended capturing: event_type, actor_user_id, actor_role, patient_profile_id, artifact_type, artifact_id, redacted_entities [field_name/entity_id, rule_id], unredacted_entities [field_name/entity_id, rule_id or override], override_reason (nullable), decision_engine_version, recipient (internal id or external identity), purpose_of_use, timestamp (ISO 8601 UTC), ip_address, user_agent, device_id, app_version. Then the audit entry is available via the Audit API and UI within 2 seconds p95 of the action. Then raw PHI values are not stored; only field names/entity identifiers and rule ids are present. Then 0% of share/redaction actions are missing from the log across a 10,000-action test dataset. Given an attempt to share raw video, when the action is blocked, then an audit entry with event_type=attempted_raw_video_share is recorded with the same metadata fields.
Log Immutability and Tamper Evidence
Given any existing audit entry, when a user attempts to update or delete it via UI or API, then the system returns 403 and logs the attempt. Then audit storage is append-only with cryptographic hash chaining (SHA-256) across entries; a daily verification job validates the chain and records a verification status artifact. When an entry is removed due to retention expiry, then a tombstone record is appended referencing original_id, purge_reason, and purge_timestamp; the hash chain remains valid and verifiable. Then an API endpoint exposes the latest verification status and chain tip hash; verification completes within 15 minutes of its scheduled run. Then any tamper or verification failure triggers an alert and creates a high-severity audit event.
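The SHA-256 hash chaining can be illustrated as follows. The all-zero genesis value and sorted-key JSON canonicalization are assumptions for the sketch; the criteria only mandate SHA-256 chaining with a daily verification job:

```python
import hashlib
import json

def chain_hash(prev_hash: str, entry: dict) -> str:
    """Hash covers the entry payload plus the previous entry's hash, so
    mutating any entry invalidates every later link."""
    material = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

def verify_chain(entries: list[dict], hashes: list[str]) -> bool:
    """Walk the chain from genesis; any mismatch means tampering."""
    prev = "0" * 64  # assumed genesis value
    for entry, expected in zip(entries, hashes):
        if chain_hash(prev, entry) != expected:
            return False
        prev = expected
    return True
```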
Search, Filter, and Query Performance
Given an auditor role, when filtering by time range, actor_user_id, actor_role, patient_profile_id, event_type, rule_id, recipient, purpose_of_use, ip_address, device_id, and override_reason, then results include only matching entries. Then queries return within 2 seconds p95 for result sets up to 50,000 rows; pagination supports limit up to 1000 with next_cursor; total_count is returned (exact or estimated with <=5% error when fast_count=true). When sorting by timestamp (asc/desc) or actor_user_id, then sort order is correct and stable across pages. When no entries match, then the API returns 200 with an empty result set and correct total_count=0.
Compliance Reports and Rule Effectiveness Dashboards
Given a compliance officer selects report type (audit, breach investigation, or payer attestation) and a filter set, when generating a report, then the system produces CSV and PDF files containing the filtered dataset with columns defined by the audit schema and a cover/header with generated_by, generated_at (UTC), filter_summary, and row_count. Then the PDF is digitally signed and watermarked "Confidential"; SHA-256 checksums for CSV and PDF are generated and stored; the export event is logged with requester, file hashes, and row_count. Then for datasets up to 100,000 rows, CSV generation completes within 60 seconds p95; PDF generation for summary views completes within 60 seconds p95 for up to 10,000 summarized rows. Given the rule effectiveness dashboard, when a user selects a time range, then widgets display top 10 redacted fields/entities, override rate, overrides by user/role, rule hit counts, and trend lines; values match recomputed aggregates from the raw log within 0.5% for aggregates >100 and exactly for counts <=100; dashboards reflect new events within 30 seconds.
Retention Policy and Legal Hold Enforcement
Given a tenant default retention of 6 years, when an admin updates retention to a value between 1 and 10 years, then the change is versioned, audited, and applied to subsequent purge runs. When entries exceed their retention period and are not under legal hold, then they are purged within 24 hours; a tombstone log entry records purge_id, original_id, purge_reason, and purge_timestamp. Given a legal hold applied to a patient_profile_id, user_id, or time range, when the purge job runs, then matching entries are not deleted; the hold creation/modification/removal is auditable with who/why/when. Then backups/replicas honor retention and legal holds; spot checks after purge show 0 purged entries remaining in primary indexes for a sampled 100-entry set.
Access Controls and Audit-Log Access Auditing
Given RBAC is configured, when a user without the Compliance Officer or Security Analyst role attempts to access audit logs or dashboards, then the system returns 403 and logs the attempt with actor, ip_address, and target resource. When an authorized user accesses audit data, then MFA is required if not satisfied within the last 12 hours; access scope is limited to the user's tenant and excludes other tenants' data. Then every read or export of audit data emits an access log entry with actor, scope, method (UI/API), item_count/byte_count, and timestamp; exports are rate-limited to 2 per minute per user, and excess attempts are throttled and logged.
Anomaly Alerts and SIEM Integration Delivery
Given anomaly rules configured as: >5 overrides by the same user within 60 minutes OR any attempt to share raw video OR >20 share events per minute from one IP, when a condition is met, then an alert is generated within 60 seconds p95 containing actor, rule_id, count, first_seen, last_seen, and sample event_ids. Then alerts are delivered to in-app, email, webhook, and syslog channels; deduplication suppresses identical alerts within a 10-minute window; throttling caps to 1 alert/user/minute and 100 alerts/tenant/hour. Webhook payloads are JSON with HMAC-SHA256 signature; failures retry with exponential backoff up to 5 attempts and are queued durably for 24 hours; syslog output conforms to RFC 5424 over TLS and includes structured data for tenant_id and event_id. SIEM delivery is at-least-once: 2xx response is treated as ack, non-2xx retries until max attempts; each alert/event includes an idempotency_key to prevent duplicate processing downstream; delivery outcomes are logged and visible in the admin UI.
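The signed webhook delivery with idempotency can be sketched as below. The header name and payload shape are illustrative; only HMAC-SHA256 signing and the `idempotency_key` field come from the criteria:

```python
import hashlib
import hmac
import json
import uuid

def build_webhook(alert: dict, secret: bytes) -> tuple[bytes, dict]:
    """Attach an idempotency_key and sign the JSON body with HMAC-SHA256."""
    alert = {**alert, "idempotency_key": str(uuid.uuid4())}
    body = json.dumps(alert, sort_keys=True).encode("utf-8")
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body, {"X-MoveMate-Signature": signature}  # header name is an assumption

def verify_webhook(body: bytes, headers: dict, secret: bytes) -> bool:
    """Receiver-side check; compare_digest avoids timing leaks."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-MoveMate-Signature"])
```

Downstream consumers deduplicate on `idempotency_key`, making at-least-once delivery safe.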
Rule Management and Versioning
"As a clinic admin, I want to manage and test redaction rule versions so that I can adapt to evolving payer and regulatory requirements without breaking sharing."
Description

An admin console to author, test, approve, and version redaction rules and profiles. Supports a sandbox with sample payloads (adherence summaries, form flags, session notes) to validate outcomes before publish, side-by-side diff of rule versions, staged rollouts (blue/green), and instant rollback. Includes change approval workflows, migration notes, and automated unit tests for critical patterns (e.g., DOB detection) to prevent regressions. Provides API for programmatic updates and per-organization overrides inherited from a global baseline.

Acceptance Criteria
Admin authors and tests redaction rules in sandbox with sample payloads
- Given an Admin-Editor and sample payloads (adherence summaries, form flags, session notes), when a draft redaction rule is created/edited and "Run Preview" is clicked, then the redacted output renders for each payload type with a per-payload count of masked fields and a list of applied rules.
- Given a rule contains syntax or reference errors, when validation runs, then the editor highlights the exact error location, displays a descriptive message, and blocks "Save Draft".
- Given validation passes, when "Save Draft" is clicked, then a new draft version (N+1) is saved with timestamp, author, and change notes.
- Given preview runs on payloads ≤ 200 KB each (max 10 payloads), when executed, then all previews complete within 3 seconds at the 95th percentile.
Side-by-side diff of rule versions with outcome deltas
- Given an existing baseline and a draft version, when "View Diff" is selected, then added, modified, and removed rules are displayed with line-level highlights and pattern-level diffs.
- Given the bundled sample payloads, when "Compute Outcome Delta" is run, then the UI shows for each payload type the count of redacted fields added/removed/unchanged between versions.
- When "Export Diff" is clicked, then JSON and PDF artifacts are generated containing metadata (version IDs, author, timestamps) and the diff summary.
Approval workflow and publish gating
- Given a draft version, when it is submitted for review, then only users with Approver role (excluding the author) can approve; a minimum of 2 distinct approvers is required.
- Given the draft has not passed all mandatory checks (automated tests, migration notes present), when approvers attempt to approve or publish, then the action is blocked with a reason list.
- When approval is successful, then the version status becomes Approved, an audit log entry is recorded (who, when, what), and the "Publish" action becomes enabled.
Blue/Green staged rollout with org targeting and metrics
- Given an Approved version (Green) and a Current version (Blue), when a rollout plan is configured, then targeting can be set by percentage in 5% increments and/or by explicit organization IDs.
- When the rollout percentage is changed, then traffic routing updates within 2 minutes and the dashboard displays the live split and per-org allocation.
- During rollout, the system records and displays metrics per version: redaction error rate, average masked fields per payload, and anomaly alerts when deltas exceed ±20% vs baseline.
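A stable per-org allocation is commonly achieved by hashing the organization ID into a fixed bucket space; this sketch assumes that approach (the criteria only require 5% increments and a stable per-org split):

```python
import hashlib

def assign_version(org_id: str, green_percent: int) -> str:
    """Deterministic blue/green assignment: the same org always lands in
    the same bucket in [0, 100), so its version only changes when the
    rollout percentage crosses its bucket."""
    bucket = int(hashlib.sha256(org_id.encode("utf-8")).hexdigest(), 16) % 100
    return "green" if bucket < green_percent else "blue"
```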
Instant rollback to prior stable version
- Given a Green rollout in progress or completed, when "Rollback" is triggered to the last stable version, then 100% of traffic returns to the target version within 60 seconds and the Green version stops receiving requests.
- After rollback, an audit log entry is created with initiator, timestamp, source and target versions, and reason note; the UI displays a success confirmation.
- Rollback preserves all rule versions and per-organization override configurations unchanged.
Automated unit tests for critical PHI patterns block regressions
- Given a draft or Approved version, when the automated test suite runs, then all critical tests (DOB detection across at least 8 formats, street address masking, clinician note redaction of names) must pass with 100% success or publishing is blocked.
- Test execution is triggered automatically on Save Draft, Submit for Review, Approve, and Publish, and completes within 2 minutes; results are viewable in UI and via API with pass/fail per test and total runtime.
- When new tests are added to the critical suite, then they are versioned with the ruleset and included in gating checks.
Programmatic updates and per-organization overrides with inheritance
- Given valid OAuth2 credentials, when clients call the Rules API to create/update a profile, then the API enforces optimistic concurrency via ETag; stale updates return HTTP 409 with the latest ETag.
- Given a global baseline profile and an organization-specific override, when fetching the effective profile for that org, then the result reflects baseline rules plus explicit overrides; attempts to relax mandatory redactions (e.g., DOB) result in HTTP 400 and are rejected.
- When a ruleset or override is updated via API, then the change propagates to enforcement services globally within 2 minutes and an audit record is created with actor, method, and diff summary.
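The ETag-based optimistic concurrency can be sketched with a content-hash ETag over an in-memory store; the class, hash truncation, and status-code tuples are illustrative stand-ins for the real API layer:

```python
import hashlib
import json

class RulesStore:
    """Minimal sketch of ETag optimistic concurrency for a rules profile."""

    def __init__(self, profile: dict):
        self.profile = profile
        self.etag = self._etag(profile)

    @staticmethod
    def _etag(profile: dict) -> str:
        canon = json.dumps(profile, sort_keys=True).encode("utf-8")
        return hashlib.sha256(canon).hexdigest()[:16]

    def update(self, new_profile: dict, if_match: str):
        """Apply the update only if the caller holds the current ETag;
        otherwise reject as stale (HTTP 409) and return the latest ETag."""
        if if_match != self.etag:
            return 409, {"latest_etag": self.etag}
        self.profile = new_profile
        self.etag = self._etag(new_profile)
        return 200, {"etag": self.etag}
```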

Consent Ledger

Patient-centered consent workflows with clear prompts, scope summaries, and digital signatures. Track who approved what, for how long, and why—complete with revocation receipts and reminders before access expires.

Requirements

Patient-Friendly Consent Flow & e-Signature
"As a patient, I want a clear, step-by-step consent flow with an easy digital signature so that I can confidently approve or decline how my data is used."
Description

Design and implement a guided consent experience that uses plain-language summaries, progressive disclosure, and clear scope highlights to help patients understand what data will be used, by whom, for what purpose, and for how long. Include accessible UI (WCAG AA), multilingual content, and per-scope toggles with microcopy. Provide multiple digital signature capture methods (typed name, draw, native OS) with time, device, and geo/IP metadata. Store signed artifacts (rendered PDF/JSON, signature evidence package) linked to a unique consent ID. Integrate the flow at key entry points (onboarding, first telehealth session, enabling camera tracking, sharing data with a new clinician). Support guardian/representative signing where applicable and display consent version numbers and change logs. This requirement ensures clarity, reduces drop-off, and creates legally useful evidence while fitting smoothly into MoveMate’s existing patient journeys.

Acceptance Criteria
Plain-Language Summary with Progressive Disclosure
Given the consent summary is rendered in English, When measured by an automated check, Then the Flesch-Kincaid grade level is <= 8 and the screen displays labeled sections: Who, What, Why, How long with highlighted scopes. Given a user taps a “Learn more” control on any section, When the panel expands, Then the expanded content is visible, the toggle is accessible to screen readers, and the expand/collapse state persists when navigating forward and back within the consent flow. Given required acknowledgments for the summary are incomplete, When the user attempts to continue, Then the Continue action is disabled and an inline error links to the missing section; When all required acknowledgments are complete, Then the Continue action is enabled.
WCAG AA Accessibility for Consent Flow
Given the user navigates the consent flow with a keyboard only, When tabbing through all interactive elements, Then every control is reachable with a visible focus indicator, focus order follows the visual order, and there are no keyboard traps. Given a screen reader is enabled, When reading the consent screens, Then all interactive elements (including toggles and signature canvas) have meaningful labels and roles, headings follow a logical hierarchy, and dynamic messages announce via ARIA live regions. Given the UI is tested for contrast, When measuring text and interactive elements, Then contrast ratios meet WCAG 2.1 AA (text >= 4.5:1; non-text UI >= 3:1). Given the device text size is increased to 200%, When viewing the consent flow, Then all content is readable without cutoff or overlap and horizontal scrolling is not required for text. Given automated accessibility scanning runs (axe-core), When executed on the consent flow screens, Then there are 0 Critical and 0 Serious violations.
Multilingual Consent Content and Locale Switching
Given the device locale is set to English, Spanish, or French, When the consent flow loads, Then all consent texts (summaries, microcopy, errors, buttons) appear in the detected language without placeholder keys. Given a user manually changes language via the in-flow selector, When a new language is chosen, Then the consent UI updates immediately, the choice persists for the remainder of the flow and future visits, and date/time formats localize to the selected language. Given the consent content is audited, When checking translation coverage, Then 100% of consent strings are translated for EN, ES, and FR with no fallback to another language.
Per-Scope Consent Toggles with Purpose and Duration
Given the consent scopes are presented, When viewing each scope (e.g., Camera Tracking, Share with New Clinician, Telehealth Data Use), Then each scope shows adjacent microcopy stating purpose, data types involved, recipients, and retention duration. Given a scope is optional, When the consent screen renders, Then the scope toggle defaults to Off and can be turned On by the user; Given a scope is required, Then it is clearly labeled “Required” and cannot be turned Off. Given a user turns a scope Off, When attempting to use a dependent feature (e.g., enable camera tracking), Then the UI blocks the feature and displays an explanation referencing the disabled scope. Given all scope selections are made, When reaching the review step, Then a summary lists each scope with its selected state and associated purpose/duration before signature.
Signature Capture, Identity, and Guardianship Support
Given the signature step, When presented, Then the user can choose one of three methods: Type Name, Draw Signature, or OS-Native signature; unsupported OS-Native options are hidden. Given a user completes a signature method, When continuing, Then the system validates that all required scopes are acknowledged and at least one signature method is captured, otherwise an inline error prevents submission. Given a signature is submitted, When metadata is recorded, Then the evidence includes ISO 8601 timestamp with timezone, device model, OS version, app version, public IP, and coarse geolocation (city/region/country when permission is granted; otherwise geo_unavailable is recorded). Given the patient is a minor based on DOB or indicates a representative, When proceeding to sign, Then guardian flow is required: collect guardian full name, relationship, and contact; capture guardian signature; record patient name; and mark the signer role as Representative; the patient cannot self-sign in these cases.
Consent Artifact Generation and Tamper-Evident Storage
Given a consent is signed, When artifacts are generated, Then a rendered PDF and a machine-readable JSON are produced that include consent ID (UUIDv4), consent version, scopes granted/denied, signer identity and role (patient or representative), timestamp, and captured metadata. Given artifacts are created, When computing an evidence hash, Then a SHA-256 digest of the canonicalized JSON payload is stored with the record and embedded in the PDF. Given the artifacts are stored, When retrieving by consent ID via the API, Then the JSON and PDF are returned and the recomputed SHA-256 matches the stored digest; storage is encrypted at rest and access is authorized. Given typical load, When fetching an artifact by consent ID, Then p50 response time is <= 300 ms and p95 <= 800 ms.
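The canonicalize-then-digest step above might look like the following sketch, assuming canonicalization via sorted keys and compact separators (field values are illustrative):

```python
import hashlib
import json

def evidence_hash(consent: dict) -> str:
    # Canonicalize: sorted keys, no insignificant whitespace, UTF-8 bytes.
    canonical = json.dumps(consent, sort_keys=True, separators=(",", ":"),
                           ensure_ascii=False).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

record = {
    "consent_id": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d",  # illustrative UUIDv4
    "consent_version": "2.1",
    "scopes": {"camera_tracking": True, "share_with_new_clinician": False},
    "signer": {"role": "patient", "name": "A. Example"},
    "signed_at": "2024-05-01T14:03:22-04:00",
}
digest = evidence_hash(record)
# Key order must not change the digest:
assert digest == evidence_hash(dict(reversed(list(record.items()))))
```

Because the digest is computed over a canonical form, recomputing it at retrieval time (as the criteria require) detects any byte-level change to the stored JSON.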
Entry-Point Triggers, Versioning, and Re-Consent with Change Logs
Given app onboarding, When no active consent exists for core scopes, Then the consent flow is shown before completion of onboarding. Given the first telehealth session start, When no active consent exists for telehealth data use, Then the consent flow is shown prior to session join. Given a user enables camera tracking or shares data with a new clinician, When no active consent exists for the corresponding scope, Then the consent flow is shown at that moment; otherwise it is suppressed. Given the consent version increases, When a patient’s last signed version is older than the current version, Then the user is shown a change log summary highlighting changes to Who/What/Why/How long and must re-consent; prior artifacts remain retrievable. Given a patient has re-consented to the current version, When re-entering any entry point, Then the re-consent prompt is not shown again for that version; the change log remains accessible from the consent screen and is included in the PDF artifact.
Granular Scope & Purpose Selection
"As a clinician, I want to request only the minimum necessary data with explicit purposes, recipients, and duration so that patients understand and I maintain compliance with clinic policies."
Description

Provide a structured model and UI for defining consent scopes at a fine-grained level: data categories (e.g., raw camera frames, derived pose landmarks, rep counts, form-error flags, chat transcripts, appointment metadata), purposes (treatment, operations, QA, research), recipients (assigned clinician, clinic staff roles, named third parties), permissions (view, export, share), and duration (fixed date or relative term). Enable clinicians to assemble requests from reusable templates aligned to common treatment plans, with rationale fields and minimum-necessary defaults. Dynamically render the configured scopes into the patient-facing flow with human-readable summaries. Persist scopes as machine-readable policy tags for enforcement and reporting. This enables transparency for patients and precision for the system to enforce only what’s been approved.
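The scope model described above could be persisted along these lines. A sketch, assuming a flat per-tag record; the field names mirror the data-category/purpose/recipient/permission/duration vocabulary of this requirement, and the `Literal` value sets are drawn from the examples in the description:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class PolicyTag:
    schema_version: str
    data_category: Literal["raw_camera_frames", "derived_pose_landmarks",
                           "rep_counts", "form_error_flags",
                           "chat_transcripts", "appointment_metadata"]
    purpose: Literal["treatment", "operations", "qa", "research"]
    recipient: str                 # role name or third-party entity ID
    permission: Literal["view", "export", "share"]
    duration: str                  # fixed ISO date or relative rule, e.g. "P90D"

# One tag per (category, purpose, recipient, permission) grant:
tag = PolicyTag("1.0", "rep_counts", "treatment", "assigned_clinician",
                "view", "P90D")
```

Keeping tags flat and frozen makes them easy to index for enforcement queries and safe to snapshot into the signed consent record.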

Acceptance Criteria
Template-Based Scope Configuration with Minimum-Necessary Defaults
Given a clinician opens Request Consent and selects a reusable template When the template loads Then the default data categories are limited to derived pose landmarks, rep counts, form-error flags, and appointment metadata And raw camera frames and chat transcripts are excluded by default And default purposes include only treatment And default recipients include the assigned clinician and the clinic PT role And default permissions are view only; export and share are disabled And the UI displays a Minimum necessary badge And the Expanded scope flag remains false until a non-default item is added
Human-Readable Patient Summary Mirrors Configured Scopes
Given a consent request is configured When the patient-facing summary is rendered Then it lists data categories, purposes, recipients, permissions, and duration in plain language that maps 1:1 to the configuration And the summary includes the clinician’s rationale when provided And the reading level is at or below grade 8 per Flesch-Kincaid And no item appears in the summary unless it is present in the configuration And a consistency check confirms the rendered summary can be regenerated from the stored configuration without loss or mismatch
Machine-Readable Policy Tags Persisted for Enforcement and Reporting
Given a patient signs a consent request When the system saves the consent record Then machine-readable policy tags are stored containing data_category, purpose, recipient (role or entity ID), permission, and duration with a schema version And the tags are retrievable via API endpoint /consents/{id} And the enforcement service can evaluate a sample access request against these tags and return Allow or Deny accordingly And a report query can filter consents by purpose, recipient, and date range using the stored tags And the signed consent JSON is versioned; any change creates a new version with timestamp and editor in the audit log
Recipient, Purpose, and Permission Selection Constraints
Given a clinician configures recipients, purposes, and permissions When permissions are selected Then share cannot be enabled unless view is enabled, and export implies view And selecting research purpose disables raw camera frames by default and requires an explicit override And all named third-party recipients require organization name and contact email And only recipients available in the clinic directory or explicitly added third parties can be selected And validation errors are shown inline and block progression until resolved
Duration Selection with Fixed Date and Relative Term
Given a clinician sets consent duration When Fixed date is chosen Then only a future date can be selected and the chosen date appears in the summary When Relative term is chosen (e.g., 90 days from signature or until treatment plan completion) Then the UI previews the expected expiration date And upon patient signature the system resolves and stores the concrete start and expiration timestamps And the stored tags include both the relative rule and the resolved expiration timestamp
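Resolving a relative term into concrete timestamps at signature time, as required above, might be sketched like this (only day-based "P&lt;N&gt;D" rules are handled, and wall-clock addition in the patient's timezone is an assumed interpretation of "90 days from signature"):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def resolve_expiration(signed_at_utc: datetime, rule: str,
                       patient_tz: str) -> datetime:
    # Sketch: only day-based "P<N>D" relative rules are handled (assumption).
    days = int(rule.strip("PD"))
    local_start = signed_at_utc.astimezone(ZoneInfo(patient_tz))
    # Aware-datetime + timedelta is wall-clock arithmetic, so the result keeps
    # the same local time even across a DST transition.
    local_end = local_start + timedelta(days=days)
    return local_end.astimezone(ZoneInfo("UTC"))

signed = datetime(2024, 3, 1, 15, 0, tzinfo=ZoneInfo("UTC"))  # 10:00 EST locally
expires = resolve_expiration(signed, "P90D", "America/New_York")
# Stored tags would keep both the rule ("P90D") and this resolved timestamp.
```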
Rationale Capture for Expanded Scope Changes
Given a clinician modifies the default scope to add data categories, non-treatment purposes, export, or share When attempting to send the consent request Then a rationale text field is required with a minimum of 20 characters And the rationale appears in the patient summary under Why we’re asking And the rationale is persisted in the consent metadata and exposed in the audit log And sending is blocked until the rationale requirement is satisfied
Immutable Consent Ledger & Audit Trail
"As a compliance officer, I want an immutable, searchable record of all consent decisions with receipts so that I can demonstrate lawful and appropriate data use during audits."
Description

Create an append-only ledger that records every consent event (grant, update, renew, revoke, expire) with timestamps, actor identity, consent ID, scope snapshot, consent text version, rationale codes, and signature evidence references. Use tamper-evident techniques (hashed event chain and object storage integrity checks) and maintain write-once archival copies of receipts. Provide APIs and admin UI to query by patient, clinic, scope, date range, or recipient, and to export machine-readable (JSON) and human-readable (PDF) receipts. Link ledger entries to related system artifacts (telehealth sessions, data access logs) for end-to-end traceability. Apply configurable retention policies and access controls. This delivers defensible, transparent records for internal reviews and external audits.

Acceptance Criteria
Append-only event capture with hash chain
Given a consent event of type grant, update, renew, revoke, or expire is submitted When the event is written to the ledger Then the entry includes event_type, event_id, consent_id, timestamp (UTC ISO 8601), actor_identity, scope_snapshot, consent_text_version, rationale_codes, signature_evidence_ref, previous_event_hash, event_hash And event_hash equals the configured hash of the canonicalized event payload concatenated with previous_event_hash And the write is append-only; no existing entry is modified or deleted And an immutable archival copy of the receipt is created and marked write-once with retention metadata And the object store integrity checksum of the archival copy matches the stored checksum
Immutability enforcement and tamper-evidence validation
Given an existing ledger entry When any API or UI attempt is made to update or delete the entry Then the operation is rejected with HTTP 405 or 409 and no data changes occur And a security audit log entry is recorded with requester identity and denied action Given the chain verification job runs against the ledger When a ledger entry is altered out-of-band or a link in the hash chain is broken Then the job reports the first failing event_id and the expected vs actual hash, returns a non-zero status, and raises a critical alert
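The hash-chain construction and the verification job described in these criteria can be sketched together. The digest rule (hash of the canonicalized payload concatenated with the previous hash) comes from the criteria above; the field layout is otherwise illustrative:

```python
import hashlib
import json

GENESIS = "0" * 64  # chain head for the first event

def _digest(event_body: dict, prev_hash: str) -> str:
    payload = json.dumps(event_body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((payload + prev_hash).encode()).hexdigest()

def append_event(chain: list, event: dict) -> None:
    prev_hash = chain[-1]["event_hash"] if chain else GENESIS
    chain.append(dict(event, previous_event_hash=prev_hash,
                      event_hash=_digest(event, prev_hash)))

def verify_chain(chain: list) -> dict:
    # Walks the chain and reports the first failing event, as the job must.
    prev_hash = GENESIS
    for entry in chain:
        body = {k: v for k, v in entry.items()
                if k not in ("previous_event_hash", "event_hash")}
        expected = _digest(body, prev_hash)
        if entry["previous_event_hash"] != prev_hash or entry["event_hash"] != expected:
            return {"status": "FAIL", "event_id": body.get("event_id"),
                    "expected": expected, "actual": entry["event_hash"]}
        prev_hash = entry["event_hash"]
    return {"status": "OK"}

ledger = []
append_event(ledger, {"event_id": "e1", "event_type": "grant", "consent_id": "c-1"})
append_event(ledger, {"event_id": "e2", "event_type": "revoke", "consent_id": "c-1"})
assert verify_chain(ledger)["status"] == "OK"

ledger[0]["consent_id"] = "c-999"  # simulate out-of-band tampering
assert verify_chain(ledger)["event_id"] == "e1"
```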
Mandatory fields and per-event-type rules
Given a consent event is recorded When the event is of type grant or renew Then signature_evidence_ref is present and resolvable and rationale_codes include a valid basis code And actor_identity is an authenticated human or system user When the event is of type update Then scope_snapshot and consent_text_version differ from the previous event for the same consent_id and include a rationale_code of 'scope_change' or 'text_update' When the event is of type revoke Then rationale_codes include 'revoked_by_patient' or 'revoked_by_clinic' and signature_evidence_ref is present if revoked by patient When the event is of type expire Then actor_identity is 'system', rationale_codes include 'expiration', and the timestamp is greater than or equal to the configured expiry time
Query by patient, clinic, scope, date range, and recipient
Given the ledger contains events across multiple patients, clinics, scopes, recipients, and dates When the API is called with any combination of filters patient_id, clinic_id, scope_contains, recipient_id, and date_range Then only matching entries are returned with default sort by timestamp descending And results include total_count, page_size, page_number, and a next_page token when applicable And a request returning up to 10,000 matching entries completes within 2 seconds at P95 under nominal load And the Admin UI renders the same filtered results and allows export from the filtered set
Export receipts in JSON and PDF with integrity
Given a ledger entry or a set of entries is selected for export When JSON export is requested Then the system returns a machine-readable file containing all fields plus event_hash, previous_event_hash, and a receipt_checksum When PDF export is requested Then the system returns a human-readable receipt including event summary, scope snapshot, consent text version, actor identity, timestamp, rationale codes, and signature evidence reference And both exports embed or accompany a checksum file and a verification instruction string And an immutable archival copy of each exported receipt is stored and linked from the ledger entry
Traceability links to related artifacts
Given a ledger entry references related telehealth sessions and data access logs When the entry is retrieved via API or viewed in the Admin UI Then the response includes resolvable URIs to the related artifacts and their types And following a link opens the artifact or returns 404 if the artifact has been lawfully purged, in which case the ledger displays a 'reference expired by retention policy' marker And the ledger entry remains intact regardless of the state of the related artifacts
Retention policies and access controls
Given clinic-specific retention policies are configured by event_type and clinic_id When an entry reaches its retention end date Then the system prevents further export of protected content, writes a RetentionExpired tombstone event linked in the chain, and moves any large attachments to cold storage while preserving metadata And entries cannot be deleted before retention end and any post-retention destruction writes a DestructionProof record containing a hash of the destroyed object and timestamp Given user roles viewer, auditor, admin, and clinic_owner When an authenticated user requests access Then only authorized roles for the tenant can view or export entries and all access requests are logged; unauthorized attempts return HTTP 403
Real-Time Consent Enforcement
"As a clinician and system user, I want data access to be automatically allowed or blocked based on the patient’s current consent so that I don’t accidentally view or process unapproved information."
Description

Integrate the consent ledger with an authorization layer that evaluates every data access and processing request against active consent scopes and durations. Enforce data minimization by allowing only the approved categories and permissions (e.g., permit rep counts and form flags while discarding raw frames if not consented). Block or mask access when consent is missing or expired and surface actionable messages with links to initiate or renew consent. Provide low-latency policy checks with short-lived caches, log all enforcement decisions, and expose metrics/alerts for denied requests. Support policy exceptions with justification workflows when enabled by clinic policy. Connect this layer to MoveMate services (CV ingestion, clinician dashboards, exports, analytics jobs) to ensure consistent behavior across the platform.
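A minimal sketch of the allow/mask/deny decision described above, assuming a simplified tag shape of data category, permission, and expiry (the `maskable` hint is a hypothetical request attribute):

```python
from datetime import datetime, timezone

def evaluate(request: dict, active_tags: list, now=None) -> str:
    """Return 'allow', 'mask', or 'deny' for a single access request."""
    now = now or datetime.now(timezone.utc)
    for tag in active_tags:
        if (tag["data_category"] == request["data_category"]
                and tag["permission"] == request["permission"]
                and tag["expires_at"] > now):       # expired consent never allows
            return "allow"
    # Surfaces that can degrade gracefully (e.g. dashboard widgets) are masked;
    # everything else is blocked outright.
    return "mask" if request.get("maskable") else "deny"

tags = [{"data_category": "rep_counts", "permission": "view",
         "expires_at": datetime(2099, 1, 1, tzinfo=timezone.utc)}]
assert evaluate({"data_category": "rep_counts", "permission": "view"}, tags) == "allow"
assert evaluate({"data_category": "raw_frames", "permission": "view"}, tags) == "deny"
```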

Acceptance Criteria
CV Ingestion Data Minimization
Given an active consent that includes derived metrics (rep counts, form flags) but excludes raw video frames When the CV ingestion service processes a live session Then only derived metrics are persisted and made available AND no raw frame is written to disk, cached beyond process memory, or transmitted to other services AND the enforcement log records decision=allow for metrics and decision=deny for raw_frames with reason=scope_not_granted. Given an active consent that includes raw video frames within a defined duration window When the CV ingestion service attempts to store a frame inside the consent window Then the frame is stored according to retention policy AND the enforcement log records decision=allow for raw_frames with the active consent_id and policy_version. Given an expired consent or no consent When the CV ingestion service receives data Then the request is denied and all content is dropped before persistence AND the service emits a consent_required event for the patient with a link_id to initiate consent.
Cross-Service Consent Evaluation and Latency SLA
Given any data access request from CV ingestion, clinician dashboards, exports, or analytics When the authorization layer evaluates consent Then the decision is returned with p95 latency ≤ 50 ms and p99 latency ≤ 150 ms over any 5-minute window, measured at the caller boundary. Given identical subject, scopes, and timestamp across different services When decisions are evaluated within a 1-second window Then the outcomes are identical (allow/deny/mask) across services and match the current consent state. Given a request whose required scope is missing or consent is expired When evaluated Then the outcome is deny (or mask where supported) and the caller receives a machine-readable error code (e.g., consent_missing or consent_expired) and a human-readable message.
Expired/Missing Consent User Messaging
Given a clinician viewing a patient dashboard without active consent for the requested data When the dashboard attempts to render gated data Then the gated sections are masked and a banner appears with a "Request Consent" CTA that deep-links to the consent flow for the specific scopes. Given a user initiating a data export requiring scopes not granted When the export is requested Then the export is blocked and an error toast/modal displays with a "Request/Update Consent" link and the missing scopes enumerated. Given a patient in-app attempting to review raw session media without consent for raw frames When the media view loads Then access is blocked and a message explains the need for consent with a "Review Permissions" link.
Consent Revocation Propagation
Given a patient revokes one or more scopes in the consent ledger When the revocation is saved Then all caches are invalidated and enforcement decisions across all services reflect the change within 10 seconds. Given active background analytics jobs relying on a revoked scope When the revocation occurs Then those jobs are halted before the next processing batch and any queued items violating the new consent are dropped without persistence. Given a revocation event When processed Then a revocation receipt is generated, and the audit log records previous scopes, new scopes, timestamp, actor, and affected services.
Policy Exception Workflow and Controls
Given clinic policy disables exceptions When a user attempts to create an exception Then the request is rejected with error code exceptions_disabled and is logged. Given clinic policy enables exceptions When a requester submits an exception Then a non-empty justification, explicit scopes, and an expiry (≤ 24 hours by default) are required, and approval by an authorized reviewer (not the requester) is enforced. Given an approved exception is active When enforcement decisions are made Then only the exception’s scopes are allowed within the exception window, all decisions are tagged with exception_id, and alerts are triggered if exception usage exceeds 100 requests or if less than 1 hour remains in the exception window without a review. Given an exception expires or is revoked When the next request is evaluated Then normal consent rules apply and further access under the exception is denied.
Enforcement Logging, Metrics, and Alerts
Given any enforcement decision When recorded Then the log entry contains timestamp, request_id, subject_id (pseudonymized), service, requested_scopes, decision (allow/deny/mask), reason_code, policy_version, evaluator_latency_ms, and exception_id (if any), with payloads redacted. Given production traffic When aggregated metrics are emitted Then per-service counters (total, allowed, denied, masked) and latency histograms are exposed to monitoring every 60 seconds. Given a spike in denied requests When the denied rate exceeds 10% for 5 consecutive minutes per service Then an alert is fired to on-call with the top reason_codes. Given latency degradation When p95 evaluator latency > 50 ms for 5 minutes or p99 > 150 ms for 5 minutes Then an alert is fired and a diagnostic snapshot of the policy cache hit rate is captured.
Cache TTL and Invalidation Coherence
Given normal operation When consent policies are cached Then cache TTL is ≤ 5 seconds and hit/miss rates are observable. Given a consent create/update/revoke event When processed by the system Then cache invalidations propagate to all nodes within 3 seconds and subsequent decisions reflect the change. Given an expired consent When evaluated after TTL + 3 seconds Then no allow decisions are issued based on stale cache; any stale allow is logged as a defect. Given network partition affects cache invalidation When detected Then the system temporarily bypasses cache for affected subjects until consistency is restored.
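A short-lived decision cache with explicit invalidation, as these criteria require, might be sketched as follows (in-process only; a distributed deployment would fan invalidations out to every node):

```python
import time

class ConsentCache:
    """Short-lived decision cache (TTL <= 5 s per the criteria above)."""
    def __init__(self, ttl_seconds: float = 5.0):
        self.ttl = ttl_seconds
        self._store = {}  # subject_id -> (expires_at_monotonic, decision)

    def get(self, subject_id):
        hit = self._store.get(subject_id)
        if hit and hit[0] > time.monotonic():
            return hit[1]
        return None  # miss or stale entry

    def put(self, subject_id, decision):
        self._store[subject_id] = (time.monotonic() + self.ttl, decision)

    def invalidate(self, subject_id):
        # Called on consent create/update/revoke events.
        self._store.pop(subject_id, None)

cache = ConsentCache(ttl_seconds=5.0)
cache.put("patient-1", "allow")
assert cache.get("patient-1") == "allow"
cache.invalidate("patient-1")
assert cache.get("patient-1") is None
```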
One-Tap Revocation & Propagation with Receipts
"As a patient, I want to revoke specific consents instantly and receive a receipt so that I remain in control of my data at all times."
Description

Allow patients to revoke all or selected consent scopes instantly from their profile or from contextual screens (e.g., a clinician’s share card). On revocation, immediately update the ledger, invalidate active tokens/permissions, cancel scheduled processing jobs, and notify affected recipients and clinicians. Generate revocation receipts (PDF/JSON) with timestamps and scope details and deliver confirmations via the patient’s preferred channels. Reflect revocation state in clinician dashboards and patient timelines. Provide optional reason capture and guardrails to prevent accidental revocation (clear confirm step), while ensuring the action remains fast and reversible only by obtaining fresh consent.
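The propagation sequence described above (ledger update, token invalidation, job cancellation, receipt, notifications) can be sketched with hypothetical service interfaces; only the ordering comes from this description, and the recorder stand-ins exist purely to make the sketch runnable:

```python
class Recorder:
    """Stand-in for a downstream service; records which methods were called."""
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def call(*args, **kwargs):
            self.calls.append(name)
            return {"op": name}
        return call

def handle_revocation(ledger, tokens, jobs, notifier, patient_id, scopes):
    entry = ledger.append_revocation(patient_id, scopes)   # 1. ledger first
    tokens.invalidate(patient_id, scopes)                  # 2. kill live access
    jobs.cancel_scheduled(patient_id, scopes)              # 3. stop queued work
    receipt = ledger.issue_receipt(entry)                  # 4. PDF/JSON receipt
    notifier.notify_affected(patient_id, scopes, receipt)  # 5. confirmations
    return receipt

ledger, tokens, jobs, notifier = Recorder(), Recorder(), Recorder(), Recorder()
handle_revocation(ledger, tokens, jobs, notifier, "patient-1", ["camera_tracking"])
assert ledger.calls == ["append_revocation", "issue_receipt"]
assert notifier.calls == ["notify_affected"]
```

Writing the ledger entry before anything else matters: even if a later step fails and is retried, the revocation is already authoritative and reversible only by fresh consent.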

Acceptance Criteria
One-Tap All-Scopes Revocation from Patient Profile
Given a logged-in patient with at least one active consent scope And the patient is on Profile > Consents When the patient taps "Revoke all" and a confirmation modal summarizes effects and scopes And the patient confirms Then a revocation event is appended to the consent ledger within 1 second including patient ID, actor=patient, timestamp (UTC ISO-8601 ms), affectedScopes=ALL, and optional reason And the UI displays a success state within 2 seconds And the action cannot be undone from the UI; only fresh consent can reinstate access
Contextual Partial Revocation from Clinician Share Card
Given a clinician share card shows multiple active scopes for that clinician When the patient selects one or more scopes and taps "Revoke selected" And confirms the selection in the guardrail modal Then the ledger records revocation entries for the selected scopes only within 1 second And non-selected scopes remain active and visible as Active And the share card immediately reflects the revoked scopes as Revoked without requiring a full page refresh
Immediate Ledger Update and Token Invalidation
Given one or more services hold active tokens/permissions for the patient within the scopes being revoked When the revocation is confirmed Then all access tokens/permissions for the revoked scopes are invalidated globally within 5 seconds And subsequent API calls using those tokens are denied with HTTP 401 or 403 and error code CONSENT_REVOKED And a revocation event is published to downstream services within 5 seconds And access can only be restored by obtaining fresh consent; attempts to reuse prior tokens or auto-reinstate fail with CONSENT_REQUIRED
Cancellation of Scheduled Processing Jobs
Given there are queued or scheduled processing jobs referencing the scopes being revoked When the revocation is confirmed Then all queued and scheduled jobs for those scopes are canceled within 10 seconds and do not execute And a cancellation audit record is added with jobId, scope, timestamp, and reason=CONSENT_REVOKED And creating new jobs for revoked scopes is blocked until fresh consent exists, returning error CONSENT_REVOKED
Revocation Receipts (PDF/JSON) Generated and Accessible
Given a revocation has been confirmed When the system generates receipts Then both PDF and JSON receipts are produced within 10 seconds containing: ledgerEntryId, patientId (masked), actor, UTC timestamp, revoked scopes, recipients/clinicians affected, and optional reason And receipt filenames follow RevocationReceipt_{ledgerEntryId}_{UTCDate}.pdf|.json And receipts remain accessible via an authenticated link in the app for 30 days, after which the links expire
Delivery of Confirmations and Notifications via Preferred Channels
Given the patient has notification preferences (email/SMS/push) and clinicians/recipients have contact routes When the revocation completes Then the patient receives a confirmation message including effective timestamp and receipt link via each enabled channel within 2 minutes And each affected clinician/recipient receives a notification within 2 minutes listing patient identifier (clinic-facing), revoked scopes, and effective timestamp And undeliverable notifications are retried with exponential backoff for up to 24 hours and are logged with delivery status
Dashboard and Timeline State Reflection Post-Revocation
Given a revocation occurred When a clinician views their dashboard or the patient chart Then the revoked scopes display as Revoked with effective UTC timestamp within 10 seconds and gated features are disabled And the patient timeline shows a Revocation entry with scopes, timestamp, and links to receipts And attempting actions requiring revoked scopes prompts a re-consent flow rather than proceeding
Expiration Scheduling & Renewal Notifications
"As a patient, I want timely reminders before my consent expires and a quick way to renew so that my care and exercise tracking are not disrupted."
Description

Enable consent durations with flexible terms (fixed end date or relative periods such as 90 days or per-session). Schedule reminder notifications to patients and clinicians ahead of expiry (e.g., 30/7/1 days) via push, email, or SMS with localization and quiet hours. Provide deep links into a streamlined renewal flow that preloads prior scopes and highlights any changes. On expiry, automatically suspend affected access and surface clear status indicators across dashboards and workflows. Ensure timezone-aware scheduling, idempotent sends, retry logic, and rate limiting to prevent notification fatigue. Track outcomes (renewed, declined, lapsed) for reporting and funnel optimization.
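The timezone-aware wave scheduling with quiet-hours deferral might be sketched as follows (a single nightly quiet window, e.g. 21:00–08:00 local, is an assumed policy; per-recipient windows would come from preferences):

```python
from datetime import datetime, timedelta, time
from zoneinfo import ZoneInfo

def reminder_times(expiry_utc, recipient_tz, quiet_start=time(21, 0),
                   quiet_end=time(8, 0), waves=(30, 7, 1)):
    tz = ZoneInfo(recipient_tz)
    sends = []
    for days_before in waves:
        local = (expiry_utc - timedelta(days=days_before)).astimezone(tz)
        if local.time() >= quiet_start:
            # After quiet hours begin: defer to the next morning's window.
            local = (local + timedelta(days=1)).replace(
                hour=quiet_end.hour, minute=quiet_end.minute)
        elif local.time() < quiet_end:
            # Before quiet hours end: defer to the same morning's window.
            local = local.replace(hour=quiet_end.hour, minute=quiet_end.minute)
        sends.append(local)
    return sends

expiry = datetime(2024, 6, 30, 4, 0, tzinfo=ZoneInfo("UTC"))
waves = reminder_times(expiry, "America/Chicago")
# Every send lands inside the allowed local window.
assert all(time(8, 0) <= t.time() < time(21, 0) for t in waves)
```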

Acceptance Criteria
Duration Types: Fixed Date and Relative Period
Given a clinician creates a consent with duration_type fixed and selects an end date/time in the patient's timezone When the consent is saved Then the system stores duration_type=fixed and end_at as an ISO-8601 UTC timestamp with the originating timezone captured And dashboards display the expiry using the patient's local timezone with the correct offset And the audit log records duration_type, configured end_at, and timezone And when a clinician instead selects duration_type=relative with a value of 90 days Then the system computes end_at as created_at + 90 days in the patient's timezone and stores the UTC equivalent And dashboards and audit log reflect the computed expiry consistently
Per-Session Consent Lifecycle
Given a consent with duration_type=per-session exists for a patient When a new clinical session begins Then the consent status becomes Active for the duration of that session And when the session ends Then the consent status becomes Expired and all access gated by that consent is suspended And no pre-expiry reminders are scheduled or sent for per-session consents And the audit log records session start, session end, activation, and expiry events
Pre-Expiry Reminders: Channels, Localization, Quiet Hours
Given a consent with expiry at T and a reminder schedule of 30, 7, and 1 days before expiry And the patient and assigned clinician have channel preferences (push, email, SMS), locale, timezone, and quiet hours configured When the scheduler evaluates reminders Then reminders are queued for both patient and clinician at T-30d, T-7d, and T-1d in each recipient's local timezone And if a scheduled time falls within the recipient's quiet hours, the send is deferred to the next allowed sending window And message content is localized to the recipient's language and formats dates, times, and numbers per their locale And the system attempts delivery via the preferred channel first with fallback to the next configured channel on failure And each reminder includes a secure renewal deep link specific to the recipient and consent
Renewal Deep Link: Prefilled Scopes and Change Highlights
Given a recipient opens a valid renewal deep link for an expiring consent and successfully authenticates When the renewal screen loads Then prior consent scopes, purposes, and duration are prefilled And any changes since the last signed consent are clearly highlighted before confirmation And the recipient can renew or decline within a streamlined flow not exceeding three steps And on renew, a new consent version is created, the prior consent is superseded, and the new expiry is set per the selected term And the deep link token is single-use and expires after a configurable TTL; expired or reused tokens redirect to a secure re-auth path without exposing consent data And all actions are recorded in the audit log with actor, timestamp, and outcome
Auto-Suspend on Expiry and UI Status Indicators
Given a consent reaches its expiry time without renewal When the system processes the expiry Then all reads and writes that require that consent are blocked with a consent_expired error code And patient and clinician dashboards display a prominent Expired status with contextual guidance and a renewal call-to-action And worklists and workflows surface the expired state and block actions that require the expired consent And the audit log captures the automatic suspension event with affected scopes and systems
Outcome Tracking and Funnel Metrics
Given reminders are sent and recipients interact with renewal flows When a recipient renews, declines, or takes no action until expiry Then the system records the outcome as renewed, declined, or lapsed with timestamps, channel, and consent ID And click-throughs, opens, deliveries, bounces, and failures are tracked by reminder wave and channel And reporting aggregates funnel metrics by cohort (clinic, therapist, program), channel, and time window And exports are available via API and CSV with filters for date range, status, channel, and cohort
Notification Delivery: Idempotency, Retries, Rate Limiting
Given the reminder scheduler runs and downstream senders may experience transient failures When a reminder is enqueued or retried Then the system ensures idempotency so that at most one message per recipient, channel, and reminder wave is sent And transient failures are retried up to 3 times with exponential backoff and jitter And per-recipient rate limits of max 2 reminder messages per 24 hours and 5 per 7 days across channels are enforced; excess sends are suppressed and logged And all retries, suppressions, and deduplications are logged for observability
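Idempotent sends with bounded retries could be sketched like this; the key shape follows the at-most-one-per-recipient/channel/wave rule above, while the in-memory dedupe set stands in for a persistent store:

```python
import random

sent_keys = set()  # a persistent dedupe store in a real system

def send_once(recipient_id, channel, wave, deliver):
    # Idempotency key: at most one message per recipient, channel, and wave.
    key = (recipient_id, channel, wave)
    if key in sent_keys:
        return "duplicate_suppressed"
    for attempt in range(4):           # initial try + up to 3 retries
        try:
            deliver()
            sent_keys.add(key)
            return "sent"
        except ConnectionError:
            backoff_s = (2 ** attempt) + random.uniform(0, 1)
            # A real worker would sleep for backoff_s (exponential + jitter).
    return "failed"

attempts = {"n": 0}
def flaky_delivery():
    attempts["n"] += 1
    if attempts["n"] < 3:              # fail twice, then succeed
        raise ConnectionError
assert send_once("p1", "sms", "T-7d", flaky_delivery) == "sent"
assert send_once("p1", "sms", "T-7d", flaky_delivery) == "duplicate_suppressed"
```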

Audit Guard

Tamper-evident activity logs and anomaly alerts that show who viewed or exported data, when, and from where. Catch overreach fast, satisfy compliance audits, and build trust with transparent, searchable access histories.

Requirements

Tamper-evident Audit Log Ledger
"As a compliance officer, I want audit logs to be tamper-evident so that I can prove integrity during audits and detect any manipulation."
Description

Implement an append-only, tamper-evident audit ledger for all sensitive actions across MoveMate (mobile apps, web dashboard, and backend services). Each event is cryptographically chained (per-tenant hash chain and periodic Merkle root anchoring) and stored on write-once (WORM) media with server-side encryption. Provide integrity verification routines and a public checksum API to validate a log segment’s authenticity. Enforce time synchronization (NTP) and monotonic sequencing, and detect and flag clock skew or out-of-order events. Log coverage includes data view, edit, export, delete, authentication, session start/stop, role changes, configuration changes, and telemetry consent updates. Integrity verification jobs run daily and on-demand, with alerts on any mismatch. All verification results and administrative interactions with the ledger itself are audited.

Acceptance Criteria
Per-Tenant Append-Only Hash Chain
Given tenant T with an initial chain head When events E1..En are appended from any MoveMate component Then each event Ei contains prev_hash equal to hash(Ei-1) for tenant T And the per-tenant sequence_number increments by 1 without gaps And any attempt to update or delete a prior event is rejected by storage and results in an audit event ledger_write_rejected And running the chain verification over E1..En returns status=OK And modifying any stored byte in Ek causes verification to return status=FAIL and emits a tamper_detected alert referencing Ek.id
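The hash-chain invariants above (prev_hash linkage, gapless sequence numbers, tamper detection on verification) can be sketched as follows. Field names follow the criteria; canonicalization via sorted-key JSON is an assumption the verifier must share.

```python
import hashlib
import json

def _hash(event: dict) -> str:
    # Hash the canonical JSON of the event, excluding its own hash field.
    body = {k: v for k, v in event.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_event(chain: list, payload: dict) -> dict:
    """Append one event to a per-tenant chain with prev_hash and sequence_number."""
    prev = chain[-1] if chain else None
    event = {
        "sequence_number": (prev["sequence_number"] + 1) if prev else 1,
        "prev_hash": prev["hash"] if prev else "0" * 64,
        **payload,
    }
    event["hash"] = _hash(event)
    chain.append(event)
    return event

def verify_chain(chain: list) -> str:
    """Recompute every hash; return 'OK' or 'FAIL:<index>' at first mismatch."""
    for i, event in enumerate(chain):
        if _hash(event) != event["hash"]:
            return f"FAIL:{i}"                  # stored bytes were modified
        if i > 0:
            if event["prev_hash"] != chain[i - 1]["hash"]:
                return f"FAIL:{i}"              # broken linkage
            if event["sequence_number"] != chain[i - 1]["sequence_number"] + 1:
                return f"FAIL:{i}"              # gap in sequence
    return "OK"
```

Modifying any byte of a stored event changes its recomputed hash, so verification fails at exactly that event, mirroring the tamper_detected behavior required above.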
Periodic Merkle Root Anchoring
Given a configured anchoring interval I When the anchoring job runs Then the system computes a Merkle root over each tenant’s chain segment for the interval And persists the root and its timestamp to WORM storage with a digital signature And exposes the anchor_id in verification metadata And any later inclusion proof for an event created before the anchor verifies against that anchor And if anchor computation or signature verification fails, the job marks the anchor failed, writes an audit event anchor_failed, and sends an alert
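A sketch of the Merkle computation behind anchoring and inclusion proofs. The leaves would be the event hashes of a chain segment; duplicating the last node to pad odd levels is one common convention, assumed here rather than specified by the criterion.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Compute the Merkle root over a segment's leaf values."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])       # pad odd levels (assumed convention)
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves: list, index: int) -> list:
    """Sibling path from one leaf up to the root: [(side, sibling_hash), ...]."""
    proof, level = [], [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append(("left" if sib < index else "right", level[sib]))
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
    """Check that a leaf created before the anchor verifies against it."""
    node = _h(leaf)
    for side, sibling in proof:
        node = _h(sibling + node) if side == "left" else _h(node + sibling)
    return node == root
```

The anchoring job would sign and persist `merkle_root(...)` to WORM storage; later, the proof for any event in the segment verifies against that stored root.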
WORM Storage with Server-Side Encryption
Given the audit ledger storage is configured as WORM with server-side encryption When an audit event is persisted Then the write succeeds only as an append And overwrite or delete operations before retention expiry are denied by the storage layer And all denied operations produce an audit event storage_mutation_denied with actor and origin And the record has SSE enabled with KMS key id recorded in metadata And reading the event returns the original bytes and integrity checksum
Public Checksum API for Segment Validation
Given an authenticated client with tenant-level read scope When they request a checksum for a segment by time or sequence range Then the API returns 200 with: segment_start, segment_end, segment_hash, proof_to_anchor, anchor_id, and signature And the signature validates against the published public key And providing an invalid range returns 400 with error code invalid_range And an unauthenticated request returns 401 and no body data And the returned proof verifies the segment against the anchor using the documented algorithm
Time Sync, Monotonic Sequencing, and Skew Flags
Given all services are configured to sync time via NTP with a configured skew threshold S When events are created across nodes Then each event includes utc_timestamp and monotonic sequence_number per tenant And if local clock drift exceeds S, the event is written with skew_flag=true and skew_delta recorded And if an event arrives out of order, out_of_order_flag=true and a correction marker is appended And a persistent skew condition beyond the configured duration triggers an alert
Sensitive Action Coverage Across Platforms
- Rule: For each action in {data_view, data_edit, data_export, data_delete, authentication, session_start, session_stop, role_change, configuration_change, telemetry_consent_update} occurring in mobile apps, web dashboard, or backend services, an audit event is recorded
- Rule: Each event includes action_type, actor_id or service_id, subject_id (if applicable), tenant_id, utc_timestamp, sequence_number, origin_ip, user_agent (if applicable), request_id/correlation_id, outcome, prev_hash, and hash
- Rule: Events are emitted synchronously on success path or with guaranteed delivery on retry within configured SLA
- Rule: Missing emission or schema validation failure triggers an alert and a compensating log event audit_emit_failed
Daily and On-Demand Integrity Verification and Alerting
Given the daily verification schedule and admin on-demand endpoint When verification runs Then it recomputes per-tenant chain integrity and Merkle proofs for the targeted segments And records a verification_result event with status, checked_range, and anchor_id And any mismatch produces verification_failed event and sends a critical alert to on-call And invoking on-demand verification via API performs the same checks and returns a machine-readable report And all admin interactions (start, cancel, configure) with the ledger are themselves audited
Comprehensive Access Event Capture
"As a clinic admin, I want every access and export event captured with context so that I can reconstruct who did what, when, and from where."
Description

Capture a complete, standardized access event record for every read/write/export on PHI/PII and clinic data. Each event includes: user ID, role, tenant/clinic, patient/resource ID, action (view/edit/export/delete), timestamp (UTC), request origin (public IP, approximate geo, reverse DNS), device type (mobile/web), OS/app version, auth method (SSO/password), session ID, correlation ID, success/failure, and optional justification/reason code. Instrument mobile SDKs, web app, and APIs to emit events with offline buffering and guaranteed at-least-once delivery. Ensure coverage for rep-detection session views, clinician dashboards, batch exports, and admin configuration pages. Normalize events to a common schema, redact sensitive payload fields, and enforce PII minimization while retaining investigative value.

Acceptance Criteria
Standardized Access Event Schema Completeness
Given any read, write, export, or delete action on PHI/PII or clinic data When the request is processed by any MoveMate component (mobile SDK, web app, or API) Then at least one access event is recorded for the action in the centralized audit stream And the event validates against the common JSON schema v1 with required fields: user_id, role, tenant_id, patient_id or resource_id, action in {view, edit, export, delete}, timestamp_utc (ISO8601), request_origin.ip, request_origin.geo (approx), request_origin.reverse_dns (nullable), device_type in {mobile, web, service}, os_version/app_version, auth_method in {SSO, password, token}, session_id, correlation_id, outcome in {success, failure}, justification_code (nullable) And no required field is null, empty, or malformed And timestamp_utc is within ±2 seconds of server receipt time And events do not include PHI/PII payload values beyond identifiers
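The required-field and enum checks above can be sketched as a small validator. Field names and enum values come from the criterion; the validator itself, and the rule that either patient_id or resource_id must be present, are illustrative.

```python
REQUIRED = {
    "user_id", "role", "tenant_id", "action", "timestamp_utc",
    "device_type", "auth_method", "session_id", "correlation_id", "outcome",
}
ENUMS = {
    "action": {"view", "edit", "export", "delete"},
    "device_type": {"mobile", "web", "service"},
    "auth_method": {"SSO", "password", "token"},
    "outcome": {"success", "failure"},
}

def validate_access_event(event: dict) -> list:
    """Return a list of problems; an empty list means the event is valid."""
    problems = []
    for field in REQUIRED:
        if not event.get(field):
            problems.append(f"missing_or_empty:{field}")
    # Either patient_id or resource_id must identify the subject.
    if not (event.get("patient_id") or event.get("resource_id")):
        problems.append("missing_or_empty:patient_id_or_resource_id")
    for field, allowed in ENUMS.items():
        if event.get(field) and event[field] not in allowed:
            problems.append(f"invalid_enum:{field}")
    return problems
```

A real pipeline would enforce this with a published JSON Schema at ingest, routing failures to the audit_emit_failed compensating path rather than dropping events silently.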
Cross-Platform Instrumentation Coverage (Mobile/Web/API)
Given a test tenant with a clinician user, an admin user, and an API service principal When the same patient record is viewed via (a) mobile app, (b) web app, and (c) API client Then an access event is recorded for each platform within 2 seconds of the action completing And device_type is mobile for (a), web for (b), and service for (c) And auth_method reflects the login method used (SSO/password for humans, token for service) And role and tenant_id match the authenticated principal for each platform And session_id and correlation_id are populated for each event
Offline Buffering and At-Least-Once Delivery
Given a mobile device that is offline and a user performs 20 actions that generate access events over 10 minutes When the app is force-closed and relaunched while still offline, then later reconnects to the internet Then the client buffers all 20 events locally without loss across app restarts And upon reconnection, all buffered events are delivered to the server within 120 seconds And the server contains at least 20 corresponding events with the original client-side timestamp_utc and session_id values And at-least-once delivery is satisfied (duplicates permitted) with no missing events
Critical Workflow Coverage: Rep Sessions, Dashboards, Exports, Admin Config
- Given typical therapist and admin workflows When a therapist views a patient’s rep-detection session summary Then an access event is recorded with action=view and resource_id referencing the session summary
- When a clinician opens the patient dashboard Then an access event is recorded with action=view and patient_id populated
- When an admin initiates a batch export Then an access event is recorded with action=export and resource_id referencing the export job
- When an admin updates a clinic configuration setting Then an access event is recorded with action=edit or delete as appropriate and resource_id referencing the setting changed
- And each event includes outcome and optional justification_code if provided
Origin, Geo, Reverse DNS, Device, and Auth Attribution Accuracy
Given controlled test requests from known public IPs and user agents When actions are performed from (a) a mobile device on cellular, (b) a web browser behind a corporate NAT with resolvable reverse DNS, and (c) a server-to-server API client from a known IP without reverse DNS Then request_origin.ip equals the observed public IP for each case And request_origin.geo resolves to the expected country and region for each IP And request_origin.reverse_dns is populated for (b) and null for (c) And device_type, os_version, and app_version reflect the actual client in each case And auth_method matches the authentication mechanism used And role and tenant_id map to the authenticated principal
Redaction and PII Minimization Policy Enforcement
Given an action whose request or response contains PHI/PII (e.g., demographics, clinical notes, export file contents) When the corresponding access event is recorded Then no PHI/PII free-text payloads or file contents are stored in the event And only allowed identifiers and metadata per minimization policy are retained (e.g., patient_id, resource_id, action, timestamps) And sensitive values originating from headers, bodies, or query parameters are redacted or hashed according to policy And justification_code is captured only when explicitly provided and contains no free-text PHI
Anomaly Detection & Alerting
"As a security analyst, I want real-time alerts on suspicious access patterns so that I can investigate and stop potential overreach quickly."
Description

Provide real-time and scheduled detections for suspicious access patterns with configurable, per-tenant rules and baselines. Out-of-the-box rules include: bulk export spikes, access outside clinic hours, geolocation anomalies, access to non-assigned patients, repeated failed access attempts, disabled logging attempts, and rapid-fire record views. Support thresholds, allow/deny lists, sensitivity tuning, and temporary suppressions with expiration. Deliver alerts via in-app notifications, email, SMS, and webhooks for SIEM/SOAR. Include alert lifecycle (assign, acknowledge, comment, resolve), audit every alert action, and link alerts to the underlying events for one-click investigation. Provide false-positive feedback to refine models over time without impacting log integrity.

Acceptance Criteria
Real-Time Geolocation Anomaly Alert
Given per-tenant geolocation baselines are computed for the last 30 days and allow/deny lists are configured When a user session starts from a geolocation outside the baseline threshold or on the deny list and not on the allow list Then a High severity alert is created within 5 seconds including user ID, IP, GeoIP, confidence score, baseline comparison, and a link to the underlying events And alerts are delivered via in-app, email, SMS, and an HMAC-SHA256 signed webhook with up to 3 retries using exponential backoff And identical alerts for the same user and IP are suppressed for 10 minutes unless severity increases And adding the source to the allow list prevents future geo anomaly alerts for that source and records actor and timestamp with expiration if provided
High-Volume Access Anomalies (Exports and Rapid Views)
Given per-tenant thresholds and sensitivity for bulk exports and rapid record views are configured, and a daily scheduled detection job is enabled When export count in any rolling 15-minute window exceeds the tenant baseline by at least X% or exceeds absolute threshold N Then a High severity alert is generated and delivered via all channels including webhook with request signing, containing export IDs, counts, time window, and link to the event batch When a user views more than V distinct patient records within R seconds or exceeds the moving average by at least Y standard deviations Then a Medium or High alert is generated based on configured sensitivity and throttled to at most one alert per user per 10 minutes And temporary suppression can be applied per rule/user for up to 24 hours with explicit expiration and reason, after which detection resumes automatically
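The rapid-view rule above (more than V distinct patient records within R seconds) can be sketched with a sliding window; the standard-deviation baseline path and per-user throttling are omitted, and V/R would come from tenant configuration.

```python
from collections import deque

class RapidViewDetector:
    """Flags when distinct patient views in a sliding window exceed V (sketch)."""

    def __init__(self, max_distinct: int, window_s: int):
        self.max_distinct = max_distinct    # V, tenant-configured
        self.window_s = window_s            # R, tenant-configured
        self.views = deque()                # (timestamp, patient_id)

    def record_view(self, patient_id: str, now: float) -> bool:
        """Record one view; return True if an alert should fire."""
        self.views.append((now, patient_id))
        # Age out views older than the window.
        while self.views and now - self.views[0][0] > self.window_s:
            self.views.popleft()
        distinct = {pid for _, pid in self.views}
        return len(distinct) > self.max_distinct
```

A production detector would keep one such window per user (keyed in a shared store) and hand positive results to the alerting pipeline, which applies the one-alert-per-10-minutes throttle.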
After-Hours Access Detection
Given tenant clinic hours and time zone (including optional holiday exceptions) are configured When a non-emergency access occurs outside the configured clinic hours Then a Medium severity alert is created within 10 seconds including local time context, user, resource, and a link to the events, and is delivered via in-app and email by default with optional SMS/webhook per tenant settings And on-call/emergency accesses by users with the On-Call role are excluded from alerts and the exclusion is audited And a temporary suppression window (start/end) can be created per user or role; it auto-expires and the action is audited
Non-Assigned Patient Access Detection
Given patient-to-clinician assignment lists and role-based exceptions (e.g., Supervisors) and allow lists are configured per tenant When a user without an exception accesses a patient not assigned to them Then an alert is created within 10 seconds including pseudonymized patient ID, user ID, reason, and links to the underlying events, and delivered via in-app and webhook at minimum And if the number of unique non-assigned patients accessed by the user exceeds K within T minutes, the alert severity escalates to High and SMS is sent to on-call if configured And adding a time-bound access exception for the user–patient pair stops future alerts for that pair until expiration and the change is fully audited
Repeated Failed Access Attempts Escalation
Given a per-tenant rule defines threshold F failed access attempts within M minutes with source fingerprinting (user, IP, device) When failed access attempts meet or exceed the threshold for any user or source within the window Then an alert is created including affected user/source, counts, timestamps, and links to events, and delivered via all channels And if attempts continue beyond 2F within 2M or originate from at least S distinct IPs, severity escalates and an immediate SMS is sent to the on-call contact And identical alerts are deduplicated for 15 minutes per user/source while counters and context continue updating within the open alert
Disabled Logging Attempt Detection
Given audit logging configuration changes are themselves immutably logged and monitored per tenant When any actor attempts to disable or reduce audit logging via UI, API, or infrastructure signal Then a Critical severity alert is created within 5 seconds and delivered via in-app, email, SMS, and a signed webhook with retries, including actor identity, method (UI/API), target setting, previous and attempted value, and links to the config change events And any suppression or allow/deny list change applied to this rule is recorded with actor, timestamp, and optional expiration
Alert Lifecycle, Linkage, and False-Positive Feedback
Given alert lifecycle states (New, Assigned, Acknowledged, Resolved) and permissions are configured per tenant When an alert is created it starts as New and includes one-click links to the underlying events Then authorized users can assign, acknowledge, comment, and resolve the alert; each action writes an immutable audit entry with actor, timestamp, prior state, and comment (if provided) And marking an alert as False Positive updates tenant-specific baselines or model parameters within 24 hours without modifying historical logs or events, and records who marked it and why And resolved alerts remain searchable and exportable with filters (type, severity, user, patient, date, status) and retain full action audit trails
Audit Explorer & Query UI
"As a compliance officer, I want a searchable interface to explore audit events so that I can answer audit questions within minutes."
Description

Build a responsive, role-gated UI to search, filter, and visualize audit events across MoveMate. Features: time-range picker, facets (user, role, clinic, patient, action, source app, outcome), free-text search, saved searches, and shareable, permissioned views. Provide event timelines, user-centric and patient-centric pivots, and drill-down to event details with device/IP metadata. Enable export of query results (CSV/JSON) with embedded checksums and query parameters for reproducibility. Redact sensitive fields by role and watermark exports with requester identity and timestamp. Ensure sub-second filtering on typical query windows (last 30 days) and graceful performance for long-range searches via pagination and background indexing jobs.

Acceptance Criteria
Facet Filtering and Sub-Second Results (Last 30 Days)
- Given an authorized user on Audit Explorer and a time range of Last 30 Days When they apply facets user=U123, role=Therapist, clinic=CL1, patient=P42, action in {View, Export}, source app=Web, outcome=Success Then only events matching all selected facets are returned and the result count equals the number of rows displayed
- Given any filter combination within Last 30 Days (<= 30 d window) When the filter is applied Then p95 server response time <= 1000 ms and p95 end-to-end UI update time <= 1200 ms measured over 100 queries
- Given a filter that yields zero matches When applied Then the grid shows 0 results with an empty-state message and no stale rows from prior queries
- Given the Last 30 Days shortcut is selected When the page loads in a timezone with DST Then the start and end boundaries are computed in the user’s local timezone and include the entire end day
- Given multiple values are selected within the same facet When the query runs Then the logic is OR within the facet and AND across different facets
- Given the user clears all filters When the query runs Then results revert to the unfiltered set for the selected time range and the total count is updated
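The facet semantics above (OR within a facet, AND across facets, empty filter matches everything) reduce to a small predicate; the field names mirror the example filters and the functions are illustrative.

```python
def matches(event: dict, facets: dict) -> bool:
    """facets maps field -> set of accepted values; {} matches all events."""
    # AND across facets; `in values` gives OR within each facet.
    return all(event.get(field) in values for field, values in facets.items())

def apply_facets(events: list, facets: dict) -> list:
    return [e for e in events if matches(e, facets)]
```

A real backend would push these predicates down into the search index rather than filtering in application code, but the boolean semantics are the same.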
Free-Text Search Across Audit Fields
- Given a keyword "export" When entered in the search box and submitted Then events whose searchable fields contain "export" (case-insensitive, diacritic-insensitive) are returned
- Given a phrase query "failed export" When wrapped in quotes and submitted Then only events with the exact phrase are returned
- Given free-text search is combined with facets and a time range When submitted Then results reflect the intersection of all conditions
- Given special characters or excess whitespace are entered When submitted Then the input is safely sanitized, no errors are thrown, and matching semantics ignore extra whitespace
- Given a query with no matches When submitted Then 0 results are shown with an empty-state message
- Given a result set exceeding one page When viewing results Then pagination controls are present and functional
Saved Searches and Permissioned Sharing
- Given a defined query (time range, facets, free-text) When the user saves it with a unique name Then it appears in the user’s Saved Searches list with createdAt and lastRun timestamps
- Given an existing saved search When the user renames or updates parameters and saves Then changes persist and are visible on reload
- Given a duplicate name is used When saving Then the user is prompted to overwrite or choose a different name; no silent collisions occur
- Given a saved search When the user shares it with specific users or roles Then recipients can open it only if authenticated and authorized; results are limited to their own data entitlements
- Given a shared search When the owner revokes access or sets an expiry that has passed Then recipients lose access and opening the link returns an authorization error
- Given a saved or shared search is executed When run by any user Then current redaction rules are applied and no additional fields are exposed compared to the UI
User and Patient Pivots with Timeline
- Given any query result When switching to the User pivot Then events are grouped by user with per-user totals and a chronological timeline per user
- Given any query result When switching to the Patient pivot Then events are grouped by patient with per-patient totals and a chronological timeline per patient
- Given timeline view is open When viewing events across timezones Then timestamps are displayed in the user’s local timezone and sort strictly ascending within each lane
- Given a user or patient group is clicked When drill-filtering Then the flat results list updates to that entity and counts match between pivot and list
- Given a pivot computation within the Last 30 Days window When switching views Then p95 pivot render time <= 1500 ms
Event Drill-Down with Device/IP Metadata and Redaction
- Given a results row is opened When viewing the details panel Then the panel shows at minimum: event ID, timestamp, actor user and role, clinic, patient (if applicable), action, outcome, source app, device type, OS, browser, IP address, device/IP geo (if available), and request ID
- Given a user without elevated privileges When viewing details Then sensitive fields are redacted per policy (e.g., IP masked to /24, patient identifier partially masked) and a redaction indicator is shown
- Given a user with elevated privileges When viewing details Then unredacted fields are shown according to role policy
- Given a user attempts to open a detail outside their scope When the details endpoint is called Then a 403 is returned and no existence-revealing metadata is leaked
- Given redaction rules applied in the UI When exporting the same query Then fields redacted in the UI are redacted identically in the export
Export CSV/JSON with Checksums and Watermark
- Given any query result When exporting as CSV Then the file contains visible columns plus permitted metadata, embeds query parameters and generation timestamp, and includes a watermark with requester identity and timestamp
- Given any query result When exporting as JSON Then the file includes a meta block (query parameters, generation timestamp, requester identity) and a data array; a SHA-256 checksum of the data payload is included in meta
- Given the checksum in the export When recomputed locally over the documented payload Then it matches the embedded checksum value
- Given export volume exceeds the synchronous threshold When export is requested Then an async job is created, the user is notified on completion, and the download link expires after a configurable TTL
- Given an export is generated When a reviewer re-runs the embedded query parameters for the same time window and permissions Then the resulting dataset matches the export (modulo new events outside the captured end timestamp)
- Given rate limits are exceeded When repeated exports are triggered Then the user receives a rate limit message and the system remains stable
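The JSON export layout and checksum check above can be sketched as follows. The meta block's field names and the canonicalization rule (sorted keys, compact separators) are assumptions a verifier must share with the exporter.

```python
import hashlib
import json
from datetime import datetime, timezone

def payload_checksum(rows: list) -> str:
    """SHA-256 over the canonical JSON of the data array."""
    canonical = json.dumps(rows, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def build_export(rows: list, query_params: dict, requester: str) -> dict:
    return {
        "meta": {
            "query_parameters": query_params,           # for reproducibility
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "requester": requester,                     # watermark identity
            "sha256": payload_checksum(rows),
        },
        "data": rows,
    }

def verify_export(export: dict) -> bool:
    # Recompute the checksum over the documented payload and compare.
    return payload_checksum(export["data"]) == export["meta"]["sha256"]
```

A reviewer can re-run `meta.query_parameters` against the API and confirm the recomputed checksum matches the embedded one, which is exactly the reproducibility check the criterion describes.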
Long-Range Searches via Pagination and Background Indexing
- Given a time range > 30 days or a result set exceeding the immediate processing threshold When the query is submitted Then the server responds within 10 seconds with an initial page or job status, and the UI remains responsive
- Given a long-running query When background indexing is initiated Then a progress indicator shows job status, the user can navigate away and later resume, and partial results are paginated as they become available
- Given pagination is enabled When navigating pages Then page size is consistent, next/previous controls work, and the total count reflects the entire query
- Given the user cancels a long-running query When cancel is confirmed Then the background job is terminated and no further processing occurs
- Given network disruptions during a long-range query When connectivity is restored Then the UI retries gracefully and resumes from the last known page or job checkpoint without duplicating events
Compliance-grade Reporting & Exports
"As an auditor, I want signed, exportable audit reports for specific users, patients, or periods so that I can document compliance without manual compilation."
Description

Provide one-click generation of audit reports for a user, patient, clinic, or time window, suitable for compliance reviews. Output signed PDFs and machine-readable CSV/JSONL with file-level checksums, time coverage, query parameters, and completeness indicators (e.g., dropped-event count = 0). Include chain-of-custody section, report footer with signature time and signer identity, and optional branding. Support scheduled delivery to authorized recipients, encryption-at-rest and in-transit, and password-protected archives. All report generation and downloads are logged. Include a verification utility to validate report integrity against the ledger. Respect tenant data residency and redaction rules by role.

Acceptance Criteria
One-Click Scoped Audit Report Generation
- Given an authorized clinic admin selects a scope (user, patient, clinic, or time window) and clicks "Generate," When the event store contains ≤10,000 matching audit events, Then the system produces the complete report artifact set within 60 seconds. - Given a scope that yields >10,000 events, When "Generate" is clicked, Then the job is queued, progress is shown within 5 seconds, and the report completes within 15 minutes for ≤1,000,000 events. - Then the report includes only and all events that satisfy the selected scope and query parameters, as validated by a parity count with a live read-only query. - Then the report metadata echoes the exact time coverage, scope, and query parameters used.
Report Outputs: Signed PDFs and Machine-Readable Exports
- Given a report is generated, Then the system produces: a digitally signed PDF, a CSV, and a JSONL file, each accompanied by a SHA-256 checksum file (*.sha256). - Then each file embeds or accompanies metadata: time_coverage_start, time_coverage_end, query_parameters, event_count, dropped_event_count set to 0; if any drop occurs, dropped_event_count > 0 and report_completeness = "Incomplete". - Then the PDF signature validates using ECDSA P-256 or RSA-2048 with RFC 3161 timestamp; verification returns Valid. - Then optional tenant branding (logo/name) appears if enabled in tenant settings and is absent otherwise.
Chain of Custody and Digital Signature Footers
- Given a report is generated, Then the PDF contains a Chain of Custody section listing: generator service ID, signer identity, signature timestamp (UTC, ISO-8601 Z), key fingerprint, and any scheduled delivery handoffs with timestamps. - Then every PDF page footer displays: "Signed by <signer_identity> at <ISO-8601 UTC>". - When any byte of the PDF is modified, Then signature verification fails with an explicit "signature_invalid" error.
Scheduled Delivery, Encryption, and Password-Protected Archives
- Given an authorized user creates a schedule (daily/weekly/custom cron) and designates recipients, When the schedule triggers, Then the system delivers the report as an AES-256-encrypted ZIP over TLS 1.2+ to each authorized recipient. - Then recipients must supply a password meeting policy (≥12 chars, mixed case, number, symbol) to open the archive; incorrect passwords fail to open. - Then only recipients with appropriate roles within the tenant can be added; attempts to add unauthorized or external recipients are blocked with an error. - Then delivery attempts are retried with exponential backoff for up to 24 hours upon transient failure, and final status is recorded.
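The archive password policy above (≥12 chars, mixed case, number, symbol) can be sketched as a simple check; treating `string.punctuation` as the symbol set is an assumption.

```python
import string

def password_meets_policy(pw: str) -> bool:
    """Policy from the criterion: >=12 chars, mixed case, number, symbol."""
    return (
        len(pw) >= 12
        and any(c.islower() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in string.punctuation for c in pw)  # assumed symbol set
    )
```

The same predicate would run wherever a recipient sets or rotates an archive password, rejecting non-compliant values before the AES-256 ZIP is produced.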
Tamper-Evident Logging of Report Events
- Given any report generation, download, schedule create/update, or delivery, Then an immutable ledger entry is appended containing: event_id, tenant_id, actor_id, action, timestamp (UTC, ISO-8601 Z), IP, user_agent, report_id, checksum. - Then attempts to modify or delete ledger entries are rejected and recorded as a separate security event. - Then ledger entries are searchable by report_id, actor_id, and time window with results returned in ≤2 seconds for ≤100k entries.
Verification Utility: Integrity Check Against Ledger
- Given valid report files (PDF, CSV, JSONL, checksums), When the verification utility runs, Then it validates checksums and signatures and cross-references the ledger, returning "Pass" within 5 seconds for reports ≤100 MB. - When any file is altered or the checksum does not match the ledger, Then the utility returns "Fail" with error_code and a human-readable reason. - When offline, Then the utility performs checksum/signature validation and returns "Partial" with a prompt to re-run online for ledger checks.
Tenant Data Residency and Role-Based Redactions
- Given a tenant configured for specific data residency regions, When a report is generated, Then only data stored within the tenant’s allowed regions is included; cross-region data access is blocked and logged. - Then role-based redaction rules are applied: fields marked sensitive (e.g., patient identifiers) are masked or omitted for roles below the configured threshold, and the report includes redaction_count in metadata. - Then attempting to bypass redaction (e.g., by switching output format) results in the same redactions across PDF/CSV/JSONL.
Role-based Audit Access Controls
"As a privacy officer, I want fine-grained, role-based access to audit data so that only authorized staff can view sensitive logs."
Description

Implement least-privilege, role-based access to audit data with fine-grained scoping by tenant/clinic and data domain. Provide predefined roles (Compliance Officer, Auditor Read-only, Clinic Admin, Therapist Limited, Support Scoped) and support custom roles. Enforce strong auth (SSO/SAML/OIDC), MFA for audit access, session timeout, and just-in-time elevation with approval and justification capture. Mask or tokenize sensitive fields by role, with inline on-demand reveal logged. Prevent self-audit tampering: audit viewers cannot edit or purge audit data. All access to the Audit Guard UI and APIs is itself audited.

Acceptance Criteria
Enforce Predefined Roles and Least-Privilege
Given users assigned roles Compliance Officer, Auditor Read-only, Clinic Admin, Therapist Limited, and Support Scoped When each user attempts to access the Audit Guard UI and APIs Then Compliance Officer can view and export audit data for all clinics in their assigned tenant and cannot modify or delete audit records And Auditor Read-only can view and export audit data for their assigned tenant and cannot modify settings, roles, or audit records And Clinic Admin can view and export audit data only for their own clinic and is denied access (403) to other clinics or tenants And Therapist Limited is denied (403) all Audit Guard UI pages and API endpoints And Support Scoped is denied (403) access unless a just-in-time elevation is active
Tenant/Clinic and Data Domain Scoping
Given two tenants T1 and T2 with clinics C1 (T1) and C2 (T2), and data domains AccessLogs and ExportEvents When a Clinic Admin of C1 queries, filters, searches, and exports audit data Then only records for C1 in allowed domains are returned and included in exports, with zero records from C2 or T2 And cross-tenant or cross-clinic queries return 403 or empty result per policy with an audit entry of the denied attempt And specifying disallowed domain parameters (e.g., ExportEvents when not permitted) returns 403 and is audited
Custom Role Creation and Enforcement
Given a Compliance Officer creates a custom role "Auditor-Clinic-ExportOnly" with permissions: read ExportEvents for clinic C1 only, no AccessLogs, export allowed for ExportEvents only When the role is assigned to a user and the user accesses Audit Guard Then the user can view and export ExportEvents for C1 and cannot view AccessLogs or any data for other clinics or tenants (403) And export files contain only C1 ExportEvents with record counts matching the on-screen results And attempts to modify roles, change masking defaults, or access disallowed domains return 403 and are audited
Strong Auth, MFA, and Session Timeout for Audit Access
Given SSO via SAML/OIDC is configured and MFA is required for Audit Guard When a user signs in via SSO without MFA and navigates to Audit Guard Then the user is prompted for MFA and is denied API access (401/403) until MFA is completed When the user completes MFA and accesses Audit Guard Then access is granted and the MFA method and assurance level are recorded in the audit log When the Audit Guard session is idle for the configured inactivity timeout (e.g., 15 minutes) Then the session is locked and re-authentication with MFA is required to continue And API tokens expire accordingly; requests with expired tokens return 401 and are audited And local username/password login to Audit Guard endpoints is disabled (SSO-only)
Just-in-Time Elevation with Approval and Justification
Given a Support Scoped user requests just-in-time (JIT) elevation for tenant T1 clinic C1 with a written justification When a Compliance Officer reviews and approves the request for a fixed duration (e.g., 60 minutes) Then the Support Scoped user gains time-bound read access only to the approved scope and domains, visible in UI and APIs And the elevation start, approver, requester, justification, scope, and expiry are captured in the audit log When the elevation expires or is revoked Then the user's access is automatically terminated and subsequent requests return 403 and are audited And JIT requests without justification or without approval cannot activate elevation
Sensitive Field Masking and On-Demand Reveal Logging
Given fields labeled as sensitive (e.g., patient_identifier, device_id, IP_geo_detail) are present in audit records When an Auditor Read-only or Clinic Admin views audit data Then sensitive fields are masked or tokenized by default in UI, API, and exports When a permitted role (e.g., Compliance Officer) performs an inline reveal Then the user must provide a justification, the reveal is time-limited (e.g., 60 seconds), and a reveal event is logged with who, when, what field, record ID, and reason And roles without reveal permission cannot reveal (UI control hidden/disabled and API returns 403) And exports by roles without reveal-on-export permission include masked values
Audit Integrity and Self-Access Logging
Given any user with access to Audit Guard When the user attempts to edit, purge, or delete audit records via UI or API Then the action is blocked (403/405) and a tamper-attempt event is recorded with user, role, IP, and endpoint When any user views, searches, filters, reveals fields, or exports audit data Then an audit trail entry is created capturing user, role, tenant/clinic scope, action type, resource, timestamp, IP, user-agent, MFA status, session ID, and result And each export generates a unique export ID with record count and checksum that can be queried in the audit log And no assignable human role can bypass integrity protections or alter existing audit entries
Retention, Archival & Legal Hold
"As a data protection officer, I want configurable retention and legal hold controls so that we meet regulatory obligations and preserve evidence when required."
Description

Provide configurable retention policies per tenant for audit data (e.g., 6–10 years), with automated transition to cost-efficient, immutable archival storage and verifiable purge at end-of-life. Support legal hold to immediately suspend purging for specified scopes (user, patient, clinic, date range) with documented rationale and approver. Expose policy versions and effective dates, surface upcoming purges, and send pre-expiry notifications. Ensure residency-aware storage (e.g., US/EU), periodic recoverability testing, and evidence of retention compliance in reports. All policy changes, holds, and purges are audited with before/after snapshots.
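The interaction between retention end dates and legal holds can be expressed as a purge-eligibility predicate that the purge job evaluates per record. This is a hypothetical sketch: the record and hold dictionary keys (`retention_until`, `event_ts`, a clinic-or-None scope) are illustrative assumptions, and a real hold would also scope by user and patient as the description states.

```python
from datetime import datetime, timezone

def purge_eligible(record: dict, legal_holds: list, now: datetime) -> bool:
    """A record may be purged only after its retention end date, and never
    while any active legal hold matches its clinic and event date range."""
    if now < record["retention_until"]:
        return False
    for hold in legal_holds:
        clinic_match = hold["clinic"] is None or hold["clinic"] == record["clinic"]
        in_range = hold["start"] <= record["event_ts"] <= hold["end"]
        if clinic_match and in_range:
            return False          # on hold: purge-ineligible, alert upstream
    return True
```

The key property is that a hold wins over an expired retention clock, so applying a hold before the next purge cycle is sufficient to preserve the records.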

Acceptance Criteria
Tenant Retention Policy Configuration & Versioning
Given a tenant admin with Manage Retention Policies permission When they create or update the tenant’s audit-data retention policy with a duration within the allowed range (e.g., 6–10 years) and save Then the system persists the change as a new policy version with a unique version ID and effectiveAt timestamp And the prior version is retained read-only and visible in version history And attempts to set a duration outside the allowed range are rejected with a validation error And the Policies API and Admin UI return the new current version and full version history immediately after save And the policy change is audited with a before/after snapshot including actor, timestamp, IP, and change rationale
Automated Archival to Immutable Storage
Given audit records whose age has crossed the configured archival threshold but not the retention end date When the scheduled archival job runs Then those records transition to a cost-efficient archival storage class in the tenant’s residency region And an immutable (WORM) lock is applied through each record’s retention end date And attempts to modify or delete archived records are blocked by the storage provider And the archival job log records counts moved, region, storage class, and success/failure per batch; failures are retried and alerted And archived-object metadata includes retention-until date and region tag
End-of-Life Purge with Verifiable Proof
Given audit records whose retention end date has passed and that are not under legal hold When the scheduled purge job runs Then the records are permanently removed from online stores, archival storage, indexes, and caches And a purge manifest is generated containing record IDs or ranges, counts, executor, timestamp, region, and a cryptographic hash of the manifest And a verification task immediately attempts reads on a randomized sample and receives “not found” And the purge event is audited with a before/after snapshot of the policy context and scopes affected And a purge completion notification is delivered to tenant compliance contacts
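One way to make a purge verifiable after the data is gone is to hash a canonical manifest. The sketch below uses assumed field names; the spec only requires "record IDs or ranges, counts, executor, timestamp, region, and a cryptographic hash of the manifest", so the exact serialization shown here is an illustrative choice.

```python
import hashlib, json
from datetime import datetime, timezone

def build_purge_manifest(record_ids: list, executor: str, region: str) -> dict:
    """Deterministic manifest plus a SHA-256 over its canonical JSON, so the
    purge can be audited later without access to the purged records."""
    manifest = {
        "record_ids": sorted(record_ids),   # sorted for determinism
        "count": len(record_ids),
        "executor": executor,
        "region": region,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    manifest["sha256"] = hashlib.sha256(canonical.encode()).hexdigest()
    return manifest
```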
Legal Hold Application & Release
Given a compliance officer applies a legal hold scoped by user, patient, clinic, and/or date range with rationale and approver When the hold is saved Then all matching records become purge-ineligible before the next purge cycle And the hold record stores scope, rationale, approver identity, actor, timestamps, and optional expiration And any purge attempt that targets on-hold records is blocked and generates an alert And releasing the hold requires recording a release rationale and approver; release is audited with before/after snapshots
Residency-Aware Storage Enforcement
Given a tenant residency setting of US or EU When audit data is written, archived, backed up, or restored Then all storage locations used reside within the tenant’s residency region And cross-region replication and transfers for audit data are prevented And each stored object includes region metadata that is reportable via API And any attempted cross-region operation is blocked and logged as a security event
Pre-Expiry Notifications & Upcoming Purge Surfacing
Given audit records whose retention end date falls within the tenant-configured notification window When the daily notification job runs Then designated tenant contacts receive a pre-expiry notification listing counts, scopes, earliest purge date, and a link to place a legal hold And the Admin UI shows an Upcoming Purges view with filters (scope, date range, residency) and CSV export And notification delivery outcomes (sent, bounced, retried) are logged; failures generate an alert And items are removed from Upcoming Purges once purged or placed on hold
Periodic Recoverability Testing & Compliance Reporting
Given a scheduled recoverability test period When the recoverability test executes Then a sample of archived audit records is restored to a non-production environment and validated for integrity (hash matches) and schema completeness And recovery RTO/RPO metrics are captured and compared to configured thresholds with pass/fail results And a signed compliance report is generated that includes policy versions and effective dates, active holds, executed purges with manifests, residency evidence, archival immutability settings, and test outcomes And the report is downloadable via UI and API; all report generation and access events are audited

Coach Mode

A caregiver-only view that surfaces real-time form flags, step-by-step cues, and streak progress—without exposing diagnoses or full charts. Empowers family helpers to coach confidently while keeping medical details private.

Requirements

Caregiver Invite & Consent Linking
"As a patient, I want to securely invite a family caregiver and control what they can see so that I get help during home exercises without exposing my private medical information."
Description

Provide a secure, patient-controlled flow to invite a caregiver to Coach Mode via SMS/email or shareable code, bind the caregiver to the patient profile, and granularly scope what the caregiver can access (exercise list, cues, rep counts, form flags, adherence summary) without exposing diagnoses, clinician notes, or PHI. Includes consent capture, expiration, revocation, multi-caregiver support, and audit logging. Integrates with MoveMate identity, roles/permissions, and clinic roster APIs to ensure least-privilege access and traceability across mobile and web surfaces.
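The single-use, TTL-bound invite link described above could be implemented with an HMAC-signed token. This is a minimal sketch under stated assumptions: the payload fields, the `invite_expired_or_used` error string (taken from the acceptance criteria), and the in-memory redemption set are illustrative; production would persist redemptions, rotate keys, and verify OTP before binding.

```python
import base64, hashlib, hmac, json, time

SECRET = b"server-side-signing-key"     # hypothetical key, never sent to clients
REDEEMED = set()                        # single-use tracking (a DB in practice)

def mint_invite(patient_id: str, scopes: list, ttl_s: float = 72 * 3600) -> str:
    """Signed token encoding patient ID, granted scopes, and expiry."""
    payload = {"pid": patient_id, "scopes": sorted(scopes), "exp": time.time() + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def redeem_invite(token: str):
    """Returns (payload, None) on success, (None, error_code) on failure."""
    body, _, sig = token.partition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):   # constant-time comparison
        return None, "invalid_signature"
    payload = json.loads(base64.urlsafe_b64decode(body))
    if time.time() > payload["exp"] or token in REDEEMED:
        return None, "invite_expired_or_used"
    REDEEMED.add(token)
    return payload, None
```

Expiry and reuse deliberately return the same error code, so a caller cannot distinguish a replayed link from a stale one.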

Acceptance Criteria
SMS Invite with Patient Consent
- Given an authenticated patient on MoveMate, When they choose "Invite Caregiver" via SMS and select one or more access scopes, Then the system generates a single-use invite link with a signed token that encodes patient ID, selected scopes, and a configurable TTL (default 72h) and sends it via SMS to the entered number.
- When the caregiver opens the link, Then they must verify ownership of the destination phone via OTP and accept the presented consent summary; upon success, a consent record is created with patient ID, caregiver identity, scopes, issued_at, expires_at, inviter, channel=SMS.
- Then the caregiver is assigned the caregiver role and is bound to the patient profile; Coach Mode becomes available within 30 seconds on mobile and web.
- When the link is used a second time or after TTL expiry, Then redemption is rejected with error invite_expired_or_used, no binding occurs, and the event is logged.
Shareable Code Invite with Pre-Selected Scopes
- Given an authenticated patient, When they generate a shareable code for Coach Mode and select scopes, Then the app displays a one-time alphanumeric code (8–12 chars) with a configurable TTL (default 15m) and optional label for the caregiver.
- When a caregiver enters the code on mobile or web and authenticates (email or phone OTP), Then the system binds the caregiver to the patient with the pre-selected scopes and records consent; the code is invalidated immediately after first successful use.
- When the code TTL expires or the code is entered incorrectly more than 5 times within 10 minutes, Then further attempts are blocked and logged, and no access is granted.
- Then the patient receives an in-app confirmation that shows caregiver display name/label, scopes granted, and expiration date.
Scope Enforcement and PHI Redaction in Coach Mode
- Given a caregiver with an active, scoped consent, When they access Coach Mode for the linked patient on mobile or web, Then they can only view the following data within UI and API: exercise list, step-by-step cues, rep counts, form flags, and adherence summary.
- Then the following are not accessible and not returned by any API: diagnoses (codes/descriptions), clinician notes, and PHI/PII fields (DOB, address, phone, email, insurance, MRN, full chart documents); attempts return HTTP 403 with error insufficient_scope and are logged.
- Then backend authorization uses MoveMate roles/permissions such that caregiver role cannot access any clinic roster endpoints and does not appear as clinic staff; least-privilege is enforced on every request.
- Then payloads in real-time events and push notifications to caregivers contain no PHI/PII and include only permitted fields per granted scopes.
Consent Expiration and Automatic Access Revocation
- Given a caregiver consent with an expires_at timestamp, When current time passes expires_at, Then the caregiver’s access to the patient is revoked within 5 minutes across all sessions and devices.
- Then any active caregiver API calls after expiration receive HTTP 401/403 with error consent_expired, and Coach Mode UI shows an access expired message with re-invite instructions.
- Then both patient and caregiver receive a notification of expiration (in-app and email/SMS channel matching original invite), and the audit log records the auto-revocation.
Patient-Initiated Revocation
- Given a patient with one or more active caregiver links, When the patient revokes a specific caregiver from Settings > Coach Mode Access, Then the binding is removed and sessions for that caregiver are invalidated within 60 seconds.
- Then the caregiver immediately loses access to the patient context; subsequent requests return HTTP 403 with error access_revoked.
- Then the system writes an audit entry (actor=patient, action=revoke, caregiver_id, patient_id, timestamp, reason optional) and sends a notification to the caregiver confirming revocation.
Multi-Caregiver Management
- Given a patient, When they manage caregivers, Then the system supports at least 5 concurrent active caregivers per patient without conflict.
- Then scope changes, expirations, or revocations applied to one caregiver do not affect others.
- Then the patient can view a list of caregivers with status badges (Pending, Active, Expired, Revoked), last access time, scopes, and can edit scopes or revoke per caregiver.
Audit Logging and Traceability
- Given system activity related to caregiver access, Then the following events are logged with immutable, queryable records: invite_created, code_generated, consent_accepted, scope_updated, access_revoked, consent_expired, caregiver_login, caregiver_logout, data_read (by resource), access_denied.
- Then each log entry includes timestamp (UTC ISO 8601), actor_type and actor_id, caregiver_id, patient_id, surface (iOS/Android/Web), IP/device ID, scopes, request_id, outcome (success/failure), and error code if applicable.
- Then authorized clinic admins can export logs for a patient over a date range via admin API and filter by event type; retention is at least 1 year.
Privacy‑Scoped Data View
"As a caregiver, I want a view that only shows the exercise plan, reps, and form alerts so that I can coach effectively without needing clinical details."
Description

Implement a caregiver-only UI that redacts PHI and clinical details while surfacing the minimum necessary data for coaching: current exercise step, live rep counts, form flags, and simple adherence indicators. Enforce server-side filtering and token-scoped queries so restricted fields cannot be fetched. Add visual privacy affordances (privacy badge, data scope tooltip) and safeguard screenshots/screen sharing where supported. Log all access for compliance and provide admin controls for default scopes at clinic level.
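Server-side filtering is safest as a whitelist of permitted fields rather than a blacklist of PHI, so restricted fields are omitted entirely (never returned as null) regardless of what the client requests. A minimal sketch; the field names follow this section's acceptance criteria, but the function itself is an illustrative assumption.

```python
# Whitelisted fields for the caregiver (coach:read-minimum) scope
CAREGIVER_FIELDS = {"current_exercise_step", "live_rep_count",
                    "form_flags", "adherence_indicator", "patient_alias"}

def filter_for_scope(payload: dict, allowed: set = CAREGIVER_FIELDS) -> dict:
    """Drop every field not explicitly whitelisted. New backend fields are
    therefore hidden from caregivers by default until explicitly allowed."""
    return {k: v for k, v in payload.items() if k in allowed}
```

The whitelist design choice matters: adding a new clinical field to the source record cannot accidentally leak it, because leaking would require an explicit whitelist change.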

Acceptance Criteria
Caregiver Views Coach Mode With PHI Redacted
Given I am a caregiver logged in with a valid caregiver token And I open Coach Mode for a specific patient When the Coach Mode screen loads Then the UI must not display PHI or clinical details, including full name, date of birth, address, phone, email, diagnoses, medications, clinician notes, visit history, MRN, or insurance And the patient identifier is limited to an alias or first-initial format And any redacted sections show a “Hidden in Caregiver View” placeholder
Server-Side Field Filtering and Token-Scoped Queries
Given a caregiver access token with scope coach:read-minimum When the client requests patient, exercise, or session endpoints Then responses include only whitelisted fields: current_exercise_step, live_rep_count, form_flags, adherence_indicator, patient_alias And restricted fields are omitted from payloads (not returned as null) And adding query parameters for restricted fields does not return those fields And direct requests to restricted endpoints or fields return HTTP 403 with error_code "scope_insufficient" And all such denials are logged with request_id and user_id
Real-Time Coaching Data Only (Step, Reps, Flags, Adherence)
Given an active exercise session is in progress When the caregiver is viewing Coach Mode Then the current exercise step text is visible And live rep count updates within 1 second of each detected rep under normal network conditions And form flags appear within 1 second of detection with a brief cue message And an adherence indicator (e.g., streak days or weekly completion %) is displayed And no additional clinical metrics (e.g., diagnosis text, pain scores, visit notes, ROM history) are shown
Privacy UI Affordances Displayed in Coach Mode
Given Coach Mode is open When the screen renders the header Then a visible Privacy Badge labeled "Caregiver View" is present And a Data Scope tooltip is accessible via tap or focus that explains the minimal data shown (step, reps, flags, adherence) And the badge and tooltip appear consistently on all Coach Mode screens and modals And the badge cannot be dismissed or hidden during the session
Screenshot and Screen-Sharing Safeguards
Given I am using an Android device When Coach Mode is active Then screenshots and screen recordings are blocked using FLAG_SECURE Given I am using an iOS device When system screen recording or AirPlay is detected while Coach Mode is active Then sensitive regions (anything beyond step, reps, flags, adherence, alias) are masked and a persistent privacy banner is shown And the event is logged with user_id and device info
Access Logging and Audit Readiness
Given any caregiver access to Coach Mode data occurs When API responses are served Then an audit log entry is written including timestamp, user_id, patient_id (hashed or pseudonymized), route, scope, fields returned, IP/device, and request_id And logs are immutable and retained per clinic retention policy (default 6 years) And authorized compliance admins can query logs by user_id, patient_id, date range, and outcome
Clinic Admin Configures Default Caregiver Scopes
Given I am a clinic admin When I open Clinic Settings > Coach Mode Defaults Then I can enable/disable visibility of allowed data elements: current step, live reps, form flags, adherence indicator, patient alias And PHI/clinical details remain non-configurable and always redacted And changes save successfully and are audit logged And new caregiver links inherit the updated defaults immediately And existing caregiver sessions pick up changes on next refresh or within 5 minutes, whichever comes first
Real‑time Form Flag Streaming
"As a caregiver, I want to receive real-time rep counts and form flags during a session so that I can correct form immediately."
Description

Stream computer‑vision events (rep detections, form deviations, posture warnings) from the patient device to the caregiver’s Coach Mode in near real time with sub‑second perceived latency. Use resilient WebSocket transport with backoff, message ordering, and replay for brief disconnects. Define a compact, versioned event schema, client-side thresholding to reduce noise, and fallback to periodic summaries when bandwidth is constrained. Surface clear, actionable flags and synchronize timestamps with the patient session timeline for accurate coaching.
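The ordering, deduplication, and gap-replay behavior can be handled client-side with a small reassembly buffer keyed by sequence number. Illustrative only; the class and method names are assumptions, and a real client would pair `missing()` with a replay request over the WebSocket.

```python
class EventReassembler:
    """Reorders events by sequence number, drops duplicates, and reports
    gaps so missed sequences can be requested from the server's replay buffer."""

    def __init__(self):
        self.next_seq = 1
        self.pending = {}          # out-of-order events parked by sequence

    def ingest(self, event: dict) -> list:
        """Accept one event; return the events now deliverable in order."""
        seq = event["sequence"]
        if seq < self.next_seq or seq in self.pending:
            return []              # duplicate delivery: discard
        self.pending[seq] = event
        ready = []
        while self.next_seq in self.pending:
            ready.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return ready

    def missing(self) -> list:
        """Sequence numbers to request for replay after a reconnect."""
        if not self.pending:
            return []
        return [s for s in range(self.next_seq, max(self.pending))
                if s not in self.pending]
```

Because delivery is gated on `next_seq`, the UI sequence strictly increases even when the transport delivers events out of order or twice.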

Acceptance Criteria
Sub-Second Real-Time Streaming of Form Events
Given a patient session is active and a caregiver is viewing Coach Mode on a stable network (≥1 Mbps, RTT ≤150 ms) When a rep detection, form deviation, or posture warning is produced on the patient device Then the corresponding event renders in Coach Mode within 800 ms end-to-end at the 95th percentile and within 300 ms at the median Given transient network variability When latency thresholds are exceeded for >5 seconds Then the UI displays a "degraded" badge within 2 seconds while continuing to render events in arrival order
Resilient WebSocket with Backoff, Ordering, and Replay
Given an established WebSocket stream When a disconnect of ≤10 seconds occurs Then the client attempts exponential backoff reconnect starting at 500 ms with a cap of 2 seconds and resubscribes preserving session context Given messages carry monotonically increasing sequence numbers When the caregiver reconnects Then any missing sequences are replayed within 2 seconds of reconnection without duplicates Given out-of-order or duplicate deliveries When processing events Then Coach Mode reorders by sequence and discards duplicates so the UI sequence strictly increases Given a sustained outage >60 seconds When streaming cannot be restored Then the system enters summary mode and displays a "summary mode" banner within 3 seconds
Compact, Versioned, Privacy-Safe Event Schema
Given the event schema v1.x When emitting events Then each event is ≤400 bytes on average and includes only: version, session_id, exercise_id, event_type [rep|form_deviation|posture_warning], severity, confidence, sequence, event_ts, device_ts, cue_id, and minimal parameters; no diagnosis, notes, or patient identifiers are present Given a client on schema v1.0 receiving events v1.1 with additive fields When parsing Then unknown fields are ignored without failure and required fields validate against the JSON Schema Given an event containing prohibited fields (diagnosis, chart notes, PHI) When validated server-side Then the event is rejected with 422 and is not forwarded to Coach Mode
Client-Side Thresholding and Noise Reduction
Given exercise-specific thresholds configured by the clinician When form deviation confidence is below threshold or duration <250 ms Then no flag event is emitted Given posture fluctuations When generating posture_warning events Then rate limiting ensures ≤1 posture_warning per 2 seconds per exercise unless severity is High Given rep detection evaluation on the validation set When thresholds are applied Then precision ≥0.95 and false positive rate ≤1 per 5 minutes
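The thresholding and rate-limiting rules above can be sketched as a small client-side gate. The event dictionary shape and the lowercase severity label are assumptions for illustration; thresholds default to the values in the criteria (confidence gate, 250 ms minimum duration, one posture warning per 2 s unless severity is high).

```python
class FlagGate:
    """Client-side noise gate: drop low-confidence or too-brief form
    deviations, and rate-limit posture warnings to one per window
    unless severity is high."""

    def __init__(self, min_conf=0.8, min_dur_ms=250, posture_window_s=2.0):
        self.min_conf = min_conf              # assumed default threshold
        self.min_dur_ms = min_dur_ms
        self.posture_window_s = posture_window_s
        self._last_posture = float("-inf")

    def should_emit(self, event: dict, now: float) -> bool:
        if event["type"] == "form_deviation":
            return (event["confidence"] >= self.min_conf
                    and event["duration_ms"] >= self.min_dur_ms)
        if event["type"] == "posture_warning":
            if (event["severity"] == "high"
                    or now - self._last_posture >= self.posture_window_s):
                self._last_posture = now
                return True
            return False
        return True   # rep events always pass through
```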
Fallback to Periodic Summaries Under Constrained Bandwidth
Given measured link capacity <50 kbps sustained for 10 seconds or RTT >1 s for 10 seconds When streaming is active Then the system switches to summary mode within 3 seconds and reduces transmissions to 1 summary every 5 seconds Given summary mode When sending a summary Then it includes time window, total reps, count of each flag type by severity, top 3 actionable cues, and last received sequence number, with payload ≤600 bytes Given network health recovers (capacity ≥200 kbps and RTT ≤200 ms for 30 seconds) When in summary mode Then the system resumes real-time streaming and reconciles counts by replaying any missing events
Actionable, Privacy-Preserving Flag Presentation in Coach Mode
Given a form deviation event arrives When rendering the UI Then Coach Mode displays a concise cue (≤90 characters), severity color, and a single next-step tip mapped by cue_id within 200 ms of receipt Given the caregiver-only view When showing events Then no diagnosis names, chart notes, or patient identifiers are visible; only exercise name, cue text, severity, and streak progress are displayed Given accessibility settings are enabled When a High severity warning arrives Then visual contrast is ≥4.5:1 and optional haptic/audio alert triggers if toggled on
Timestamp Synchronization with Patient Session Timeline
Given session start When establishing the stream Then a clock sync handshake computes offset and jitter to align event_ts to the session timeline within ±100 ms Given drift accumulates during the session When the offset exceeds 50 ms Then resync occurs within 5 seconds without interrupting the stream and timestamps update smoothly without visible backward jumps Given the caregiver scrubs the session timeline When cross-validating against rep markers Then events align within ±100 ms for ≥95% of events
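The clock-sync handshake can use the standard NTP-style four-timestamp estimate; the function and variable names below are assumptions, shown only to make the offset arithmetic concrete.

```python
def clock_offset(t0: float, t1: float, t2: float, t3: float):
    """NTP-style estimate. t0/t3 are caregiver-side send/receive times,
    t1/t2 are patient-device receive/send times. Returns (offset, delay):
    offset is how far the patient clock is ahead of the caregiver clock,
    delay is the round-trip time excluding device processing."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay
```

Repeating the handshake and keeping the sample with the smallest delay is a common way to stay within the ±100 ms alignment budget; resyncing when the estimate drifts past 50 ms matches the criterion above.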
Step‑by‑Step Cues & Auto‑Pause Controls
"As a caregiver, I want step-by-step visual and audio cues I can trigger or replay so that I can guide the patient confidently."
Description

Present concise, stepwise exercise instructions with tappable cues (repeat, slow down, next step) and optional text-to-speech, haptics, and large-type accessibility. Allow Coach Mode to trigger a patient-side instruction replay or auto‑pause when critical form deviations occur, with clear resume controls. Localize cue content and respect clinician-prescribed variations. Cache cue assets offline and maintain parity with the exercise library so updates propagate without app releases.

Acceptance Criteria
Tappable Stepwise Cues During Active Exercise
Given a patient is performing a prescribed exercise in Coach Mode When the step instruction is displayed Then the cues “Repeat”, “Slow down”, and “Next step” are visible, tappable, and labeled in the user’s locale Given the coach or patient taps “Repeat” When the tap is registered Then the current step’s instruction is replayed within 300 ms and a replay event is logged with timestamp and step ID Given the coach taps “Slow down” When the tap is registered Then the app displays a clear slowdown cue to the patient within 300 ms and reduces pace guidance by at least 20% for the duration of the step Given the coach taps “Next step” When the tap is registered Then the app advances to the next step within 300 ms and the new step index is announced per enabled modalities (visual/TTS/haptics)
Coach-Triggered Instruction Replay Without PHI Exposure
Given Coach Mode is active and the coach requests an instruction replay When the replay is triggered from the coach view Then the patient device replays only the current step’s instructions and does not display diagnoses, notes, or chart data in either view Given a replay action occurs When the event is logged Then the log contains session ID, exercise ID, step ID, actor role (coach), and timestamp, but no protected health information (PHI) Given the patient device is locked or backgrounded When the coach triggers replay Then the action is queued and executes within 2 seconds of the patient app returning to foreground, or the coach is shown a non-PHI error if the session has ended
Auto‑Pause on Critical Form Deviation with Clear Resume Controls
Given computer vision flags a critical form deviation during a step When the severity is marked critical Then the exercise auto‑pauses within 500 ms, the on‑screen overlay states the deviation in lay terms, and the coach view shows an alert without PHI Given the exercise is auto‑paused When the user chooses Resume Then a 3‑second countdown is displayed with haptic tick (if enabled) and the exercise resumes from the same step and rep Given multiple deviations occur while paused When resume is tapped Then only the most recent deviation note is shown and older ones are accessible via a non-PHI “See details” link Given a non‑critical deviation is detected When severity is not critical Then auto‑pause does not trigger and only guidance cues are shown
Accessibility: Text‑to‑Speech, Haptics, and Large‑Type Controls
Given accessibility options are available for a patient profile When TTS is enabled Then each step instruction is spoken within 300 ms of step start and respects the selected voice and locale Given haptics are enabled and supported by the device When a cue (repeat/slow down/next step) is triggered Then a distinct haptic pattern plays within 200 ms; if unsupported, no error is shown and visual cues persist Given large‑type mode is enabled When step instructions are displayed Then font size is at least 18 pt (or platform equivalent) with a contrast ratio of ≥ 4.5:1 Given accessibility settings are changed during a session When toggled by coach or patient (per permission) Then changes take effect on the next step transition and persist for subsequent sessions
Localization and Clinician‑Prescribed Variations Compliance
Given the device locale is supported (e.g., es‑ES) When an exercise with clinician‑prescribed step variations is started Then all cues and step text are shown in the device locale with the prescribed variations applied, and no mixed-language strings appear Given the device locale is unsupported When the exercise starts Then cues fall back to English (en‑US) and a one‑time non-blocking notice indicates the fallback Given a clinician prescribes a custom cue (e.g., “hold 2s at top”) When the exercise runs Then the custom cue appears at the correct step and timing and is included in TTS and haptics if enabled
Offline Caching of Cue Assets with Graceful Degradation
Given a patient has a scheduled exercise session When the app has network connectivity prior to the session Then all required cue assets (text, icons, localized strings, and pre-rendered TTS where applicable) are cached to device storage before session start Given the network becomes unavailable during a session When cues need to be presented Then cues render from cache without errors, TTS uses cached audio or on-device synthesis, and a lightweight “Offline” indicator is shown Given cache validation occurs daily When stale or partial assets are detected Then the app re-fetches only missing or outdated assets on next connectivity without blocking the user
Exercise Library Parity and Update Propagation Without App Release
Given the exercise library updates cue content on the server When the app next syncs (at launch or within 1 hour during use) Then the updated cue content and metadata are pulled and used without requiring an app update Given a cue content update is received When versioning metadata is applied Then both coach and patient views present the same content version within the session, and the version ID is included in analytics logs Given an update payload fails integrity checks When the app validates checksums Then the app retains the last known-good version, logs the failure, and retries on next sync
Streak & Progress Snapshot
"As a caregiver, I want a simple progress snapshot like streaks and adherence so that I can encourage consistency."
Description

Display a lightweight adherence summary tailored for caregivers: active streak, completion percentage for assigned sessions, last session date, and recent achievements. Exclude sensitive clinical metrics while keeping terminology layperson-friendly. Handle timezone differences, daylight savings, and partial completions consistently with patient dashboards. Support tap-through to session history limited to non-PHI data and ensure numbers reconcile with clinician dashboards to build trust.
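Computing the streak in the patient's IANA timezone, rather than the caregiver's timezone or UTC, is what keeps day boundaries stable across DST and timezone differences. A sketch under assumed inputs (UTC session timestamps and a timezone name); the "counts if it ends today or yesterday" rule is one common convention, not a confirmed MoveMate behavior.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def active_streak(session_times_utc: list, patient_tz: str, today=None) -> int:
    """Consecutive calendar days (in the patient's timezone) with at least
    one completed session, ending today or yesterday."""
    tz = ZoneInfo(patient_tz)
    days = {t.astimezone(tz).date() for t in session_times_utc}
    today = today or datetime.now(tz).date()
    # A streak is still "active" if the most recent day is yesterday.
    anchor = today if today in days else today - timedelta(days=1)
    streak = 0
    while anchor in days:
        streak += 1
        anchor -= timedelta(days=1)
    return streak
```

Note how a session at 03:00 UTC lands on the previous calendar day in US/Eastern; doing this conversion identically on patient, caregiver, and clinician dashboards is what makes the numbers reconcile.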

Acceptance Criteria
Coach Mode Snapshot: Core Fields Rendering
Given a caregiver is signed into Coach Mode and opens a patient's Streak & Progress Snapshot When the snapshot loads Then it displays exactly these elements: Active streak (whole days), Completion percentage (current adherence period), Last session date (calendar date), and up to three Recent achievements with friendly names/icons And no additional clinical metrics or charts are shown in this snapshot And labels use layperson-friendly terms: "Active streak", "Completion", "Last session", "Achievements" And formats are: streak as integer (no decimals), completion as whole percent 0–100 with standard rounding, date as "MMM D, YYYY"
Privacy Guardrails for Snapshot
Rule: Snapshot must not display or link to diagnoses, clinician notes, charts, vitals, targets, pain scores, DOB, MRN, insurance, contact info, or clinic identifiers
Rule: Permitted data are limited to: streak days, completion %, last session calendar date, achievement names/icons, and link to non-PHI session history only
Rule: Coach Mode API responses for this view exclude PHI fields; attempts to query PHI are rejected (HTTP 403) or filtered server-side
Test: Manual and automated UI scans detect zero PHI strings/values in snapshot and linked session history
Timezone and DST Consistent Calculations
Given a patient primary timezone TZp and caregiver local timezone TZc (which may differ) When computing active streak, completion %, and last session date Then all calculations are performed in TZp, matching patient and clinician dashboards And the displayed last session is the calendar date in TZp (formatted "MMM D, YYYY") And unit tests cover events within ±2 hours of midnight and across DST transitions (start/end) for multiple regions; no off-by-one-day errors And all timestamps are stored in UTC and converted using TZp for calculations
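The TZp rule above (store UTC, convert with the patient's timezone for all calendar math) can be illustrated with Python's stdlib zoneinfo. The "MMM D, YYYY" format follows the criteria; the function name is an assumed sketch:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def last_session_date(utc_ts: datetime, patient_tz: str) -> str:
    """Render a UTC event time as the patient's calendar date ("MMM D, YYYY").

    Timestamps are stored in UTC and converted with the patient's primary
    timezone (TZp) so the date matches patient and clinician dashboards.
    """
    local = utc_ts.astimezone(ZoneInfo(patient_tz))
    return f"{local.strftime('%b')} {local.day}, {local.year}"

# 04:30 UTC on 2024-03-10 is still 23:30 on March 9 in New York (EST, UTC-5),
# so computing in the caregiver's timezone instead of TZp would be off by a day.
```

Because the conversion always uses TZp, the ±2-hours-around-midnight and DST-transition cases in the criteria reduce to ordinary timezone-aware conversions.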
Partial Completions Alignment
Given sessions with partial completion outcomes across a date range When computing the completion percentage for the assigned sessions Then the adherence policy and counting logic are identical to the clinician dashboard And completion % equals the clinician dashboard value for the same dataset and sync timestamp (tolerance: 0%) And rounding is to nearest whole percent with .5 rounding up And automated tests cover 0%, 33%, 66%, 100%, and mixed partials; all assertions pass
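The ".5 rounds up" requirement is worth making explicit, because Python's built-in `round()` uses half-even ("banker's") rounding, which would fail the 12.5 → 13 case. A minimal sketch of the stated policy, with hypothetical names:

```python
from decimal import Decimal, ROUND_HALF_UP

def completion_percent(completed: int, assigned: int) -> int:
    """Whole-percent adherence; .5 rounds up (unlike Python's half-even round())."""
    if assigned == 0:
        return 0  # assumed convention for an empty assignment window
    pct = Decimal(completed) / Decimal(assigned) * 100
    return int(pct.quantize(Decimal("1"), rounding=ROUND_HALF_UP))
```

With this policy 1 of 8 sessions (12.5%) displays as 13%, matching on both dashboards, whereas `round(12.5)` would give 12.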
Tap-through to Non-PHI Session History
Given a caregiver taps the snapshot area When the session history screen opens Then it lists sessions with only: session calendar date (TZp), total reps, form-flag count, and a duration bucket (Short/Medium/Long), plus achievements earned if any And it does not show: diagnoses, clinician names, notes, exercise targets, pain scores, range-of-motion values, media, or any patient identifiers And no navigation path from session history reveals PHI; restricted routes show "Not available in Coach Mode" And Back returns to the snapshot without data loss
Reconciliation with Clinician Dashboard Numbers
Given the clinician dashboard and Coach Mode snapshot are viewed within the same data freshness window (≤ 60 seconds) When comparing Active streak, Completion %, and Last session date for the same patient Then values match exactly (0 difference), excluding up to 60 seconds of display latency after a new session sync And any mismatch triggers a telemetry event logging both values and an anonymized patient ID for investigation
Layperson-Friendly Copy and Labels
Rule: All visible labels and helper texts in snapshot and session history achieve Flesch-Kincaid Grade ≤ 8.0
Rule: No medical jargon or abbreviations appear (e.g., no "ROM", "Dx", "HHD"); approved terms list enforced by linting
Test: Empty states and tooltips use plain-language guidance and a clear call-to-action
Test: Content review checklist signed off by UX/copy with samples in production strings
Session Handoff & Join Flow
"As a clinician, I want a simple handoff flow that lets me start or schedule a caregiver coaching session so that setup takes minimal time and reduces support burden."
Description

Enable low-friction session start and join: clinician can schedule or start a Coach Mode session and send a deep link or short join code; caregiver can join in one tap without accessing the full chart. Use short‑lived, single‑scope tokens for session access, device compatibility checks (camera/mic permissions, network), and clear state indicators (waiting, live, ended). Support both synchronous coaching and asynchronous review windows with appropriate access durations and notifications.

Acceptance Criteria
One‑Tap Deep Link Join — Synchronous Coach Mode
Given a clinician schedules or starts a Coach Mode session and sends a deep link And the deep link token is single-scope to the session with a 30‑minute TTL When the caregiver taps the link on a compatible device Then the app opens directly to the session’s Coach Mode pre-join screen without login or chart access And upon passing pre-join checks, the caregiver enters the Waiting state if the clinician has not started the session And if the clinician is already live, the caregiver joins the Live state within 3 seconds of passing checks And if the link is expired or already used after the session ends, the user sees an Expired Link message with a one-tap request-new-link action And no PHI (diagnoses, full chart, notes) is displayed at any point in this flow
Short Join Code Entry — Synchronous Coach Mode
Given a clinician creates a short join code for a scheduled or live session And the code is 6 characters, case-insensitive, and valid for 10 minutes from issuance When the caregiver enters the code on the Join screen Then the code is verified within 2 seconds and the caregiver advances to the pre-join checks And invalid or expired codes show a neutral error without revealing session existence And entry attempts are rate-limited to 5 per minute per device/IP with a 60‑second cool-off on exceeding the limit And once the session ends, the code becomes invalid immediately
Pre‑Join Device & Network Compatibility Checks
Given a caregiver initiates join via deep link or code When the pre-join flow runs device checks Then camera and microphone permissions are requested if not granted, with clear prompts And the user cannot proceed to Live state until camera and mic are both allowed And a network test confirms ≥300 kbps upstream and <250 ms latency; otherwise, a guidance screen with retry is shown And the compatibility check completes in ≤5 seconds on a healthy connection And failures are logged client- and server-side with non-PII diagnostics
Session State Indicators & Transitions
Given a Coach Mode session exists When the caregiver opens the session before the clinician goes live Then the UI shows “Waiting” with explanatory text and join time
When the clinician starts the session Then the caregiver’s state updates to “LIVE” within 3 seconds and real-time cues/flags are visible
When the clinician ends the session or the session times out after 5 minutes of no participants Then the caregiver sees “Ended” with timestamp and an option to view the asynchronous review (if enabled)
And re-entering an ended session does not restore live media or cues
Single‑Scope Tokens & Privacy Boundaries
Given session access uses short‑lived, single‑scope tokens When a caregiver uses a token to access endpoints Then only Coach Mode session resources (real-time flags, cues, rep counts, session state) are accessible And attempts to access patient chart, diagnoses, notes, or other sessions return 403 and are audited And live join tokens expire at 30 minutes after issuance or immediately upon session end, whichever comes first, with ±2 minutes clock skew tolerance And tokens are non-refreshable and are revoked on clinician end, link revocation, or suspected abuse And token payload excludes PHI and includes audience "coach-session", sessionId, scope, and exp
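A minimal claim check consistent with the token rules above (single scope, session binding, hard expiry with ±2 minutes skew, no refresh path). Only the claim names listed in the criteria (audience "coach-session", sessionId, scope, exp) are from the source; the function itself is a hypothetical sketch, not the real middleware:

```python
import time

def validate_token_claims(claims: dict, session_id: str, now: float = None) -> bool:
    """Accept only an unexpired token bound to this one Coach Mode session."""
    now = time.time() if now is None else now
    skew = 120  # ±2 minutes clock-skew tolerance from the criteria
    return (
        claims.get("aud") == "coach-session"
        and claims.get("sessionId") == session_id      # other sessions -> 403
        and "scope" in claims                          # single-scope claim required
        and claims.get("exp", 0) + skew > now          # non-refreshable expiry
    )
```

Any request whose claims fail this predicate would be rejected with 403 and audited, per the criteria; signature verification and revocation checks would sit in front of it.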
Asynchronous Review Window & Notifications
Given a clinician enables an asynchronous review window for a session When the live session ends Then the caregiver may access a read-only review for 24 hours containing rep totals, flagged form events, and streak progress only (no video or chart) And access beyond 24 hours returns an Expired Review message with a request-new-access workflow And the caregiver receives a notification at window open and a reminder at T+12h if not viewed And the clinician receives a joined/first-view notification and a summary of caregiver engagement after the window closes

Cue Tuner

Personalize real-time coaching cues to fit each patient’s needs. Adjust visual overlays (size, contrast, color‑blind palettes), pick voice vs. beep prompts, and set haptic strength and frequency. Clinicians can save defaults per protocol and patient profile to reduce cognitive load, boost comprehension, and keep focus on safe movement.

Requirements

Adaptive Visual Overlays
"As a clinician, I want to customize visual overlays for each exercise so that patients can clearly see and understand real-time form guidance regardless of their visual needs."
Description

Provide fine-grained controls to personalize on-screen coaching elements, including overlay size, thickness, position, opacity, contrast, and selectable color‑blind–safe palettes. Support presets for high-contrast and low-vision modes, adjustable target zones, and error highlight styles. Integrate with the computer-vision pipeline to ensure overlays align with detected joints and movement landmarks in real time without adding perceptible latency. Persist selections per exercise and session, and ensure the UI is accessible (screen reader labels, large touch targets) to reduce cognitive load for clinicians and improve comprehension for patients.

Acceptance Criteria
Real-time Overlay Adjustment During Live Session
Given a live exercise session with computer-vision tracking active When the clinician adjusts overlay size, thickness, position, or opacity via controls Then the overlay updates on screen within 100 ms of the input And the session frame rate remains ≥ 30 FPS And the overlay remains anchored to the tracked joints with ≤ 5 px drift during the adjustment
Color‑Blind Safe Palette Selection
Given the clinician selects a color-blind–safe palette (deuteranopia, protanopia, or tritanopia) When the palette is applied Then all overlay elements (joints, target zones, error highlights) are mutually distinguishable under the corresponding color‑blind simulation And text/label contrast against background is ≥ 4.5:1 (WCAG AA) And the selection is applied immediately without requiring a session reload
High-Contrast and Low‑Vision Presets
High Contrast preset enforces: overlay contrast ratios ≥ 7:1, line thickness ≥ 4 px, and label font size ≥ 18 pt
Low Vision preset enforces: overlay element sizes increased by ≥ 30% and interactive touch targets ≥ 44×44 pt
Applying either preset takes effect within 200 ms and is reversible via Reset
Pose‑Aligned Overlays With Minimal Latency
Given pose landmarks are updating at ≥ 30 Hz When overlays render against the latest landmarks Then average positional error between overlay anchors and CV landmarks is ≤ 5 px at 1080p And overlay update latency is ≤ 1 frame (≤ 16 ms at 60 fps) relative to landmark updates And overlay jitter after smoothing is ≤ 2 px RMS over a 3‑second window
Configurable Target Zones and Error Highlights
Given a clinician defines a target zone by drag handles or numeric entry When a tracked joint deviates beyond the configured tolerance Then the chosen error highlight style (glow/outline/shade) appears within 200 ms and persists for ≥ 1 s And the highlight clears within 200 ms when the joint returns within tolerance And target zones can be positioned and resized with 1 px resolution and optional 5 px snap increments
Per‑Exercise and Per‑Session Persistence
Given a patient with multiple exercises When the clinician customizes overlays for Exercise A and ends the session Then reopening Exercise A for that patient restores 100% of the customized overlay settings And Exercise B retains its distinct settings with no bleed‑over And activating Reset to Protocol Defaults restores protocol‑level settings within 1 s
Accessible Controls for Overlay Customization
All controls expose accessible names, roles, and current values to VoiceOver and TalkBack
Touch targets are ≥ 44×44 dp with ≥ 8 dp spacing
Focus order matches visual order without traps
Sliders/steppers are operable via keyboard and switch control with ≤ 5% value change per action
Each adjustment announces the new value via screen reader feedback
Audio Cue Selector & TTS
"As a patient, I want to choose between voice guidance and simple beeps so that the cues fit my preferences and environment."
Description

Enable configuration of auditory guidance, allowing users to choose between voice prompts (text-to-speech) and tone/beep patterns. Include controls for language, voice, speech rate, volume, and tone frequency, with per-exercise phrase templates (e.g., “keep knees over toes”). Pre-cache common prompts for offline use to prevent lag, apply noise-gating to improve clarity, and synchronize cues with rep detection events to maintain sub-150 ms perceived latency. Provide a testing panel to preview cues and confirm device audio routing (speaker vs. headphones).

Acceptance Criteria
Mode Selection: Voice vs. Tone
Given a clinician is configuring Cue Tuner for a patient and exercise When they open the Audio Cue Selector Then they can choose between Voice and Tone modes And the selected mode is saved per patient profile and per exercise And the selection is reflected immediately in preview playback And the selection persists after app relaunch
Audio Parameter Configuration Controls
Given Voice mode When the clinician adjusts language, voice, speech rate (0.5x–2.0x), and volume (0–100%) Then preview and subsequent prompts use those settings And available languages/voices list matches OS-installed TTS voices And speech rate and volume changes are applied within 200 ms
Given Tone mode When the clinician adjusts tone frequency (200–4000 Hz), pattern (single, double, triplet), and volume (0–100%) Then preview and subsequent tones reflect those settings And frequency accuracy is within ±5 Hz
Per-Exercise Phrase Templates
Given Voice mode and an exercise is selected When the clinician edits the phrase template for that exercise Then the template accepts at least 100 UTF-8 characters And the template is saved per exercise and can be set as a protocol or patient default And TTS preview speaks the template using current voice settings And reverting to default restores the protocol-level template
Pre-cached Prompts for Offline Use
Given a set of enabled phrase templates When the clinician initiates pre-caching or autosave occurs Then the app generates and stores audio for the top 20 prompts per patient across the assigned exercise list And a cache status indicator shows progress and success/fail per prompt And with the device in Airplane mode, cached prompts play with onset latency ≤ 150 ms And uncached prompts are generated at runtime with onset latency ≤ 500 ms
Noise-Gating for Output Clarity
Given noise gating is enabled When a spoken prompt with leading and trailing silence is played Then the output noise floor during silent segments is ≤ -60 dBFS And gate attack time is ≤ 50 ms and release time is ≤ 200 ms And no voiced segment is truncated (word boundary energy above -30 dBFS is preserved) And a toggle to enable/disable gating is available in the testing panel
Sub-150 ms Latency on Rep Events
Given rep detection events from the computer-vision engine When a cue is configured to fire on the event (start/end/peak) Then time from event timestamp to cue audio onset is ≤ 150 ms at the 95th percentile over 100 reps on a mid-tier device And worst-case latency is ≤ 200 ms And latency metrics are logged with timestamps and are exportable for QA
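The 95th-percentile requirement over 100 reps can be computed with the nearest-rank method; this helper is an illustrative sketch for QA scripts, not the app's actual telemetry tooling:

```python
import math

def p95(latencies_ms):
    """95th-percentile latency (nearest-rank method) over logged rep events."""
    s = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(s))  # smallest rank covering 95% of samples
    return s[rank - 1]
```

Over the 100-rep run in the criteria, `p95(samples) <= 150` checks the p95 bound and `max(samples) <= 200` checks the worst case.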
Testing Panel: Preview & Audio Routing
Given the testing panel is open When the clinician selects an output route (device speaker, wired headphones, Bluetooth) Then the current audio route is displayed and verified And left/right channel tests play to the correct channel And tapping Preview plays both a sample voice prompt and a sample tone using current settings And if the selected route is unavailable, an actionable error is shown and a fallback route is offered
Haptic Feedback Tuning
"As a patient, I want to adjust vibration strength and patterns so that I can feel cues without them being distracting or uncomfortable."
Description

Offer haptic configuration options, including intensity, pattern (short, long, double), and frequency tied to rep phases or error events. Provide a calibration test to feel patterns before saving and detect device capabilities to adapt haptics accordingly. Ensure haptic cues are synchronized with visual/audio prompts and respect system-level accessibility settings (e.g., reduced vibrations). Include fallbacks on devices without advanced haptics.

Acceptance Criteria
Intensity Calibration Preview and Save
Given a device that supports variable-intensity haptics, when the clinician adjusts the intensity slider from 0% to 100% in Calibration, then the preview plays within 150 ms of release and the perceived intensity increases monotonically across at least 5 discrete steps. Given a device that supports only on/off haptics, when the clinician opens Calibration, then the intensity control shows two states (Off/On) and variable levels are hidden or disabled. Given the clinician taps Save after previewing, when the settings screen is reopened for the same patient/protocol, then the selected intensity value is persisted and restored.
Pattern Selection with Tolerances and Preview
Given pattern options Short, Long, and Double, when the clinician selects each option and taps Preview, then the vibrations match these tolerances: Short 100 ms ±20 ms, Long 300 ms ±30 ms, Double two pulses of 100 ms separated by 150 ms ±20 ms. Given a device lacking multi-pulse support, when Double is selected, then the app emulates the pattern using supported APIs or displays a "Not supported" message and disables the selection. Given the clinician taps Save, when returning later, then the selected pattern is persisted for that patient/protocol.
Phase/Event-Coupled Haptic Frequency and Sync
Given rep phase detection is active, when the clinician maps haptics to Start, Concentric, Eccentric, and End phases and sets a cadence, then haptic cues fire at the detected phase events with end-to-end latency ≤50 ms relative to the visual overlay and audio cue. Given a form error event is raised, when Error Haptic is enabled, then an interrupting error pattern plays within 100 ms and is rate-limited to ≤2 per second. Given tracking is paused or the set ends, when no rep phases are detected, then no haptic cues fire and any ongoing vibration stops within 200 ms.
Respect System Accessibility: Reduced Vibrations
Given the OS setting Reduce Motion/Vibration or similar is enabled, when the app attempts to play haptics, then intensity is reduced by at least 50% or haptics are suppressed per platform guidance, and a non-blocking banner indicates "Haptics limited by system settings." Given the OS has haptics disabled globally, when entering the Haptics screen, then haptic controls are disabled, a notice is shown, and no vibration is triggered during previews or sessions.
Device Capability Detection and UI Adaptation
Given the app launches on a new device, when the Haptics screen loads, then the app queries device haptic capabilities and conditionally renders controls: intensity slider only for variable support, pattern options only for supported patterns, and frequency controls constrained to the device maximum. Given capabilities change (e.g., accessory connected/disconnected), when the app regains focus, then controls re-evaluate capabilities and update without requiring a restart.
Fallback Behavior on Devices Without Haptics
Given the device has no haptic engine, when a session starts, then no haptic API calls are made, no errors are shown to the user, and audio/visual cues continue as configured. Given no haptic engine, when opening Haptics settings, then a clear message explains that haptics are unavailable on this device and suggests using audio cues, and the Save button is disabled for haptic-only settings.
Per-Protocol & Per-Patient Defaults
"As a clinician, I want to save cue preferences as defaults for protocols and patients so that I don’t have to reconfigure settings every session."
Description

Allow clinicians to create, save, and apply cue presets as defaults at multiple scopes: global clinic, protocol/exercise library, and individual patient profile. Support inheritance and overrides (patient settings override protocol, which override global), bulk-apply to a plan of care, and quick reset to clinic standards. Include import/export of presets for sharing across clinicians and clinics while maintaining PHI separation. Persist settings securely and apply automatically when a patient starts an assigned exercise.
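The inheritance rule described above (patient settings override protocol, which override global) reduces to a first-non-null lookup; a minimal sketch with hypothetical names:

```python
def resolve_preset(patient_preset, protocol_preset, clinic_default):
    """Return the highest-priority cue preset: patient > protocol > clinic."""
    for preset in (patient_preset, protocol_preset, clinic_default):
        if preset is not None:
            return preset
    return None  # no preset configured at any scope
```

Removing a patient override simply makes its slot `None`, so the next exercise start falls through to the protocol preset or clinic default, as the acceptance criteria require.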

Acceptance Criteria
Save and Apply Global Clinic Default Cue Preset
Given I am an authenticated clinician with Clinic Admin permissions When I create a cue preset and save it as the Clinic Default Then the preset is persisted and labeled as Clinic Default in clinic settings And when any clinician assigns an exercise without protocol or patient overrides Then the clinic default preset is auto-applied at exercise start And an audit log entry records the action with user, scope, and timestamp
Save Protocol-Level Preset and Inheritance to Exercises
Given a protocol exists in the exercise library and a clinic default preset exists When I save a cue preset scoped to that protocol Then the protocol preset is displayed as active for that protocol And when a patient starts any exercise instance of that protocol without a patient override Then the protocol preset is applied instead of the clinic default
Patient-Level Override of Protocol Defaults
Given a patient has an assigned exercise from a protocol with a protocol-level preset When I save a cue preset scoped to that patient for the assigned exercise or protocol Then the patient preset is visible in the patient profile for that exercise/protocol And when the patient starts the assigned exercise Then the patient preset is applied over both protocol and clinic defaults And when I remove the patient preset Then the next start of the exercise reverts to the protocol preset (or clinic default if no protocol preset exists)
Bulk-Apply Preset to Plan of Care Exercises
Given a patient has a plan of care containing multiple exercises When I select Bulk Apply, choose a preset, choose target scope (patient or protocol), select exercises, and confirm Then the system applies the selected preset to all chosen exercises And the summary dialog displays the count of exercises updated equals the number selected, with any failures listed with reasons And starting any updated exercise uses the newly applied preset
Quick Reset to Clinic Standards
Given overrides exist at the patient or protocol scope When I select Reset to Clinic Standards and confirm Then the system removes the selected scope’s overrides and restores the clinic default And a success confirmation is shown and an audit log entry is recorded And subsequent exercise starts use the current clinic default
Import/Export Presets Without PHI Leakage
Given I have permission to manage presets When I export presets Then the export contains only clinic and protocol presets and no patient identifiers or PHI And the file includes metadata sufficient for import (scope, preset name, version, created-at) without PII And when I import the file into another clinic Then the presets are created as clinic/protocol presets in the destination clinic And duplicate names are resolved per chosen option (skip, overwrite, duplicate) And patient-scoped presets are excluded from export and are not created on import
Auto-Apply Presets on Exercise Start with Secure Persistence
Given a patient starts an assigned exercise in the app and applicable presets exist at clinic/protocol/patient scopes When the exercise start event occurs Then the system selects the highest-priority preset (patient > protocol > clinic) and applies it within 1 second And the applied settings persist locally for the session even if network connectivity drops And stored presets are encrypted at rest and are retrievable only by authenticated clinician accounts
Real-time Preview & Latency Meter
"As a clinician, I want to preview how cues behave during movement so that I can confirm they are timely and clear before prescribing them."
Description

Provide an interactive preview mode where clinicians and patients can test visual, audio, and haptic cues in real time using sample movements or recorded clips. Display an end-to-end latency meter for each cue channel and surface recommendations to reduce delay (e.g., disable high-load filters). Include safe-volume and haptic checks, and allow one-tap revert to previous settings after testing.

Acceptance Criteria
Preview Mode with Sample Movements and Recorded Clips
Given the clinician opens Cue Tuner in Preview Mode And at least one cue channel (visual, audio, or haptic) is enabled When they select "Sample Movements" and tap Play Then cues render in real time synchronized to sample motion and start within 2 seconds
When they select a recorded clip from the device gallery and tap Play Then playback begins within 2 seconds and cue timing stays within ±100 ms of clip timestamps for the first 2 minutes
When they tap Pause/Resume or scrub the timeline Then the preview responds within 150 ms and cues update accordingly without app crash or freeze
Per-Channel Latency Meter Display and Accuracy
Given Preview Mode is active When any cue is emitted Then a per-channel end-to-end latency (ms) is displayed for Visual, Audio, and Haptic channels And the meter updates at least 2 times per second while events occur And disabled or unsupported channels display "N/A"
When compared to an external instrumented baseline Then the displayed latency is within ±10 ms or ±10% (whichever is larger) for each channel
Latency Reduction Recommendations and Actions
Given Preview Mode is active And any channel’s 5-second rolling average latency exceeds its threshold (Visual >120 ms, Audio >100 ms, Haptic >150 ms) When this condition is detected Then a recommendations panel appears listing at least one actionable step (e.g., disable high-load filters, lower overlay resolution, reduce haptic frequency) And each recommendation provides Apply, Learn More, and Dismiss actions
When the user applies any recommendation Then settings change immediately, re-measurement begins, and the affected channel’s average latency decreases by ≥15% within 5 seconds or a message states "No measurable improvement" and offers Revert
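The 5-second rolling-average trigger can be sketched with a timestamped deque; class and parameter names are illustrative, and timestamps are assumed to be in seconds:

```python
from collections import deque

class RollingLatency:
    """Per-channel rolling-average latency over a sliding time window."""

    def __init__(self, window_s: float = 5.0, threshold_ms: float = 120.0):
        self.window_s = window_s
        self.threshold_ms = threshold_ms
        self.samples = deque()  # (timestamp_s, latency_ms) pairs

    def add(self, ts: float, latency_ms: float) -> None:
        self.samples.append((ts, latency_ms))
        # Age out samples older than the window.
        while self.samples and ts - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def over_threshold(self) -> bool:
        avg = sum(l for _, l in self.samples) / len(self.samples)
        return avg > self.threshold_ms
```

Each channel would hold one instance with its own threshold (e.g. 120 ms visual, 100 ms audio, 150 ms haptic); `over_threshold()` turning true is what would surface the recommendations panel.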
Safe-Volume Check and Test Tone
Given Preview Mode with audio cues enabled When device media volume >80% or the OS reports unsafe volume Then a warning banner appears and the Test Tone button requires one-tap confirmation before playback
When Test Tone is played Then tone duration is ≤1 second, respects OS safe-volume caps, and reflects the configured cue volume And the warning updates to indicate successful test
When headphones are connected Then the warning text references headphone safety and the meter remains visible
Haptic Strength Safety and Compatibility
Given Preview Mode with haptic cues enabled When the device supports haptics Then a 3‑pulse test can be triggered at the configured strength and frequency And pulses are emitted within ±20 ms of schedule And strength cannot exceed the device API maximum; unsupported frequencies are clamped with an inline validation message
When the device does not support haptics Then haptic controls are disabled, a message "Haptics unavailable on this device" is shown, and the latency meter for haptics displays "N/A"
One‑Tap Revert to Previous Settings
Given the user has changed one or more cue settings during the preview session When they tap "Revert to Previous Settings" Then all cue settings restore to the last saved defaults for the current protocol and patient profile within 1 second And a confirmation toast "Settings reverted" is shown And no test-session changes persist after exiting Preview unless explicitly saved
Accessible Visual Overlay Preview
Given Preview Mode with visual overlay controls visible When the user adjusts size, contrast, or selects a color‑blind palette Then overlay updates are rendered within 100 ms of each change And high-contrast mode meets WCAG 2.1 AA (≥4.5:1) for text and icons And color‑blind palettes produce distinct cue elements for protanopia, deuteranopia, and tritanopia in the simulated preview And a one-tap Reset restores default visual overlay settings
Capability Detection & Fallbacks
"As a patient, I want the app to adapt to my phone’s capabilities so that cues always work reliably without me troubleshooting settings."
Description

Automatically detect device capabilities (TTS availability, speaker/headphone state, haptic engine level, screen brightness/contrast range) and adjust available options and defaults. Gracefully degrade to supported cues (e.g., beeps instead of TTS, higher-contrast palette when brightness is limited) and inform users of any limitations. Maintain consistent behavior across iOS and Android, with platform-specific optimizations hidden behind a common interface.
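Hiding platform-specific optimizations behind a common interface might look like the following sketch; all class and method names here are assumptions for illustration, not MoveMate's API:

```python
from abc import ABC, abstractmethod

class CapabilityProbe(ABC):
    """Common interface; iOS/Android-specific probes live behind it."""

    @abstractmethod
    def tts_available(self) -> bool: ...

    @abstractmethod
    def haptic_level(self) -> str: ...  # e.g. "none", "basic", "variable"

def pick_audio_default(probe: CapabilityProbe) -> str:
    """Graceful degradation: fall back to beeps when TTS is unavailable."""
    return "tts" if probe.tts_available() else "beep"

# Fake probes standing in for the platform implementations.
class NoTTSProbe(CapabilityProbe):
    def tts_available(self) -> bool: return False
    def haptic_level(self) -> str: return "basic"

class FullProbe(CapabilityProbe):
    def tts_available(self) -> bool: return True
    def haptic_level(self) -> str: return "variable"
```

Because callers only see `CapabilityProbe`, the defaulting logic behaves identically on both platforms, which is what the cross-platform consistency criteria below test for.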

Acceptance Criteria
Auto-Detect TTS and Audio Route on Launch
Given the app launches to Cue Tuner, When capability probing initializes, Then TTS availability is determined within 1500 ms
Given TTS is unavailable, When audio cues are required by the active profile, Then the app sets the default audio cue to "Beep" and disables TTS selection in the UI
Given a wired or Bluetooth headset is the active audio route, When the first cue is played, Then audio output uses the current route and no error toast is shown
Given device permissions are insufficient to probe a capability, When probing runs, Then the app sets safe defaults and shows a single permission notice with a link to settings
Runtime Audio Route Change During Session
Given a session is active with audio cues, When headphones are connected, Then subsequent cues route to the new output within 1 second without interrupting the current cue
Given headphones are disconnected, When the next cue plays, Then output switches to device speaker and a one-time toast "Headphones disconnected—using speaker" appears
Given the audio route changes repeatedly within 10 seconds, When cues play, Then messaging is debounced to at most 1 notice per 10 seconds and no app crash occurs
Haptic Capability Detection and Fallback
Given the device lacks a haptic engine or supports only low-intensity vibration, When the Cue Tuner haptic settings are opened, Then the intensity slider range is constrained to supported values or the haptic option is disabled
Given haptics are unsupported, When a profile with haptic-only cues is loaded, Then the system switches to audio or visual cues according to the configured fallback order and records the mapping
Given haptics are partially supported, When the user selects an unsupported frequency, Then validation prevents the selection and shows an inline message "Not supported on this device"
Low Brightness/Contrast Adaptation
Given system brightness is below 20% or the device reports high-contrast mode unavailable, When visual overlays render, Then the app applies a high-contrast palette meeting WCAG AA (≥4.5:1) and increases overlay stroke by at least 2 px
Given the device cannot achieve the required contrast, When overlays render, Then the app switches to the highest-contrast palette and displays a one-time banner "Limited display contrast—using high-contrast visuals"
Given brightness increases above the threshold for 5 seconds, When overlays refresh, Then the original palette is restored without user intervention
Options Filtering and Default Mapping in Cue Tuner UI
Given device capability assessment is complete, When the Cue Tuner settings panel is opened, Then unsupported options are hidden or disabled with an info icon that explains the limitation
Given a saved patient/protocol default uses an unsupported cue on this device, When the profile loads, Then the default maps deterministically to the first supported fallback in the configured priority order and is persisted for this device only
Given the clinician manually overrides the mapped default to a supported option, When the session starts, Then the override is respected and no further fallback is applied
User Messaging for Capability Limitations
Given any selected cue type is downgraded due to capability limits, When the downgrade occurs, Then the user receives a single, concise message within 1 second that states the reason and the chosen fallback
Given the session continues for more than 10 minutes without further changes, When no new limitations arise, Then no additional limitation messages are shown
Given device locale is Spanish, When the message is shown, Then it is localized and readable by screen readers with the correct accessibility role
Cross-Platform Behavior Consistency
Given a test device pair (iOS and Android) with equivalent capabilities (TTS unavailable, haptics low, brightness low), When identical profiles are loaded, Then the same set of options is enabled/disabled and the same defaults/fallbacks are applied
Given platform-specific optimizations are active, When the session runs for 5 minutes, Then user-visible differences in cue timing do not exceed ±100 ms for audio/haptic and ±16 ms for visual overlays
Given identical limitation events occur, When messages are shown, Then wording and severity are equivalent, differing only by platform nomenclature (e.g., "Settings" vs "Preferences")
Secure Storage & Cross-Device Sync
"As a clinic admin, I want cue settings to sync securely across devices so that clinicians and patients have consistent experiences without reconfiguration."
Description

Store cue presets and defaults encrypted at rest and in transit, segregating PHI from reusable preset metadata. Sync clinician and patient cue profiles across devices and sessions with versioning and rollback. Provide offline-first behavior with conflict resolution when reconnecting. Ensure compliance with HIPAA/GDPR data handling, least-privilege access, and auditable changes to cue configurations.

Acceptance Criteria
Encrypted Storage and Transmission with PHI Segregation
Given any cue preset, default, or patient profile is persisted locally, When it is written to storage, Then it is encrypted at rest using AES‑256‑GCM (or platform‑equivalent) with keys stored in OS secure enclave/KMS and rotated at least every 180 days.
Given any data is transmitted between app and backend, When a network request is made, Then TLS 1.2+ with modern ciphers is enforced and certificates are fully validated (no insecure fallbacks).
Given schemas for PHI and reusable preset metadata, When data is saved, Then PHI is stored in a separate collection/namespace with distinct access scopes and preset metadata contains no direct identifiers (name, email, DOB, MRN).
Given logging and analytics are enabled, When events are recorded, Then no PHI or encryption keys are logged; attempts to log PHI are blocked by automated checks.
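As an illustration of the PHI-segregation criterion above, a combined record can be partitioned so that reusable preset metadata never carries direct identifiers. This is a minimal sketch; the field names and the `segregate` helper are hypothetical, not MoveMate's actual schema.

```python
# Hypothetical PHI segregation: split a saved cue profile into a PHI record
# (stored in a separate namespace with its own access scope) and reusable
# preset metadata that carries no direct identifiers.
PHI_FIELDS = {"name", "email", "dob", "mrn", "patient_id"}

def segregate(profile: dict) -> tuple[dict, dict]:
    """Return (phi_record, preset_metadata) from a combined profile dict."""
    phi = {k: v for k, v in profile.items() if k in PHI_FIELDS}
    meta = {k: v for k, v in profile.items() if k not in PHI_FIELDS}
    return phi, meta

profile = {
    "name": "Jane Doe", "mrn": "12345",
    "cue_type": "haptic", "intensity": 0.6, "tempo": "3-1-2",
}
phi, meta = segregate(profile)
assert not PHI_FIELDS & meta.keys()  # preset metadata holds no identifiers
```

A schema-level check like the final assertion is also the kind of automated guard the logging criterion calls for.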
Cross-Device Sync and Latency SLA
Given a clinician updates a cue protocol default on Device A, When both Device A and Device B are online, Then the change propagates and is visible on Device B within 30 seconds at the 95th percentile.
Given a patient signs in on a new device, When network connectivity is available, Then the latest cue profile and defaults are hydrated before the first exercise session and match the server version hash.
Given transient network loss during sync, When connectivity is restored, Then pending changes under 1 MB complete syncing within 60 seconds without duplication.
Given an expired auth token, When background sync runs, Then no data is transmitted; sync retries after token refresh and sensitive data remains protected.
Versioning and Rollback of Cue Configurations
Given any create or update to cue presets, defaults, or patient overrides, When the change is saved, Then a new immutable version record is created with userId, UTC timestamp (ISO8601), change summary, and content hash.
Given an authorized clinician selects a prior version, When rollback is confirmed, Then a new head version is created matching the selected version and full history remains intact.
Given a configuration was deleted, When restore is requested within 90 days, Then it can be recovered from the version store or backups without loss.
Given two versions are selected, When a diff is requested, Then a human‑readable field‑level diff is produced accurately.
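The versioning criteria above (immutable records with a content hash, rollback as a new head version, field-level diff) can be sketched roughly as follows; `make_version`, `rollback`, and `diff` are illustrative names under an assumed dict-shaped configuration, not MoveMate's actual API.

```python
import hashlib, json
from datetime import datetime, timezone

def make_version(history: list, content: dict, user_id: str, summary: str) -> dict:
    """Append an immutable version record with a deterministic content hash."""
    record = {
        "version": len(history) + 1,
        "userId": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "summary": summary,
        "hash": hashlib.sha256(
            json.dumps(content, sort_keys=True).encode()).hexdigest(),
        "content": content,
    }
    history.append(record)
    return record

def rollback(history: list, version: int, user_id: str) -> dict:
    """Rollback creates a new head matching the chosen version; history stays intact."""
    target = history[version - 1]["content"]
    return make_version(history, dict(target), user_id, f"rollback to v{version}")

def diff(history: list, a: int, b: int) -> dict:
    """Field-level diff between two versions: {field: (old_value, new_value)}."""
    va, vb = history[a - 1]["content"], history[b - 1]["content"]
    return {k: (va.get(k), vb.get(k))
            for k in va.keys() | vb.keys() if va.get(k) != vb.get(k)}
```

Because rollback only appends, prior versions remain read-only, matching the "full history remains intact" requirement.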
Offline-First Editing with Deterministic Conflict Resolution
Given a clinician edits cue presets while offline, When the app is closed and reopened, Then unsynced edits persist locally in encrypted storage and remain queued for sync.
Given the same field is edited offline on two devices, When both reconnect, Then a conflict is detected and the clinician is prompted with side‑by‑side comparison; no silent overwrite occurs.
Given non‑overlapping field edits occur on multiple devices, When syncing, Then a field‑level merge is performed automatically and a merged version is created with full audit trail.
Given a conflict prompt is shown, When the clinician selects a resolution, Then a single new version reflecting the choice is created and prior attempts remain in history.
Given an unresolved conflict exists, When 24 hours elapse, Then a notification is sent to the assigned clinician and the last consistent version continues to be used by patients.
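The deterministic resolution rule above (auto-merge non-overlapping field edits; surface same-field conflicts instead of silently overwriting) is essentially a three-way merge against the last synced version. A minimal sketch, assuming dict-shaped presets:

```python
def three_way_merge(base: dict, a: dict, b: dict):
    """Merge two offline edits against a common base version.
    Returns (merged, conflicts): non-overlapping field edits merge
    automatically; a field changed to different values on both devices
    is reported as a conflict for clinician review (no silent overwrite)."""
    merged, conflicts = dict(base), []
    for key in base.keys() | a.keys() | b.keys():
        va, vb, vbase = a.get(key), b.get(key), base.get(key)
        if va == vb:                      # identical edit, or both unchanged
            merged[key] = va
        elif va == vbase:                 # only device B changed it
            merged[key] = vb
        elif vb == vbase:                 # only device A changed it
            merged[key] = va
        else:                             # both changed it differently
            conflicts.append((key, va, vb))
    return merged, conflicts
```

On a conflict, the unchanged base value is kept until the clinician picks a side, which matches "the last consistent version continues to be used."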
Least-Privilege Access Controls and Data Boundaries
Given role definitions (Patient, Clinician, Admin, Support), When accessing cue data, Then RBAC enforces least‑privilege: Patients only their own profiles/preferences; Clinicians only assigned patients; Support no PHI.
Given any API request, When authorization is evaluated, Then scope‑limited tokens are required; out‑of‑scope access returns 403 and is audited.
Given sensitive screens display PHI, When the app is foregrounded, Then platform screen‑protection flags are enabled (e.g., iOS screen shield, Android FLAG_SECURE) according to clinic policy.
Given a data export is initiated, When the user lacks documented consent, Then only de‑identified preset metadata may be exported; PHI exports require recorded consent and are scoped to the request.
Tamper-Evident Audit Logging of Cue Changes
Given any create/update/delete/assign/rollback of cue configurations, When the action is committed, Then an audit entry is recorded with actorId, patientId (if applicable), deviceId, IP, UTC timestamp, action type, and before/after field values.
Given audit storage, When entries are written, Then they are immutable and tamper‑evident via hash chaining or write‑once storage; modification attempts fail and are logged.
Given an admin requests audit export, When the request is authorized, Then logs for the specified time window are exportable as CSV/JSON within 72 hours.
Given regulatory retention requirements, When evaluating storage policy, Then audit logs are retained for at least 6 years and are discoverable for compliance reviews.
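Hash chaining, one of the two tamper-evidence options named above, can be sketched as follows: each entry embeds its predecessor's hash, so editing any earlier record invalidates every later link. Function names are illustrative.

```python
import hashlib, json

def append_audit(log: list, entry: dict) -> dict:
    """Append a tamper-evident audit entry; each record embeds the hash
    of its predecessor (a genesis value for the first entry)."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(entry, sort_keys=True)
    record = {"entry": entry, "prev": prev_hash,
              "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest()}
    log.append(record)
    return record

def verify_chain(log: list) -> bool:
    """Recompute every link; False means some record was altered in place."""
    prev = "0" * 64
    for rec in log:
        body = json.dumps(rec["entry"], sort_keys=True)
        if rec["prev"] != prev or \
           rec["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True
```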
GDPR/HIPAA Data Subject Rights and Data Handling
Given a verified data access request (DSAR), When initiated by a patient, Then all their PHI and associated cue configurations are exported in structured JSON within 30 days.
Given a verified deletion request, When permitted by clinic policy and law, Then PHI is erased from primary storage within 30 days and from backups within 90 days; preset metadata is de‑identified; minimal lawful audit record is retained.
Given an EU‑resident organization, When data is stored/processed, Then PHI remains within approved EU regions; any cross‑border transfer uses approved safeguards (e.g., SCCs) with records maintained.
Given consent requirements, When a clinician assigns cues to a patient in GDPR jurisdictions, Then lawful basis/consent is captured with timestamp and processing is blocked if consent is absent.

Tempo Coach

An adaptive metronome that guides eccentric and concentric phases with timing pips and a progress bar. Auto-calibrates to the prescribed tempo and the patient’s current capability, nudging pace up or down to prevent momentum cheating and improve muscle activation. Post-set, it summarizes tempo adherence so clinicians can fine-tune prescriptions.

Requirements

Auto Tempo Calibration
"As a patient, I want the metronome to calibrate to my prescribed tempo and current capability so that I can follow an achievable pace without guesswork."
Description

Automatically calibrates Tempo Coach to the clinician-prescribed tempo and the patient’s current capability by sampling early repetitions, estimating per-phase (eccentric/concentric) durations, and aligning cues within safe, configurable bounds. Uses MoveMate’s computer-vision phase detection to measure actual cadence in real time, then adapts mid-set if sustained deviation is detected while never exceeding clinician-set min/max phase durations. Supports per-exercise and per-side calibration profiles, persists learned parameters across sessions, and gracefully re-calibrates after rest or when form changes are detected. Handles missed detections and partial reps, indicates calibration lock-on in the UI, and logs calibration parameters for clinician review.

Acceptance Criteria
Initial Calibration from Early Reps
Given a prescribed tempo with eccentric and concentric targets and per‑phase min/max bounds And phase detection confidence is ≥ 0.80 When the patient completes 3 valid reps Then the system computes average eccentric and concentric durations from those reps And sets cue targets within ±10% of the measured averages, clamped to the configured bounds And displays a "Locked" status within 300 ms after the 3rd valid rep And any rep with missing phase or confidence < 0.80 is excluded from the 3‑rep sample
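The sampling rule above (average the first three valid reps, drop low-confidence reps, clamp targets to the clinician bounds and note when clamping occurred) might look roughly like this; the rep tuples and bounds dict are assumed shapes, not MoveMate's internal types.

```python
def calibrate(reps, bounds, min_conf=0.80, sample=3):
    """Average per-phase durations from the first `sample` valid reps and
    clamp the resulting targets to clinician bounds.
    Each rep is (eccentric_s, concentric_s, confidence); reps below
    min_conf are excluded from the sample.
    Returns (targets, clamped_flag), or None while still sampling."""
    valid = [(e, c) for e, c, conf in reps if conf >= min_conf][:sample]
    if len(valid) < sample:
        return None                       # not enough valid reps yet
    avg_ecc = sum(e for e, _ in valid) / sample
    avg_con = sum(c for _, c in valid) / sample
    targets, clamped = {}, False
    for phase, avg in (("eccentric", avg_ecc), ("concentric", avg_con)):
        lo, hi = bounds[phase]
        t = min(max(avg, lo), hi)
        clamped |= (t != avg)             # record a "Clamped" event
        targets[phase] = t
    return targets, clamped
```

The ±10% cue-alignment window and the 300 ms "Locked" UI update are timing concerns left out of this sketch.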
Safety Bounds Enforcement
Given clinician‑configured min and max duration per phase When tempo targets are set during initial calibration or adaptation Then no target duration is set below the min or above the max for that phase And if a calculated target would violate bounds, it is clamped to the nearest bound and a "Clamped" event is recorded for that set
Mid‑Set Adaptation for Sustained Deviation
Given calibration is Locked When measured duration for a phase deviates by > 12% from its current target for ≥ 3 consecutive valid reps Then update that phase’s target to the rolling average of the last 3 valid reps, clamped to bounds And do not adapt for deviations persisting < 3 consecutive valid reps And log adaptation time, old target, new target, and deviation percentage
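The adaptation rule above (adapt only after three consecutive valid reps deviate by more than 12%, then move the target to the clamped rolling average) can be sketched as a small pure function; names and shapes are assumptions.

```python
def maybe_adapt(target, recent, bounds, threshold=0.12, streak=3):
    """Adapt a phase target when measured durations deviate from it by more
    than `threshold` for `streak` consecutive valid reps.
    `recent` holds measured durations of valid reps in order.
    Returns the new target (rolling average of the last `streak` reps,
    clamped to bounds) or the unchanged target."""
    if len(recent) < streak:
        return target
    window = recent[-streak:]
    if all(abs(d - target) / target > threshold for d in window):
        lo, hi = bounds
        return min(max(sum(window) / streak, lo), hi)
    return target
```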
Missed Detections and Partial Reps Handling
Given a rep has missing phase boundaries or confidence < 0.80 When the rep completes Then mark the rep Invalid and exclude it from calibration, adaptation, and adherence summaries And continue cueing with the last Locked targets without change And if 2 consecutive Invalid reps occur, switch status to "Recalibrating" and re‑enter sampling until 2 consecutive valid reps are collected
Profile Persistence and Reuse Across Sessions and Sides
Given a set completes with Locked calibration When the set ends Then persist per‑phase targets, bounds, sample size, and confidence keyed by exercise ID and side (left/right) And when starting the same exercise and side in a later session, preload the last learned targets within 200 ms and begin in Sampling mode to verify And overwrite stored targets only after a set completes with ≥ 3 valid reps and no more than 1 "Clamped" event
Re‑Calibration After Rest or Form Change
Given rest since last rep exceeds 90 seconds or the CV engine flags a form change (e.g., ROM shift ≥ 20% or pose pattern score crosses threshold) When the next rep begins Then switch status to "Recalibrating", mute tempo nudges for that rep, and collect 2 consecutive valid reps for re‑lock And restore cueing with updated targets within 300 ms after the 2nd valid rep And log the trigger (Rest or FormChange), thresholds, and re‑lock timestamps
Calibration Status UI and Clinician Logging
Given any calibration state change (Sampling, Locked, Recalibrating, Clamped) When the state changes Then update the UI indicator and progress bar label within 200 ms with a non‑color shape/icon cue and accessible text (WCAG AA) And expose a screen‑reader label reflecting the current state And record per set: phase targets, min/max bounds, sample sizes, average measured durations, deviation thresholds, adaptations, clamping events, and timestamps; retrievable via clinician dashboard and export API
Dual-Phase Guided Cues
"As a patient, I want clear audio, haptic, and visual cues for each phase so that I can maintain the correct pace and form throughout every rep."
Description

Provides clear, phase-specific guidance using distinct audio pips, haptic taps, and a segmented progress bar that visualize and sonify eccentric and concentric timing. Delivers a configurable count-in, color- and pitch-differentiated phase cues, and on-screen progress that tracks phase completion percentages. Integrates with MoveMate’s rep and phase detection to keep cues synchronized, supports variable phase lengths (e.g., 3-1-2), and remains legible in bright and low-light conditions. Includes accessibility options for volume limits, high-contrast visuals, adjustable haptic intensity, and large text. Ensures cues remain consistent when frame rates drop by decoupling rendering from the timing engine.

Acceptance Criteria
Phase-Specific Audio and Haptic Cue Differentiation
Given a prescribed tempo with distinct eccentric and concentric phases And audio and haptics are enabled When a rep starts and phases transition Then the app plays an audio pip at each phase boundary with a distinct pitch per phase And the pip onset occurs within 50 ms of the detected phase boundary And a haptic tap is emitted at each phase boundary within 50 ms of the phase boundary And no two pips or taps overlap or double-fire within 150 ms And muting either audio or haptics disables only that modality
Configurable Count-In Before First Rep
Given a clinician-prescribed count-in length between 0 and 8 beats And a target tempo is set When the set is armed Then the app renders a visual countdown and emits evenly spaced count-in pips at the target beat interval (±2%) And the user can skip the count-in to start immediately And the count-in length and sound volume reflect the saved prescription And no rep or phase detection begins until the count-in completes or is skipped
Segmented Progress Bar with Phase Completion Percentages
Given a tempo pattern with N phases and assigned durations When a rep is in progress Then the progress bar displays N visibly segmented phases with color differentiation per phase And the active segment fills smoothly from 0% to 100% during its phase And an on-segment percentage label updates at least 10 times per second And phase completion reaches 100% within 50 ms of the timing engine’s phase end And colors meet a minimum 3:1 contrast against the background And the percentage text meets a 4.5:1 contrast ratio in both light and dark themes
Variable Tempo Patterns and Synchronization to Detection
Given a clinician sets a tempo pattern (e.g., 3-1-2 or 2-0-2) and rep/phase detection is active When the patient transitions phases early or late relative to the metronome Then the timing engine snaps the current phase boundary to the detected transition within 100 ms And subsequent phase durations are preserved relative to the prescribed pattern And audio, haptic, and progress cues realign to the snapped boundary with no audible stutter And isometric holds (zero-duration phases) produce a single boundary cue only And logged tempo adherence reflects the snapped timing, not the pre-snapped schedule
Legibility in Bright and Low-Light Conditions
Given ambient light is high (≥10,000 lux) or low (≤5 lux) When Tempo Coach is displayed during a set Then the UI automatically uses or offers a theme that maintains at least 4.5:1 contrast for text and 3:1 for graphical elements And progress segments, boundary markers, and percentage text remain readable without glare or washout And there is no critical information conveyed by color alone; a secondary indicator (label or pattern) is present
Accessibility Controls for Audio, Visuals, and Haptics
Given the user opens Tempo Coach settings When adjusting accessibility options Then audio volume can be limited to a user-defined maximum and cannot exceed the OS safe volume limit And high-contrast mode can be toggled on/off and respects the OS accessibility setting And haptic intensity is adjustable across at least three levels and respects OS haptics toggle And large text mode increases in-app cue labels by at least 30% and respects OS dynamic type And all accessibility settings persist per user across app restarts
Timing Engine Decoupled from Rendering Under Frame Drops
Given the device experiences rendering frame rate drops to 20 FPS or lower When a set is running Then audio pip intervals deviate from target by no more than ±1% And cumulative timing drift relative to the timing engine is less than 50 ms per minute And haptic taps align to phase boundaries within 50 ms And the progress bar may drop frames but resumes showing the correct phase and completion upon recovery with no skipped phase transitions
Adaptive Pace Nudging
"As a patient, I want gentle, real-time nudges to correct my pace so that I avoid momentum cheating and stay aligned with my prescribed tempo."
Description

Adjusts cue timing in real time with micro-shifts to gently speed up or slow down the user when they drift from target tempo, using a moving window of recent reps and damping to avoid oscillation. Displays concise prompts like “slightly slower” or “hold the bottom” and reflects adjustments in the progress bar. Respects clinician-prescribed tempo bounds, fatigue indicators, and safety thresholds; never applies abrupt step changes. Allows patients to temporarily pause nudging or lock to strict tempo, and reverts to baseline when adherence stabilizes. Logs nudge frequency and magnitude for post-set analysis and clinician insights.

Acceptance Criteria
Real-Time Micro-Shift Tempo Adjustment and Damping
Given a prescribed eccentric and concentric duration per rep and a moving window of the last 5 reps When the windowed mean absolute phase drift exceeds 7% for 2 consecutive reps Then adjust the next phase cue timing in the corrective direction by a micro-shift no greater than 3% of the phase duration And ensure the change in cue interval between consecutive reps differs by no more than 2% (damping) And do not apply more than one sign reversal of micro-shifts within any rolling 4-rep window while measured drift remains same-signed
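The micro-shift and damping limits above reduce to two caps: the corrective shift is at most 3% of the prescribed phase duration, and the rep-to-rep change in the cue interval is at most 2%. A sketch follows; the sign convention (positive drift = patient slower than target, corrected by shortening the cue interval) is an assumption, since the criterion only requires a bounded corrective direction.

```python
def next_cue_interval(current, prescribed, drifts, gate=0.07,
                      max_shift=0.03, max_step=0.02):
    """Bounded micro-shift for the next phase cue interval.
    `drifts` holds signed windowed drift per rep (assumed: + = too slow).
    A shift applies only after 2 consecutive reps whose |drift| exceeds
    `gate`; it is capped at max_shift of the prescribed duration and the
    interval may change by at most max_step between consecutive reps."""
    if len(drifts) < 2 or not all(abs(d) > gate for d in drifts[-2:]):
        return current
    direction = -1.0 if drifts[-1] > 0 else 1.0   # assumed convention
    shift = direction * max_shift * prescribed    # <= 3% micro-shift
    step_cap = max_step * current                 # <= 2% damping
    delta = max(-step_cap, min(step_cap, shift))
    return current + delta
```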
Bounds, Safety, and Fatigue Respect
Given clinician-prescribed tempo bounds [ecc_min, ecc_max] and [con_min, con_max] Then cue intervals shall never be set outside these bounds
Given a safety/fatigue flag is active Then suppress any micro-shifts that would increase cadence and cap all micro-shifts to 50% of normal magnitude And if a safety stop flag is active, pause all nudging and display "Safety hold" within 200 ms
Concise Prompts and Progress Bar Reflection
When a micro-shift ≥ 1% is applied Then display a prompt from the approved list {"slightly slower","slightly faster","hold the bottom","smooth up"} within 200 ms of the decision And update the progress bar so its phase timing matches the adjusted cue intervals within ±80 ms And align audio pips with progress bar endpoints within ±60 ms
Pause Nudging and Strict Tempo Modes
Given the patient taps "Pause Nudging" Then suspend all micro-shifts for 20 seconds or until resumed, and show a visible "Nudging paused" state
Given the patient enables "Strict Tempo" during a set Then lock cues to the prescribed tempo with zero micro-shifts until the set ends or the mode is toggled off And log all mode toggles with timestamp and resulting mode state
Stabilization Reversion to Baseline
Given adherence error for both phases remains within ±3% for 4 consecutive reps Then decay any accumulated micro-shifts back to baseline tempo so residual offset is < 1% within the next 3 reps And ensure no single-rep change exceeds 2% during the reversion
Nudge Analytics Logging and Post-Set Summary
During the set, for each rep, log: timestamp, phase-level drift (%), applied micro-shift (% and sign), active mode (normal/paused/strict), bounds clamps, and safety/fatigue flags
After set completion, generate a summary containing: total reps, nudged reps count and %, mean and max micro-shift magnitude, time-in-bounds %, average drift, number of safety suppressions, and a histogram of micro-shift magnitudes in 1% buckets
Make the summary available in the clinician dashboard and patient session within 10 seconds of set end
Momentum Cheating Detection
"As a clinician, I want the system to detect and flag momentum-based reps so that I can correct technique and ensure proper muscle activation."
Description

Identifies reps that rely on momentum or insufficient control by analyzing velocity and acceleration patterns from computer-vision pose trajectories, detecting rapid ballistic concentric phases, uncontrolled eccentrics, and skipped end-range pauses. Flags suspect reps in real time, issues subtle guidance to slow or stabilize control, and marks them for post-set review. Supports exercise-specific thresholds, tolerances for advanced prescriptions (e.g., power phases when intended), and excludes false positives due to occlusion or camera jitter. Summarizes cheating patterns to inform clinician adjustments without storing identifiable video frames, honoring privacy settings.

Acceptance Criteria
Real-Time Ballistic Concentric Detection and Nudge
Given an exercise prescription with a target concentric duration D_c and Tempo Coach active And pose tracking confidence ≥ 0.6 for at least 80% of concentric-phase frames When a rep’s concentric duration is < 0.70 × D_c OR peak normalized acceleration exceeds the moving-baseline mean by ≥ 2.5 standard deviations for ≥ 100 ms Then flag the rep as "Ballistic Concentric" And display an on-screen "Slow concentric" prompt with a distinct timing pip within 250 ms of detection And reduce the next-rep concentric pip rate by 10% (bounded to ±15% of the prescribed tempo) And mark the rep with index, timestamp, and reason for post-set review
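The two triggers for a ballistic flag (concentric duration under 70% of target, or peak acceleration at least 2.5 standard deviations above the moving baseline) reduce to a simple predicate. This sketch omits the ≥100 ms persistence requirement and the pose-confidence gating that the criterion also imposes; names are illustrative.

```python
import statistics

def is_ballistic(duration_s, target_s, baseline_peaks, current_peak,
                 min_ratio=0.70, sigma=2.5):
    """Flag a concentric phase as ballistic when it is much faster than
    the target duration, or its peak normalized acceleration is an
    outlier versus the moving baseline of prior reps."""
    if duration_s < min_ratio * target_s:
        return True
    if len(baseline_peaks) >= 2:
        mean = statistics.fmean(baseline_peaks)
        sd = statistics.pstdev(baseline_peaks)
        if sd > 0 and current_peak > mean + sigma * sd:
            return True
    return False
```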
Uncontrolled Eccentric Detection and Stabilization Prompt
Given a target eccentric duration D_e and Tempo Coach active And pose tracking confidence ≥ 0.6 for at least 80% of eccentric-phase frames When a rep’s eccentric duration is < 0.80 × D_e OR absolute velocity spikes above 2.0 × the median eccentric velocity of prior valid reps for ≥ 80 ms Then flag the rep as "Uncontrolled Eccentric" And display "Control the descent" within 250 ms of detection And increase the next-rep eccentric pip interval by 10% (bounded to ±15% of the prescribed tempo) And mark the rep with index, timestamp, and reason for post-set review
End-Range Pause Compliance Detection
Given a prescription specifying an end-range pause duration D_hold And pose tracking confidence ≥ 0.6 for at least 80% of hold-phase frames When the measured end-range hold is < 0.90 × D_hold Then flag the rep as "Pause Skipped" And display "Hold at end range" within 250 ms of detection And mark the rep with index, timestamp, and reason for post-set review
Exercise-Specific Threshold Profiles and Power-Phase Exceptions
Given an exercise profile with phase-specific thresholds and optional power-phase allowances When the profile permits ballistic concentric for the current set Then do not flag "Ballistic Concentric" for concentric phases within the allowed power window And continue to evaluate eccentric control and pauses as usual
When the profile does not permit ballistic concentric Then apply detection thresholds (durations and acceleration/velocity limits) defined by the exercise profile And persist these thresholds per exercise template and apply them automatically on subsequent sets
Occlusion and Camera Jitter False-Positive Suppression
Given pose tracking confidence < 0.6 for > 30% of frames in a phase OR keypoint dropout > 20% of frames OR camera jitter detected as frame-to-frame background displacement > 5 px RMS sustained for ≥ 300 ms When any of the above occurs during a rep Then suppress momentum-cheating flags for that phase And label the segment as "Uncertain" without issuing a real-time prompt And record the suppression reason for post-set review And exclude suppressed segments from adherence scoring
Post-Set Cheating Summary and Flagged-Rep Review (Privacy-Preserving)
Given a completed set When the set ends Then generate a summary within 5 seconds containing counts and percentages of flagged reps by type (Ballistic Concentric, Uncontrolled Eccentric, Pause Skipped), rep indices, and timestamps And present the summary in the patient view and sync it to the clinician dashboard And include only derived metrics and skeletal keypoints; do not store identifiable video frames And honor privacy settings so that if frame storage is disabled, only aggregate metrics are retained
Tempo-Adaptive Thresholding and Auto-Calibration
Given a prescribed tempo with target concentric and eccentric durations (D_c, D_e) And at least 3 valid reps have been observed When the median measured durations differ from targets by > 15% Then adjust detection thresholds and Tempo Coach pip intervals toward the measured medians by up to 10% for the next 3 reps And do not exceed ±20% deviation from the prescription unless explicitly allowed by the exercise profile And log calibration adjustments with before/after values for clinician review
Post-Set Tempo Adherence Summary
"As a clinician, I want a clear summary of tempo adherence after each set so that I can fine-tune prescriptions and monitor progress efficiently."
Description

Generates a concise per-set report showing target vs. actual phase durations, adherence percentage within tolerance bands, distribution of eccentric/concentric times, count of nudges applied, and momentum-flagged reps. Provides trend indicators across sets, simple patient-friendly grading, and a clinician detail view with raw metrics and calibration changes. Integrates with MoveMate dashboards, exports metrics to patient timelines, and triggers optional clinician alerts when adherence drops below thresholds. Stores summaries in structured form for analytics and future prescription tuning.

Acceptance Criteria
Per-Set Tempo Summary Display
Given a patient completes a tracked set with a prescribed eccentric and concentric tempo and a tolerance band configured When the set ends and the summary is generated Then the summary displays target_eccentric_s, target_concentric_s, avg_eccentric_s, avg_concentric_s, and adherence_pct for the set And adherence_pct is computed as (count of reps where both phases are within tolerance) / (total valid reps) rounded to the nearest whole percent And all durations are shown in seconds to two decimal places And the summary renders within 2 seconds of set completion
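The adherence computation defined above, combined with the adjacent rule that momentum-flagged reps count in the denominator but not the numerator, can be sketched as a small function; the ±10% default band and the data shapes are assumptions for illustration.

```python
def adherence_pct(reps, target_ecc, target_con, tol=0.10, momentum_flags=()):
    """Percent of reps whose eccentric AND concentric durations fall
    within the tolerance band, rounded to the nearest whole percent.
    Momentum-flagged rep indices are excluded from the numerator but
    remain in the denominator."""
    def within(actual, target):
        return abs(actual - target) <= tol * target
    total = len(reps)
    if total == 0:
        return 0
    good = sum(
        1 for i, (ecc, con) in enumerate(reps)
        if i not in momentum_flags
        and within(ecc, target_ecc) and within(con, target_con))
    return round(100 * good / total)
```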
Nudges and Momentum Flags Reporting
Given Tempo Coach issues pace nudges and momentum cheating is detected during a set When the post-set summary is generated Then the summary shows nudge_count_total and a breakdown of nudge_speed_up_count and nudge_slow_down_count And the summary shows momentum_flag_count and the indices of momentum-flagged reps And momentum-flagged reps are excluded from the adherence numerator but included in the denominator
Phase Time Distribution and Tolerance Bands
Given a completed set with per-rep eccentric and concentric durations When the post-set summary is displayed Then the summary shows, for each phase, the percentage of reps below_tolerance, within_tolerance, and above_tolerance And the tolerance band used is the exercise-configured band; if none is configured, a system default of ±10% is applied and displayed And the summary also displays per-phase median_s and IQR_s
Within-Session Trend Indicators
Given a session with at least two sets of the same exercise When viewing the post-set summary for the current set Then the summary shows an adherence trend indicator vs the previous set (up, down, or no change) with the numeric delta in percentage points And the summary shows avg_eccentric_s and avg_concentric_s deltas vs the previous set And if three or more sets exist, a trend arrow reflects the slope over the last three sets using linear regression
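The trend indicator above (two-set delta, or the sign of a least-squares slope once three or more sets exist) can be sketched as follows; the function name is illustrative.

```python
def trend_arrow(adherence_history):
    """Direction of the adherence trend across sets. With 3+ sets, use
    the sign of the least-squares slope over the last three; with
    exactly 2, compare them directly."""
    pts = adherence_history[-3:] if len(adherence_history) >= 3 \
        else adherence_history
    if len(pts) < 2:
        return "no change"
    n = len(pts)
    x_mean, y_mean = (n - 1) / 2, sum(pts) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(pts)) \
        / sum((x - x_mean) ** 2 for x in range(n))
    if slope > 0:
        return "up"
    if slope < 0:
        return "down"
    return "no change"
```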
Patient Grade Generation
Given a set is completed and a patient-facing summary is generated When computing the patient grade Then the grade is assigned as A (>=90%), B (80–89%), C (70–79%), D (60–69%), F (<60%) based on adherence_pct And the grade is displayed with color coding: A/B green, C amber, D/F red And the grade is shown alongside adherence_pct
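The grade bands and colors above map directly to a lookup:

```python
def patient_grade(adherence_pct):
    """Map adherence percent to the A-F letter grade and display color:
    A (>=90), B (80-89), C (70-79), D (60-69), F (<60)."""
    for cutoff, grade, color in ((90, "A", "green"), (80, "B", "green"),
                                 (70, "C", "amber"), (60, "D", "red")):
        if adherence_pct >= cutoff:
            return grade, color
    return "F", "red"
```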
Clinician Detail View and Calibration Log
Given a clinician opens the detail view for a completed set When the detail view loads Then it lists per-rep raw metrics: rep_index, eccentric_s, concentric_s And it lists each auto-calibration event with rep_index (or timestamp), prior_target_{eccentric_s,concentric_s}, new_target_{eccentric_s,concentric_s}, percent_change, and trigger_reason And the detail view reflects the same adherence calculation rules as the summary
Dashboard Integration, Timeline Export, Alerts, and Structured Storage
Given a set summary is finalized When data syncs to the backend (immediately online or within 5 minutes of reconnection if offline) Then the clinician dashboard shows the set under the correct patient/session with adherence_pct, avg_eccentric_s, avg_concentric_s, nudge_count_total, and momentum_flag_count And the patient timeline receives a single event with timestamp = set_end_time and stored metrics And if adherence_pct is below the configured threshold, an alert is created and clinician notification is sent according to preferences And the analytics store persists a structured record including: set_id, patient_id, session_id, exercise_id, start_time, end_time, rep_count, valid_rep_count, target_eccentric_s, target_concentric_s, avg_eccentric_s, avg_concentric_s, adherence_pct, within_tolerance_count, below_tolerance_count, above_tolerance_count, momentum_flag_count, nudge_count_total, nudge_speed_up_count, nudge_slow_down_count, and calibration_events[]
Clinician Tempo Prescription Sync
"As a clinician, I want to set and update tempo prescriptions that automatically sync to patients so that in-workout guidance matches my treatment plan."
Description

Enables clinicians to author, version, and assign tempo prescriptions (e.g., 3-1-2, fixed seconds per phase, or ranges) that sync to the patient’s device and exercises. Supports per-exercise defaults, phase bounds, tolerance bands, set/rep schemes, and whether adaptive nudging is allowed. Handles offline scenarios with queued updates, shows patients the latest effective prescription, and records version history with effective dates. Validates conflicts (e.g., unsafe bounds vs. patient calibration), enforces clinician locks when required, and exposes prescription parameters to summaries and analytics.

Acceptance Criteria
Author and Version Tempo Prescription
Given a clinician is in the plan builder for patient P and exercise E When they create a tempo prescription specifying phases via pattern (e.g., 3-1-2), explicit per-phase seconds, or per-phase min–max ranges Then the system validates format, requires all phases, and rejects invalid inputs with field-level errors
Given the clinician sets per-exercise defaults, tolerance bands (+/− seconds or % per phase), set/rep scheme, and the adaptive nudging allowed flag When they save the prescription Then these parameters are persisted per exercise and are retrievable via API
Given an existing prescription for E When the clinician saves changes Then a new version is created with a monotonically increasing version number and an effective_date that defaults to now unless a future datetime is chosen And prior versions become read-only and remain in version history
When the save succeeds Then the API responds 201 with version_id and effective_date And an audit log records user_id, patient_id, exercise_id, timestamp, and a structured diff of changed fields
Assign and Sync Latest Effective Prescription to Patient Devices
Given patient device D has network connectivity When a new or updated tempo prescription for exercise E is effective Then D fetches the latest effective version within 10 seconds of app foreground or within 60 seconds during background sync When D displays exercise E Then it shows the effective tempo parameters (phases, tolerance, adaptive flag) and a version label (e.g., v3, effective 2025-10-01T09:00Z) Given multiple scheduled future versions exist When determining which to apply Then D selects the version with effective_date <= current UTC on the device and ignores future versions Given a tempo update is published while a session is in progress When the patient completes the current set Then D applies the new version starting with the next set and shows a non-blocking notice of the update Given a tempo assignment is removed by the clinician When sync occurs Then D removes the prescription from E within 60 seconds and reverts to the exercise default
Offline Queued Updates and Ordering
Given the clinician app is offline When the clinician saves changes to a tempo prescription Then the change is queued locally with status=pending and a client-generated request_id When connectivity is restored Then queued updates are sent in creation order and the server resolves final ordering by server_received_timestamp And the client updates local state with authoritative version_id values Given the patient device is offline when updates are made When the patient opens exercise E Then the device uses the last cached effective version and shows an indicator "Using cached tempo" When the device reconnects Then it fetches all versions effective since its last sync and applies the correct one based on effective_date Given a queued client update is superseded by a newer server version When reconciliation runs Then the client discards the older queued update and records a "superseded" event
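The reconciliation step above (discard a queued update that a newer server version has superseded) might look like this; the dict layout and the `base_version` field are assumptions for illustration:

```python
def reconcile_queue(queued, server_version):
    """Split a pending-update queue into updates to send and superseded ones.

    Each queued update is assumed to record the server version it was based
    on; anything based on an older version than the server now holds is
    dropped and reported so a "superseded" event can be logged.
    """
    to_send, superseded = [], []
    for upd in queued:  # queue is kept in creation order
        if upd["base_version"] < server_version:
            superseded.append(upd["request_id"])
        else:
            to_send.append(upd)
    return to_send, superseded
```

The client-generated request_id lets the server deduplicate retries, while final ordering still comes from server_received_timestamp.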
Validate Unsafe Bounds Against Patient Calibration
Given patient P has calibration data with safe per-phase bounds for exercise E When a clinician enters per-phase ranges or tolerance bands that violate these bounds Then Save is blocked with inline errors identifying offending phases, entered values, and allowed ranges Given the device supports timing granularity of 100 ms When a clinician enters tolerance granularity below 100 ms Then the UI prevents entry and displays the minimum allowable granularity Given P’s calibration indicates a minimum safe eccentric duration D_min for E When a fixed tempo is set below D_min Then the system suggests the nearest safe value and disallows Save until adjusted When validation fails via API Then the response is 400 with machine-readable error codes per field (e.g., phase_range_out_of_bounds, tolerance_too_fine)
Enforce Clinician Locks on Patient Device and Server
Given a tempo prescription is marked locked by the clinician When the patient opens exercise E Then controls to change tempo or adaptive nudging are disabled and labeled "Clinician locked" When the patient attempts to modify tempo via any in-app control Then the change is rejected, a non-intrusive message is shown, and a security event is logged When a clinician without required role attempts to modify a locked prescription Then the server rejects the update with 403 and logs the attempt Given the device is offline When evaluating lock state Then the device enforces the last known lock flag until a server update clears it
Expose Prescription Parameters to Summaries and Analytics
Given a set is completed under exercise E When generating the post-set summary Then it includes assigned tempo (pattern or per-phase seconds/ranges), tolerance bands, adaptive nudging allowed flag, version_id, and effective_date When computing adherence Then per-phase metrics are provided: percent within tolerance, average deviation (ms), time under/over (s), and adaptive_nudge_count When viewing the clinician dashboard for E Then version history is shown with effective date ranges and counts of sets completed under each version When exporting analytics or calling the analytics API Then per-set records include patient_id, exercise_id, version_id, phase metrics, adaptive_nudge_count, and a boolean within_prescribed_tolerance And records are available within 60 seconds of set completion and pass schema validation
Low-Latency Offline Cueing Engine
"As a patient, I want perfectly timed cues even without connectivity so that my workouts remain precise and uninterrupted."
Description

Implements an on-device timing engine that drives audio, haptic, and visual cues with sub-20 ms jitter and resilient synchronization independent of camera frame rate and network conditions. Supports offline operation, power-efficient scheduling, audio session management (ducking and interruptions), screen-locked mode, and recovery when the app regains focus. Maintains phase alignment during transient performance drops by predicting upcoming beats, and reconciles with computer-vision phase detection without abrupt jumps. Provides cross-platform abstractions and automated latency tests to ensure consistent tempo delivery across devices.
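A key to low jitter is scheduling every cue from the set's absolute start time rather than chaining relative timers, so timing error never accumulates. A minimal sketch under that assumption (function names are illustrative):

```python
import math

def beat_time(start_s: float, bpm: float, beat_index: int) -> float:
    """Absolute timestamp of beat N, derived from the start time; drift-free
    because each beat is computed independently, not from the previous one."""
    return start_s + beat_index * (60.0 / bpm)

def next_beat_index(start_s: float, bpm: float, now_s: float) -> int:
    """First beat at or after `now`; used to predict upcoming beats and
    recover cleanly after a transient stall."""
    interval = 60.0 / bpm
    return max(0, math.ceil((now_s - start_s) / interval))
```

After a stall, the engine skips to `next_beat_index` instead of replaying missed cues, which keeps cumulative drift bounded regardless of how long the stall lasted.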

Acceptance Criteria
Offline Low-Jitter Tempo Delivery
Given the device is in airplane mode with all network interfaces disabled And camera tracking is active at any available frame rate When the user starts Tempo Coach at 40, 60, 90, and 120 BPM Then inter-beat jitter is <= 20 ms at p95 and <= 35 ms at p99 over a continuous 2-minute set for each tempo And the number of emitted beats equals the expected count (0 missed or duplicate cues) And the first cue occurs within <= 120 ms of the start command
Modality Synchronization Independent of Camera/Network
Given camera frame rate fluctuates between 24 and 60 fps and may stall up to 200 ms And network connectivity toggles between online and offline When Tempo Coach runs at 60 BPM Then audio-haptic skew is <= 12 ms at p95 (<= 20 ms max) And audio-visual skew is <= 16 ms at p95 (<= 25 ms max) And cumulative tempo drift remains <= 10 ms after a 200 ms camera stall or any network change event
Screen-Locked Operation and Foreground Recovery
Given the device supports background audio and haptics When the screen locks during an active set Then audio and haptic cues continue with the same tempo and jitter thresholds as foreground (p95 <= 20 ms, p99 <= 35 ms) And upon unlock, the visual progress bar resumes aligned to the next beat with audio-visual skew <= 25 ms When the app returns to foreground after <= 5 minutes in background Then the engine re-synchronizes without an audible jump, applying a tempo adjustment of no more than 3% until phase error is <= 100 ms, achieved within 4 beats
Audio Session Management: Ducking and Interruptions
Given external media is playing When Tempo Coach starts Then external audio is ducked by 40-60% within 150 ms and restored within 150 ms after cues stop Given a system audio interruption (phone/VoIP/assistant) occurs mid-set When the interruption begins Then cueing pauses within 150 ms and the audio session yields without crash When the interruption ends Then cueing resumes on the next bar boundary (<= 2 beats delay) and aligns to target phase within <= 100 ms without a tempo jump Given an audio route change (speaker <-> wired/Bluetooth) When the route changes mid-set Then cueing continues with no missed beats and jitter remains within thresholds; reinit completes within 300 ms
Predictive Scheduling and Smooth CV Reconciliation
Given a synthetic 100 ms main-thread stall is injected once every 20 seconds When Tempo Coach runs at 80 BPM Then no beats are missed or duplicated and cumulative phase error stays <= 25 ms at p95 during and after the stall due to predictive scheduling Given computer-vision reports a phase offset >= 150 ms When reconciliation is applied Then per-beat phase correction is limited to <= 5% of the beat interval and <= 15 ms per beat And the offset reduces to <= 50 ms within 8 beats with no instantaneous jump > 10% of the beat interval
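The clamped reconciliation in the criterion above (per-beat correction limited to 5% of the beat interval and 15 ms) can be sketched as:

```python
def phase_step(offset_ms: float, beat_interval_ms: float) -> float:
    """Per-beat correction toward a CV-reported phase offset, clamped so the
    listener never hears a jump: at most 5% of the interval and 15 ms."""
    limit = min(0.05 * beat_interval_ms, 15.0)
    return max(-limit, min(limit, offset_ms))
```

At 80 BPM the interval is 750 ms, so a 150 ms offset shrinks by 15 ms per beat and reaches 30 ms after 8 beats, inside the <= 50 ms target.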
Power-Efficient On-Device Scheduling
Given a 20-minute session at 60 BPM with screen locked and background audio active When running on representative mid-tier devices Then average CPU utilization attributable to the cueing engine is <= 6% And device battery drop is <= 4% over the session And no thermal throttling events are logged by the OS
Cross-Platform Automated Latency Consistency
Given an automated loopback latency harness with timestamped markers When standardized tests run at 40, 60, 90, and 120 BPM for 60 seconds on target iOS and Android devices in CI Then p95 jitter <= 20 ms and p99 <= 35 ms for each device and tempo And cumulative beat count matches expected with 0 missed beats And median inter-beat interval differs by <= 10 ms between platforms for the same tempo And the CI job fails on any threshold breach and archives timing traces as artifacts for analysis

Range Guard

Live safety rails overlay the camera view to mark protocol-safe ranges of motion. Gentle pre-limit vibrations and color shifts warn before entering risky zones, with stage-aware thresholds for post-op progressions. Near-misses and overrides are logged to the clinician, lowering re-injury risk while preserving patient confidence.

Requirements

Real-time Safety Rails Overlay
"As a patient, I want clear visual rails over my live camera view so that I can stay within safe ranges without stopping my exercise."
Description

Render dynamic, translucent safety rails directly on the live camera feed to delineate protocol-approved ranges of motion for target joints. The overlay binds to detected body landmarks and updates every frame, color-coding zones (safe=green, caution=amber, unsafe=red) with smooth transitions and low-latency performance (target ≥24 FPS, <120 ms end-to-end). Supports portrait/landscape, left/right side selection, and multi-joint exercises. Degrades gracefully when tracking confidence drops (e.g., shows "Recenter" prompt) and resumes automatically. Integrates with MoveMate’s rep counter and session flow; persists per-exercise display preferences.

Acceptance Criteria
Live Overlay Binds to Detected Landmarks
Given the live camera feed is active and target joint landmarks are detected with confidence >= 0.80 When the user moves the joint(s) within protocol range Then translucent safety rails render bound to the landmarks and update every frame with alignment error <= 10 px at 1080p And zones are color-coded as safe=green, caution=amber, unsafe=red per protocol thresholds And color transitions complete within 150 ms with no visible flicker
Low-Latency and Frame Rate Performance
Given a device that meets the minimum supported spec and the overlay is enabled When capturing a continuous 60-second exercise trial Then the average frame rate is >= 24 FPS and the 95th percentile end-to-end latency is <= 120 ms And dropped frames are <= 2% and no single frame exceeds 200 ms end-to-end latency
Orientation and Side Selection Behavior
Given the overlay is active When the device rotates between portrait and landscape or the user toggles left/right side selection Then the rails reorient and rebind within 300 ms without misalignment or clipping And color zones and labels remain correctly mirrored, legible, and within the safe display area
Multi-Joint Exercise Support
Given a protocol that targets two or more joints simultaneously When the user performs the movement Then distinct rails render for each targeted joint with unique labels and non-overlapping interactive areas (>= 24 px separation) And all targeted joint overlays update every frame and maintain correct relative positioning across the motion
Graceful Degradation and Auto-Resume
Given tracking confidence for any required landmark drops below 0.60 for > 500 ms during an active exercise When this occurs Then the rails fade out within 200 ms and a centered "Recenter" prompt appears And when confidence returns to >= 0.75 for >= 500 ms the rails automatically resume and the prompt clears And 'tracking_degraded' and 'tracking_resumed' events are logged with timestamps
Rep Counter and Session Flow Integration
Given an exercise session with the rep counter running and the overlay enabled When reps are detected Then rep detection remains within ±1 rep of an overlay-off baseline over 30 consecutive reps And each rep event stores the terminal zone state (safe/caution/unsafe) in the session data model And starting, pausing, or ending the exercise does not reset overlay state or user preferences
Per-Exercise Display Preferences Persistence
Given the user changes overlay settings (visibility, opacity, side selection) during an exercise When the same exercise is launched again by the same user on the same device Then the last-used overlay settings for that exercise are applied automatically And the user can restore defaults via a single action and preference changes persist within 1 second of update
Pre-limit Multimodal Warnings
"As a patient, I want subtle warnings just before I exceed my safe range so that I can self-correct without breaking my flow."
Description

Provide gentle, configurable pre-limit cues before the motion enters risky ranges, including subtle haptics (phone/watch), color shifts of the rails, and optional soft audio tones. Pre-limit thresholds are expressed as a percentage of the configured limit with hysteresis to prevent flicker, debouncing to avoid cue spam, and respect for system accessibility settings (e.g., reduced motion, vibration off). Works offline and fails safe (no cues) if device haptics are unavailable. Clinician and patient can enable/disable specific modalities per exercise as allowed by clinic policy.
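The threshold-with-hysteresis-and-debounce behaviour described above can be sketched as a small state machine; names and defaults are illustrative, and the 200 ms dwell before deactivation is omitted for brevity:

```python
class PreLimitCue:
    """Fires a cue when motion crosses P*L upward; re-arms only after motion
    drops below (P - H)*L, and rate-limits firing to once per debounce window."""

    def __init__(self, limit: float, pre_pct: float = 0.85,
                 hysteresis: float = 0.05, debounce_s: float = 3.0):
        self.enter = pre_pct * limit                 # activation threshold
        self.exit = (pre_pct - hysteresis) * limit   # deactivation threshold
        self.debounce_s = debounce_s
        self.active = False
        self.last_fire_s = float("-inf")

    def update(self, value: float, now_s: float) -> bool:
        """Return True when a haptic/audio cue should fire at this sample."""
        if not self.active and value >= self.enter:
            self.active = True
            if now_s - self.last_fire_s >= self.debounce_s:
                self.last_fire_s = now_s
                return True
        elif self.active and value < self.exit:
            self.active = False  # re-arm; no cue on the way down
        return False
```

The hysteresis gap prevents flicker when the motion hovers at the threshold, and the debounce window keeps rapid oscillations from producing cue spam while the visual indication can persist independently.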

Acceptance Criteria
Threshold-based cue activation with hysteresis
Given an exercise with stage-specific limit L, pre-limit percent P=85%, and hysteresis H=5% When the measured motion value crosses upward through P*L Then the rail color shifts to the pre-limit state and enabled haptic (phone/watch) and optional soft audio fire within 150 ms And cues remain active while motion ≥ (P - H)*L And cues deactivate only after motion < (P - H)*L for at least 200 ms
Debounced cue emissions to prevent spam
Given a debounce window D=3 seconds and pre-limit cues currently active When the motion exits and re-enters the pre-limit band within D without dropping below (P - H)*L for ≥200 ms Then haptic and audio cues trigger at most once per D And visual pre-limit indication may persist continuously without additional activations And total cue activation events do not exceed 1 per D during rapid oscillations around the threshold
Per-exercise modality controls with clinic policy precedence
Given clinic policy forbids Audio for a specific exercise When a patient attempts to enable Audio in exercise settings Then the Audio control is disabled or shows as locked and Audio cues never play during that exercise Given clinic policy allows all modalities When clinician or patient (per policy) enables/disables Visual, Haptic (phone/watch separately), or Audio for the exercise Then the chosen settings persist across sessions and offline and are enforced during cue emission
Respect system accessibility and device settings
Given OS Accessibility Reduce Motion is enabled When pre-limit is entered Then the rails apply a static color change without animations or flashing Given system vibrations are disabled or haptics permission is denied Then no haptic cues are emitted regardless of in-app settings Given the device is in Silent or Do Not Disturb mode Then soft audio tones are suppressed and never override system settings
Offline operation with cached configuration
Given the device is offline and the exercise configuration (limits, P, H, modality toggles, debounce) is cached locally When a session runs and pre-limit is reached Then pre-limit detection and enabled cues operate fully with no network calls or errors Given the device is offline and configuration is not yet cached When a session is started Then no cues are emitted (fail safe) and the session continues without errors or crashes
Graceful degradation when haptics are unavailable
Given the phone and/or watch lack haptic hardware, are in a mode that disables haptics, or the app lacks haptic permission When pre-limit is reached Then no haptic cues are emitted and the app does not crash or hang And Visual and Audio cues (if enabled and permitted) continue to operate as configured
Stage-aware Protocol Thresholds
"As a clinician, I want stage-based limits that update automatically so that my patients progress safely according to protocol."
Description

Enable clinician-defined, stage-based safe ranges that progress over time (e.g., post-op phases). Each stage specifies per-joint min/max angles, start criteria (date- or milestone-based), minimum duration, and maximum allowed progression per stage. Includes templates for common protocols and per-patient overrides. Upon stage change, thresholds update the overlay and cues automatically and notify the patient. All changes are versioned and auditable to support clinical governance.
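Date-based stage selection with minimum-duration gating might be sketched like this (field names are illustrative; milestone criteria would add a second eligibility test):

```python
from datetime import date

def active_stage(stages, today):
    """stages: dicts with 'name', 'start' (date), and 'min_days'. Returns the
    stage in effect today, never advancing past a stage before its minimum
    duration has elapsed."""
    current = None
    for s in sorted(stages, key=lambda s: s["start"]):
        if s["start"] > today:
            break  # future-dated stages are ignored
        if current is None or (today - current["start"]).days >= current["min_days"]:
            current = s
    return current
```

Note that a stage whose start date has passed is still withheld while the prior stage's minimum duration is unmet, matching the blocking rule in the criteria.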

Acceptance Criteria
Create Stage Template with Per-Joint Ranges
Given I am a clinician with template permissions When I create a new protocol template with 1+ stages specifying per-joint min and max angles, a start criteria type (date-based or milestone-based), a minimum duration (in days), and a maximum allowed progression for each stage Then the system validates required fields, ensures min < max for all joints, and enforces numeric ranges within joint-appropriate limits And Save is disabled until all validation errors are resolved And upon Save, the template is persisted and appears in the template list within 2 seconds with the correct stage count
Assign Protocol Template to Patient with Overrides
Given a patient record is open and a protocol template is selected When I assign the template and optionally override any stage's per-joint min/max angles, start criteria, minimum duration, or max progression for this patient Then the patient-specific protocol is created without altering the source template And overridden fields are clearly marked and traceable to the patient record And I can revert any overridden field back to the template default in one action
Automatic Stage Transition by Date
Given a patient protocol stage has a date-based start criteria and a minimum duration of N days When the current date/time reaches the scheduled start and the minimum duration of the current stage has elapsed Then the next stage becomes active automatically at the configured time boundary (default midnight local) without user action And the activation timestamp is recorded in the patient’s audit log
Automatic Stage Transition by Milestone
Given a patient protocol stage has a milestone-based start criteria (e.g., "perform 3 sessions with ROM >= 90°") and a minimum duration When the milestone is achieved by verified session data or clinician confirmation and the minimum duration has elapsed Then the next stage becomes active within 1 minute of milestone confirmation And the activation event records the milestone source, timestamp, and actor (system or clinician) in the audit log
Enforce Minimum Duration and Max Allowed Progression
Given a protocol defines per-stage minimum duration and maximum allowed progression (delta) for per-joint angles When a transition to the next stage would exceed any joint’s max allowed progression or occur before the minimum duration Then the transition is blocked and a clear error message identifies the violating joints and limits And a clinician with override permissions can proceed by providing a written justification, which is captured in the audit log and marked as an override
Real-time Overlay/Cues Update and Patient Notification on Stage Change
Given the patient has an active protocol and is using exercise mode When a stage change occurs (automatic or clinician-triggered) Then the range overlay and haptic/visual cues update to the new stage thresholds within 1 second And the patient receives an in-app banner immediately and a push notification within 1 minute (queued if offline) indicating the new stage and summary of changes
Versioning and Audit Trail for Threshold Changes
Given any change to protocol templates, per-patient overrides, or stage activations When the change is saved Then a new immutable version is created capturing before/after values for all affected fields, actor identity, timestamp, patient/template IDs, and reason (required for clinician-initiated changes) And authorized users can view a chronological audit trail and export it to CSV with identical values to the on-screen log
Near-miss and Override Logging
"As a clinician, I want logs of near-misses and overrides so that I can identify risk patterns and adjust the plan."
Description

Capture structured events whenever a movement approaches (near-miss), enters, or persists in an unsafe zone, and when the patient overrides warnings. Log timestamp, exercise ID, joint, measured angle, duration in zone, pre-limit distance, model confidence, and device metadata. Store derived metrics only (no raw video), queue offline, and sync reliably with retry. Surface summaries in the clinician dashboard (frequency, trends, per-exercise heatmaps), support CSV export, and allow configurable clinician alerts when thresholds are exceeded.
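Emitting each event type at most once per contiguous unsafe episode can be sketched as follows; the 90° limit and 2000 ms persistence threshold are the example values from this requirement's acceptance criteria:

```python
class UnsafeZoneLogger:
    """Logs unsafe_entry once per contiguous episode and unsafe_persist once
    when dwell time in the zone passes the persistence threshold."""

    def __init__(self, unsafe_deg: float, persist_ms: int = 2000):
        self.unsafe_deg = unsafe_deg
        self.persist_ms = persist_ms
        self.in_episode = False
        self.entry_ms = 0
        self.persist_logged = False

    def sample(self, angle_deg: float, t_ms: int) -> list:
        """Feed one measurement; returns the event types to store, if any."""
        in_zone = angle_deg >= self.unsafe_deg
        if in_zone and not self.in_episode:
            self.in_episode, self.entry_ms, self.persist_logged = True, t_ms, False
            return ["unsafe_entry"]
        if in_zone and not self.persist_logged and t_ms - self.entry_ms >= self.persist_ms:
            self.persist_logged = True
            return ["unsafe_persist"]
        if not in_zone:
            self.in_episode = False  # exiting ends the episode; re-entry starts a new one
        return []
```

Resetting on exit is what allows a later re-entry to produce a fresh unsafe_entry without ever duplicating events within one episode.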

Acceptance Criteria
Safety Event Logging (Near-miss, Entry, Persistence)
Given Range Guard thresholds are configured for knee flexion with near_miss_distance_deg=5 and unsafe_angle_deg=90 for exercise E123 in session S1 When the measured knee angle reaches 86 degrees for at least 200 ms without reaching 90 degrees Then exactly one near_miss event is stored with fields: timestamp (ISO 8601 with timezone), exercise_id=E123, joint="knee", measured_angle_deg=86, duration_in_zone_ms>=200, pre_limit_distance_deg=4, model_confidence in [0.0,1.0], device_metadata present When the knee angle first reaches 90 degrees Then exactly one unsafe_entry event is stored for that contiguous episode with duration_in_zone_ms=0 When the knee angle remains at or above 90 degrees for at least 2000 ms Then exactly one unsafe_persist event is stored with duration_in_zone_ms>=2000 And upon exiting the unsafe zone and later re-entering, a new unsafe_entry (and unsafe_persist if applicable) is stored And no duplicate events of the same type are emitted within a single contiguous episode
Override Action Logging
Given an on-screen warning and vibration are active due to an unsafe zone entry during exercise E123 When the patient selects "Override and continue" and confirms within the app Then exactly one override event is stored with fields: timestamp (ISO 8601), exercise_id=E123, joint, measured_angle_deg at confirmation time, duration_in_zone_ms at confirmation time, pre_limit_distance_deg at confirmation time (if applicable), model_confidence, and device metadata And the override event is included in the next sync and visible in clinician summaries
Event Data Integrity and Privacy Constraints
Given safety events are generated during a session When events are written to local storage and prepared for upload Then each event contains non-null fields: timestamp (ISO 8601 with timezone), exercise_id, joint, measured_angle_deg (number), duration_in_zone_ms (integer >=0), pre_limit_distance_deg (number >=0), model_confidence (0.0–1.0 inclusive), device_model, os_version, app_version And measured_angle_deg and pre_limit_distance_deg are expressed in degrees and rounded to at most one decimal place And for near_miss events, pre_limit_distance_deg <= configured near_miss_distance_deg; for unsafe_entry events, duration_in_zone_ms=0 And no raw image or video files are written to the app sandbox or uploaded (verified by absence of media file writes and by network payload inspection containing only derived metrics)
Offline Queueing and Reliable Sync with Retry
Given the device is offline while safety events occur When the user completes the session and force-closes and reopens the app Then all unsynced events remain in a durable queue When connectivity is restored Then the client attempts upload immediately and, on transient failures (network error or HTTP 5xx), retries with exponential backoff and jitter until success or a maximum backoff cap is reached And once the server acknowledges receipt, queued events are marked synced and removed from the queue And no duplicate records appear on the server or dashboard for the same event (verified by unique composite key of timestamp+exercise_id+joint+event_type)
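The exponential backoff with jitter required above might be sketched as "full jitter" backoff; the base and cap constants are illustrative:

```python
import random

def retry_delay(attempt: int, base_s: float = 1.0, cap_s: float = 60.0, rng=None):
    """'Full jitter' backoff: delay drawn uniformly from
    [0, min(cap, base * 2^attempt)], so concurrent clients desynchronize."""
    rng = rng or random.Random()
    return rng.uniform(0.0, min(cap_s, base_s * (2 ** attempt)))
```

Jitter matters here because many patient devices may reconnect at once (e.g., after a clinic Wi-Fi outage); randomizing delays avoids a synchronized retry stampede against the sync endpoint.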
Clinician Dashboard Summaries and Heatmaps
Given a patient has synced safety events across multiple sessions and exercises in the last 30 days When a clinician opens the dashboard and filters by patient and date range Then frequency counts per event_type (near_miss, unsafe_entry, unsafe_persist, override) and per exercise are displayed and equal to the underlying events And a trends chart shows daily or weekly counts over the selected range and matches event totals And a per-exercise heatmap visualizes counts binned by joint angle ranges (e.g., 10-degree bins) over the selected range, and cell values equal the corresponding counts And dashboard updates reflect newly synced events within 5 minutes
CSV Export of Safety Events
Given a clinician selects a patient and date range on the dashboard When they click Export CSV Then a CSV is generated with one row per event and headers including: timestamp, event_type, exercise_id, joint, measured_angle_deg, duration_in_zone_ms, pre_limit_distance_deg, model_confidence, device_model, os_version, app_version And the row counts and field values match the events visible in the dashboard filters And timestamps are in ISO 8601 with timezone; numeric fields use a dot decimal separator; units are degrees and milliseconds And the file contains no raw media references or pixel data
Configurable Clinician Alerts on Threshold Breach
Given a clinician configures alert rules such as: near_miss_count >= 5 in 24 hours per patient; unsafe_persist_total_duration_ms >= 3000 in a single session; override_count >= 1 per session When synced events satisfy any configured rule Then an alert is generated within 10 minutes, displayed in the dashboard Alerts panel, and optionally sent via email/push if enabled And the alert includes patient identifier, exercise, rule name, observed value, threshold, time window, and a link to details And no more than one alert per rule per patient is generated within a configured suppression window (e.g., 24 hours) And changing a rule affects only future evaluations
Guided Calibration & Environment Check
"As a patient, I want a quick calibration that ensures the app reads my movement accurately so that I can trust the safety cues."
Description

Provide a short guided setup before first use and as needed: verify camera distance, framing, and lighting; collect a neutral reference pose and quick end-range samples; validate tracking confidence against a threshold and prompt readjustment if needed. Save per-patient, per-exercise calibration data (including side selection) to improve angle estimation robustness. Block session start if calibration fails, and offer a fast re-check in subsequent sessions to keep accuracy high.
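The environment gate can be sketched as a set of named checks whose failures drive targeted readjustment prompts; the thresholds below mirror this requirement's acceptance criteria and are otherwise assumptions:

```python
def environment_check(distance_m: float, in_frame_pct: float,
                      lighting_score: float, confidence: float):
    """Return (passed, failed_check_names). Thresholds follow the criteria:
    1.5-3.0 m distance, >= 90% of joints in frame, lighting >= 0.8,
    tracking confidence >= 0.85."""
    checks = {
        "distance": 1.5 <= distance_m <= 3.0,
        "framing": in_frame_pct >= 0.90,
        "lighting": lighting_score >= 0.8,
        "confidence": confidence >= 0.85,
    }
    failed = [name for name, ok in checks.items() if not ok]
    return len(failed) == 0, failed
```

Returning the failed check names (rather than a bare boolean) is what lets the fast re-check re-run only the failed steps in later sessions.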

Acceptance Criteria
First-Time Environment Setup Passes
Given a first-time user launches Guided Calibration for an exercise When the camera preview is open and auto-checks run for 5 seconds Then the estimated user-to-camera distance is between 1.5 m and 3.0 m And at least 90% of required joints are within frame for 3 consecutive seconds And the lighting quality score is >= 0.8 with no severe backlight detected And device motion relative to the scene is < 5 cm/s during the check
Neutral Reference Pose Captured and Validated
Given the user is prompted to assume a neutral reference pose When the user holds the pose for at least 3 seconds Then joint-angle variance for target joints is <= 3° over the capture window And tracking confidence for target joints is >= 0.85 And a baseline pose snapshot with timestamp is saved to the patient–exercise record
End-Range Samples and Side Selection Completed
Given the exercise requires side selection and end-range capture When the user selects Left/Right (or affected side) and performs guided reaches And at least 2 valid end-range attempts are captured per required direction Then maximum safe angles per direction are recorded with confidence >= 0.85 And any pre-limit warnings during capture are noted And side selection and per-direction end-range data are saved to the patient–exercise record
Tracking Confidence Gate and Readjustment Prompts
Given auto-tracking runs during calibration steps When average tracking confidence for target joints drops below 0.85 for >= 1.0 second Then the system displays targeted prompts to adjust distance, framing, or lighting And a retry is initiated upon user confirmation and re-tested automatically And after 2 failed retries, the calibration result is set to Failed with reason codes
Block Session Start on Calibration Failure
Given the latest calibration status for the exercise is Failed When the user attempts to start the exercise session Then the Start action is blocked and a clear message explains the failure criteria not met And only Retry Calibration and Exit options are available And an audit log entry is created with timestamp, failure reasons, and user actions
Persisted Calibration Data per Patient per Exercise
Given calibration completes successfully When the save operation occurs Then the system stores per patient and exercise: neutral baseline angles, end-range angles per direction, side selection, device model, camera orientation, distance estimate, lighting score, tracking thresholds, and timestamps And the data is retrievable by clinician dashboard and next-session re-check APIs And each new calibration creates a new version without overwriting prior entries
Fast Re-Check on Subsequent Sessions
Given a returning user starts the same exercise with prior successful calibration When the quick re-check runs Then environment and confidence checks complete in <= 20 seconds without full walkthrough if thresholds are met And the session proceeds automatically when distance, framing, lighting, and confidence meet configured thresholds And if any check fails, only the failed steps are re-run; if they fail again, full calibration is invoked
Privacy-first On-device Processing
"As a patient, I want my movement data handled privately on my device so that I feel safe using Range Guard."
Description

Perform pose estimation and range calculations on-device; do not store raw video or images. Persist only derived angles, counts, and event logs, encrypted at rest and in transit. Present clear consent screens detailing what is collected and why, with clinic-configurable policies and patient opt-outs where allowed. Implement data retention windows, secure deletion, and HIPAA/PHIPA-aligned safeguards. Operate offline with later secure sync without exposing sensitive media.

Acceptance Criteria
On-device Pose and Range Computation
Given a user starts a Range Guard exercise session with camera enabled When pose estimation and range-of-motion analysis are performed Then all inference and range calculations execute on-device without invoking any cloud service And no outbound network calls are made for inference or media upload during the session And the session runs successfully while the device is in airplane mode And a network capture shows zero bytes of media transmitted during the session
No Raw Media Storage
Given a live session is running or ends (including unexpected background, crash, or quit) When the app sandbox and system media libraries are inspected Then no raw video or image files exist in persistent storage or caches And no entries are written to the OS photo library or shared media directories And temporary frame buffers are cleared on session end and are not restorable after relaunch
Persist Only Derived Metrics
Given the app needs to save exercise results When data is written to storage Then only derived joint angles, rep counts, timestamps, and event logs are persisted And no raw frames, thumbnails, or per-frame pixel data are persisted And the persisted schema excludes fields capable of reconstructing images And data export endpoints return only the derived fields
Encryption and Access Controls
Given derived data is stored locally Then it is encrypted at rest using a hardware-backed keystore and AES-256 And encryption keys are non-exportable and tied to device unlock And access to local data requires an authenticated user; app auto-locks after 2 minutes of inactivity or on background When data is synced to the backend Then TLS 1.2+ with certificate pinning is used And a packet capture shows only encrypted payloads with no media content
Clear Consent and Configurable Policies
Given a first-time patient login or when policies change When opening Range Guard or starting an exercise Then a consent screen explains collection of angles, counts, and event logs, purpose, retention, and sharing And consent requires explicit opt-in before any persistence or sync occurs And clinic-configurable policy text and per-category opt-outs (where legally allowed) are presented And the app records consent with patient ID, policy version, timestamp, and locale And users can review and revoke consent from settings at any time
Retention Windows and Secure Deletion
Given clinic retention policies are configured per data category When any record exceeds its retention window Then it is securely deleted within 24 hours And deletion removes cryptographic keys or uses OS secure deletion APIs And a deletion audit log entry is created with record type, IDs, timestamp, and actor "system" And revocation of consent triggers immediate secure deletion of affected records
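The retention sweep described above can be sketched as a pure function over stored records. This is a minimal illustration, not the implementation: the category names and retention days are assumptions, and real secure deletion would also destroy the record's encryption key or call the OS secure-deletion API.

```python
from datetime import datetime, timedelta, timezone

# Assumed clinic policy: retention days per data category (illustrative values).
RETENTION_DAYS = {"angles": 365, "event_logs": 90}

def sweep_expired(records, now=None):
    """Apply per-category retention; return (kept_records, audit_entries).

    Each record is a dict with "id", "category", and a tz-aware "created_at".
    Deleted records produce an audit entry with actor "system", per the spec.
    """
    now = now or datetime.now(timezone.utc)
    kept, audit = [], []
    for rec in records:
        limit = timedelta(days=RETENTION_DAYS[rec["category"]])
        if now - rec["created_at"] > limit:
            audit.append({
                "type": rec["category"],
                "id": rec["id"],
                "deleted_at": now.isoformat(),
                "actor": "system",
            })
        else:
            kept.append(rec)
    return kept, audit
```

Running this on a 24-hour schedule (and immediately on consent revocation, with the affected patient's records filtered in) would satisfy the deletion-window criterion.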
Offline Operation and Deferred Secure Sync
Given the device is offline When a session is run and completed Then counting, Range Guard warnings, and event logging function fully without connectivity And only derived data is queued locally in encrypted storage for later sync When connectivity returns Then queued items sync successfully over TLS without uploading any media And duplicate uploads are prevented via idempotent identifiers
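The idempotent-identifier requirement above can be sketched with a content-addressed queue: a record's ID is a hash of its canonical payload, so a retried upload of the same record is recognized and never duplicated. Names here (`SyncQueue`, `upload`) are hypothetical, and a real client would persist the queue in encrypted storage rather than memory.

```python
import hashlib
import json

class SyncQueue:
    """Offline queue for derived metrics with idempotent, dedup-safe sync."""

    def __init__(self):
        self._pending = {}   # record id -> record
        self._acked = set()  # ids the backend has confirmed

    @staticmethod
    def record_id(record: dict) -> str:
        # Canonical JSON so the same payload always hashes to the same id.
        blob = json.dumps(record, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def enqueue(self, record: dict) -> str:
        rid = self.record_id(record)
        if rid not in self._acked:  # never re-queue a confirmed upload
            self._pending[rid] = record
        return rid

    def sync(self, upload) -> int:
        """Flush pending records; upload(rid, record) returns True on ack."""
        sent = 0
        for rid in list(self._pending):
            if upload(rid, self._pending[rid]):
                self._acked.add(rid)
                del self._pending[rid]
                sent += 1
        return sent
```

Because the server also keys writes by the same ID, a sync that succeeds server-side but loses the acknowledgment is safely retried.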
Clinician Controls & Remote Configuration
"As a clinician, I want to configure Range Guard settings remotely so that I can standardize and personalize safety limits at scale."
Description

Allow clinicians to configure Range Guard remotely from the web dashboard: select exercises, set per-joint limits and pre-limit percentages, toggle modalities (haptics, audio, color), choose progression rules, and lock settings. Changes propagate to patient devices within 5 minutes with conflict resolution and an audit trail. Provide role-based permissions, templates for rapid setup, and cohort-level bulk updates to standardize care across patients.

Acceptance Criteria
Remote Configuration Propagation 5 Minutes
Given a clinician with Edit and Publish permissions selects exercises, sets per-joint ROM limits and pre-limit percentages, and toggles modalities (haptics, audio, color) on the web dashboard for a specific patient When the clinician clicks Publish Then the patient device receives the new configuration within 5 minutes of publish time if online, or within 5 minutes of next reconnect if offline And the Range Guard module enforces the new limits and modalities on the next session without requiring an app restart And the dashboard displays a Delivered timestamp per device configuration within the SLA
Progression Rules Execution and Overrides
Given a clinician assigns a progression rule to an exercise (e.g., increase shoulder flexion +5° weekly if no near-misses for 3 consecutive sessions) When the rule conditions are met by patient telemetry Then the system schedules and applies the updated limits at the defined cadence without manual intervention And both clinician and patient receive a notification of the change And the rule evaluation inputs and resulting limit changes are recorded in the audit trail When a clinician manually overrides a scheduled rule change Then the override supersedes the rule for the selected duration and the action is logged with actor, timestamp, and previous values
Patient-Side Settings Lock Enforcement
Given a clinician enables Lock settings for a patient's Range Guard configuration When the patient opens the mobile app and navigates to Range Guard settings Then configuration controls are disabled or hidden and cannot be changed by the patient And any attempted modification is blocked and logged with timestamp and user identity And the UI displays a read-only notice that settings are locked by the clinician
Role-Based Permissions Enforcement
Given roles Admin, Clinician, and Assistant are configured with permissions [view, edit, publish, bulk-update, template-manage] When each role attempts actions beyond its assigned permissions Then the system denies the action with a clear error and no changes are persisted And permitted actions succeed without escalation And the default permission matrix is enforced: Admin [all actions]; Clinician [view/edit/publish for assigned patients, apply templates, cohort updates within assigned patients]; Assistant [view assigned patients, create drafts, no publish/bulk] And all permission checks are logged in the audit trail with actor, role, action, resource, and outcome
Template Creation, Versioning, and Application
Given a user with template-manage permission creates a Range Guard template containing exercises, per-joint limits, pre-limit percentages, modalities, and progression rules When the user saves the template as version 1.0 and applies it to a patient Then the patient receives the template configuration per the Propagation SLA And subsequent template edits create a new version and do not alter previously applied patient settings until re-applied And applying a newer template version updates only fields present in the template, preserving patient-specific overrides for fields not in the template And all template create, edit, and apply actions are recorded in the audit trail
Cohort-Level Bulk Updates with Preview, Scheduling, and Rollback
Given a user with bulk-update permission selects a cohort via filters (e.g., surgery type = ACL, stage = Post-Op Week 3) and chooses a template or parameter changes When they open the preview Then the system shows the count and list of affected patients with per-patient diffs And the user can schedule execution for now or a future time with timezone and blackout window options And on execution, 95% of online devices receive updates within 5 minutes; offline devices receive updates within 5 minutes of reconnect And per-patient status (success, queued, failed) and error details are displayed And a one-click rollback restores prior settings for selected patients and is logged
Conflict Resolution and Audit Trail
Given two or more conflicting changes target the same patient's Range Guard settings within overlapping time windows (e.g., manual edit vs. cohort bulk update) When the system processes these changes Then a deterministic resolution policy is applied in this priority: Locked settings > Manual clinician publish > Scheduled progression rule > Template apply > Bulk update And non-winning changes are not applied, and their actors are notified with the reason and reference to the winning change And for each attempted change, an audit entry captures actor, role, source (manual/template/bulk/rule), timestamp, old values, new values, resolution outcome, and affected devices And audit logs are filterable by patient, cohort, actor, action, and date range and exportable to CSV
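The deterministic resolution policy above is simple to express as a priority ranking with a timestamp tiebreak. A minimal sketch (field and source names are assumptions, not the real schema):

```python
# Priority order from the spec; lower index wins.
PRIORITY = ["locked", "manual", "progression_rule", "template", "bulk"]

def resolve(changes):
    """Resolve conflicting changes to one patient's settings.

    Each change is a dict with at least "source" (one of PRIORITY) and a
    numeric "timestamp". Returns (winner, losers); losers would then be
    rejected and their actors notified, per the spec.
    """
    ranked = sorted(
        changes,
        key=lambda c: (PRIORITY.index(c["source"]), -c["timestamp"]),
    )
    return ranked[0], ranked[1:]
```

Because the ordering is total and data-driven, any replica processing the same set of changes reaches the same outcome, which is what makes the audit trail's "resolution outcome" field reproducible.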

Ghost Guide

Before the first rep, a translucent ‘ideal path’ silhouette animates over the patient’s live view, showing angle targets and joint alignment to aim for. Patients can trace the ghost for 1–2 warm‑up reps to lock in form, cutting early mistakes and shortening the time to consistent, quality reps.

Requirements

Quick Pose Calibration
"As a patient, I want the app to quickly calibrate my camera and body position so that the ghost overlay lines up with my joints and I can follow it accurately from the first rep."
Description

Automatically calibrates the camera and user pose before the first rep of each assigned exercise. Guides the patient to proper framing and distance with on-screen bounding boxes and prompts, detects lighting and background clutter, and establishes scale and anchoring points from detected keypoints (hips, shoulders) to align the ghost silhouette. Persists per-exercise calibration profiles on-device and triggers lightweight re-calibration when confidence drifts. Outputs camera/device parameters, joint confidence scores, and scale factors consumed by the AR overlay, rep counter, and form error modules to improve overlay alignment and reduce false error flags.

Acceptance Criteria
First-Time Calibration for Assigned Exercise
Given an assigned exercise is started and no valid calibration profile exists for this exercise and device orientation When the camera session begins and the user is detected in frame Then the calibration flow starts within 2 seconds and displays a progress indicator And calibration completes within 8 seconds on 90th percentile devices And the ghost overlay remains hidden until calibration reports status=success And on success, the app surfaces a confirmation state and enables the overlay
Framing and Distance Guidance with Bounding Box
Given the live preview is active When the user’s body height in frame is < 60% or > 80% of the target bounding box Then directional prompts ("Move closer", "Move back", "Shift left", "Shift right") appear within 200 ms And prompts persist until shoulders and hips keypoints are inside the ROI for 2 continuous seconds And a visible checkmark appears when framing is satisfied
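The prompt-selection logic above reduces to a few threshold checks on framing metrics. A hedged sketch — the 60–80% height band comes from the criterion, while the horizontal-offset threshold and the sign convention are assumptions:

```python
def framing_prompt(body_height_frac, center_offset_frac):
    """Pick a directional prompt from framing metrics; None means framed OK.

    body_height_frac: user height / frame height (target band 0.60-0.80).
    center_offset_frac: horizontal offset of the body centre from the frame
    centre, as a fraction of frame width (positive = user appears right of
    centre). The 0.10 offset threshold is an illustrative assumption.
    """
    if body_height_frac > 0.80:
        return "Move back"
    if body_height_frac < 0.60:
        return "Move closer"
    if center_offset_frac > 0.10:
        return "Shift left"
    if center_offset_frac < -0.10:
        return "Shift right"
    return None
```

The caller would debounce the result (prompt within 200 ms, clear after shoulders and hips stay in the ROI for 2 s) before updating the UI.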
Lighting and Background Quality Check
Given scene quality analysis runs during calibration When average keypoint confidence for face/torso is < 0.75 for 30 consecutive frames OR mean luminance is < 15% or > 85% OR clutter score > 0.6 Then the app displays a corrective prompt ("Increase light", "Face the light", "Clear background") within 200 ms And calibration is blocked until quality metrics meet thresholds for 2 continuous seconds or the user taps "Proceed anyway" And if "Proceed anyway" is selected, the output marks qualityFlag=low
Anchoring Points and Scale Establishment
Given hips and shoulders are detected with confidence ≥ 0.80 When scale is computed from shoulder–hip distance and alignment is solved Then mean reprojection error between ghost joints and detected keypoints is ≤ 12 px or ≤ 3% of screen height (whichever is larger) And initial joint angle targets differ by no more than ±3° from measured angles at the calibration pose And the calibration data includes anchor keypoints (hips, shoulders) and scaleFactor
Per-Exercise Calibration Persistence and Retrieval
Given calibration succeeded for exercise X on this device orientation When exercise X is started again within 7 days Then the stored calibration profile loads in < 200 ms and is reused without full calibration if initial keypoint confidence ≥ 0.85 for the first 30 frames And profiles are namespaced per exerciseId and deviceOrientation And a visible "Recalibrate" action allows the user to discard and re-run calibration And per-profile storage footprint is ≤ 50 KB
Confidence Drift Detection and Lightweight Recalibration
Given a calibration profile is active during the exercise When the 45-frame running average keypoint confidence drops below 0.80 OR overlay reprojection error exceeds 18 px for ≥ 1 second Then a non-blocking prompt "Hold for quick recalibration" appears and lightweight recalibration runs at the next rep boundary in ≤ 2 seconds And lightweight recalibration updates scale and anchors without clearing the rep counter or exiting the exercise And auto-recalibration does not trigger more than once per minute unless the error condition persists
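The confidence-drift trigger above (45-frame running average below 0.80) maps naturally onto a fixed-size window. A minimal sketch using the spec's thresholds:

```python
from collections import deque

class DriftDetector:
    """Running-average keypoint confidence over a 45-frame window.

    update() returns True when the window is full and the average drops
    below the threshold, signalling lightweight recalibration at the next
    rep boundary (rate limiting is left to the caller).
    """

    def __init__(self, window=45, threshold=0.80):
        self._buf = deque(maxlen=window)
        self.threshold = threshold

    def update(self, confidence: float) -> bool:
        self._buf.append(confidence)
        full = len(self._buf) == self._buf.maxlen
        avg = sum(self._buf) / len(self._buf)
        return full and avg < self.threshold
```

The reprojection-error condition in the same criterion would be a second, analogous detector ORed with this one.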
Calibration Output Contract and Downstream Impact
Given calibration completes with status success or low When the payload is published to the shared store Then it contains: cameraIntrinsics{fx,fy,cx,cy,distCoeffs}, deviceOrientation, worldToCameraTransform, scaleFactor, anchorKeypoints{hips,shoulders}, jointConfidenceScores per keypoint, qualityScore, timestamp, schemaVersion And AR overlay, rep counter, and form error modules read the payload within 100 ms and log an acknowledgment And on a standard validation set, enabling calibration reduces form-error false positives by ≥ 25% and overlay alignment error by ≥ 30% versus a no-calibration baseline
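A lightweight contract check for the payload fields listed above helps downstream modules fail fast. This is a sketch of a validator against the spec's field list, not the real schema code:

```python
REQUIRED_FIELDS = {
    "cameraIntrinsics", "deviceOrientation", "worldToCameraTransform",
    "scaleFactor", "anchorKeypoints", "jointConfidenceScores",
    "qualityScore", "timestamp", "schemaVersion",
}

def validate_calibration_payload(payload: dict) -> list:
    """Return the missing field paths (empty list = contract satisfied)."""
    missing = sorted(REQUIRED_FIELDS - payload.keys())
    intrinsics = payload.get("cameraIntrinsics", {})
    for k in ("fx", "fy", "cx", "cy", "distCoeffs"):
        if k not in intrinsics:
            missing.append(f"cameraIntrinsics.{k}")
    anchors = payload.get("anchorKeypoints", {})
    for k in ("hips", "shoulders"):
        if k not in anchors:
            missing.append(f"anchorKeypoints.{k}")
    return missing
```

The AR overlay, rep counter, and form-error modules would run this check before acknowledging the payload.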
Ideal Path Model
"As a clinician, I want to define an ideal path with angle targets and tolerances so that patients see exactly what good form looks like and we can measure adherence consistently."
Description

Generates a parametric ‘ideal path’ for each exercise, including joint angle trajectories, alignment constraints, spatial envelopes, tempo targets, and acceptable tolerances by phase. Supports side specificity (left/right), therapist-prescribed modifications (e.g., limited ROM), and anthropometric scaling to the patient. Versioned templates are stored with the exercise plan and synchronized to the device for offline use. Provides machine-readable targets to the AR overlay and form-feedback engines, enabling consistent guidance and uniform scoring across sessions.

Acceptance Criteria
Parametric Ideal Path Generation
Given a prescribed exercise template exists with defined joints and phases When the Ideal Path Model generates the path Then the output includes jointAngleTrajectories[], alignmentConstraints[], spatialEnvelopes[], tempoTargets[], and tolerancesByPhase[] And each element contains explicit units, axes, and coordinateFrame metadata And per-phase tolerances specify angular (deg), positional (cm), and tempo (%) bounds And the payload conforms to JSON schema v1.2 And path generation completes in ≤150 ms on a mid-tier device
Side Specificity Handling
Given an exercise variant is flagged as "left" or "right" When the Ideal Path Model generates the path for each side Then target paths are mirrored across the sagittal plane with angle differences ≤1° and positional differences ≤1 cm between sides And joint labels, handedness flags, and side-specific constraints are correctly assigned And the AR overlay payload includes side=left/right And the scoring engine uses the side-specific targets for evaluation
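Mirroring across the sagittal plane is, in a body-centred frame, a sign flip on one axis. A sketch under an assumed coordinate convention (x pointing to the patient's right, "right" as the canonical side):

```python
def mirror_path(points, side):
    """Mirror an ideal-path point list across the sagittal (x = 0) plane.

    points: list of (x, y, z) tuples in a body-centred frame where x points
    to the patient's right. The "right" variant is taken as canonical;
    "left" negates x. Both the frame and the canonical side are assumptions
    for illustration.
    """
    if side == "right":
        return list(points)
    return [(-x, y, z) for (x, y, z) in points]
```

Since the operation is an exact reflection, the spec's ≤1° / ≤1 cm side-to-side tolerance is consumed entirely by numerical error in the downstream solve, not by the mirroring itself.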
Therapist Modifications & Limited ROM
Given a therapist applies modifications (e.g., max knee flexion 90°, tempo +10%, widened tolerance) to a plan When the Ideal Path Model generates the path Then modified constraints override defaults only for that plan instance And no joint angle exceeds the specified ROM caps at any phase And tempo and tolerance adjustments propagate to all affected phases And the resulting template records {authorId, timestamp, fieldsChanged} And invalid or conflicting modifications return a validation error with code MOD_CONFLICT and a descriptive message
Anthropometric Scaling to Patient
Given a patient profile contains height and limb segment lengths from calibration When the Ideal Path Model generates the path Then spatial envelopes and target positions scale proportionally to the patient’s segments And angle trajectories remain unscaled but correctly mapped to the patient’s joints And the calibration fit residual (RMS positional error) is ≤2 cm And if any anthropometric metric is missing, defaults are applied and scalingFallback=true is set in metadata
Versioned Template Storage & Immutability
Given a new ideal path template is saved to an exercise plan When the template is persisted Then it is assigned a semantic version (e.g., 1.3.0) and becomes immutable And prior versions remain readable and unchanged And each recorded session references the exact templateVersion used And updating a plan creates a new template version rather than editing in place And an audit log entry is stored with {templateVersion, planId, authorId, timestamp}
Device Sync & Offline Availability
Given a patient is assigned a plan with one or more ideal path templates When the mobile app performs a sync over a stable connection Then all required templates download and verify via checksum within 5 seconds for a 5 MB bundle And templates are stored locally and remain available offline And if the network is unavailable, the app uses the last verified version and queues updates for next connectivity And partial or corrupted downloads are retried up to 3 times and discarded if checksum verification fails
Machine-readable Targets Integration & Uniform Scoring
Given the AR overlay and form-feedback engines request targets When they call the Ideal Path API Then the API returns machine-readable targets conforming to schema v1.2, including phase timing, joint targets, and tolerances And target updates stream at ≥30 Hz with end-to-end on-device latency ≤50 ms And for identical motion replays across sessions/devices using the same template, rep scores differ by ≤2 percentage points and per-joint peak angle errors by ≤2°
Stabilized Ghost Overlay
"As a patient, I want a smooth, clearly visible ghost guide over my live camera view so that I can mirror the silhouette without lag or jitter."
Description

Renders a translucent, anatomically aware silhouette over the live camera view, aligned to the user’s skeleton and anchored to core joints for stability. Dynamically scales and rotates to maintain alignment across device orientations and distances, with colorblind-safe highlights for target angles and joint markers. Maintains low-latency (≤100 ms) and smooth playback (≥30 FPS) on supported devices, with graceful degradation on lower-end hardware. Includes basic occlusion handling and jitter smoothing to prevent flicker. Exposes runtime performance metrics and health signals to inform fallback behaviors.

Acceptance Criteria
Overlay Anchored to Core Joints During Movement
Given a detected user skeleton with pelvis, shoulders, hips, knees, and ankles at confidence ≥0.7 When the user performs 10 squats at varying speeds (0.5–1.5 Hz) Then for ≥95% of frames, the ghost joint markers align within ≤1% of the viewport’s shortest side or ≤3° angular error to the corresponding skeleton joints Given lateral translation up to 0.5 m relative to the camera When tracking the pelvis anchor across frames Then median drift relative to the pelvis is ≤0.7% of the viewport’s shortest side and no frame-to-frame jump exceeds 0.5% of the viewport’s shortest side
Auto-Scale and Rotate Across Orientation and Distance
Given the device is rotated between portrait and landscape during an active session When the device orientation changes Then overlay re-alignment completes within ≤200 ms and, after a 300 ms stabilization window, joint alignment error is ≤1% of the viewport’s shortest side or ≤3° for ≥95% of frames Given the user changes distance from ~1.0 m to ~2.0 m at ~0.5 m/s When auto-scaling occurs Then silhouette scale error is ≤5% relative to measured skeleton limb lengths and aspect ratio distortion is ≤2% Given camera roll tilt within ±10° When tilt changes Then the ghost maintains upright anatomical orientation within ±2° of the device horizon
Low-Latency and Smooth Playback on Supported Devices
Given a supported device under nominal background load When the ghost overlay runs continuously for 60 seconds Then motion-to-photon latency is ≤100 ms at P95, frame rate is ≥30 FPS at P95, dropped frames are ≤2%, and frame time P99 is ≤50 ms Given app performance tracing is enabled When the test session completes Then there are 0 ANRs and no single-frame stall exceeds 100 ms on the render thread
Colorblind-Safe Joint and Angle Highlights
Given deuteranopia, protanopia, and tritanopia simulation modes When rendering neutral, target-met, and target-missed states for angle arcs and joint markers over live video (50–500 lux) Then for each pair of states, CIEDE2000 ΔE ≥ 10 and contrast ratio against background ≥ 3:1, and each state includes a non-color cue (shape, pattern, or animation) distinguishing it Given the user enables High Contrast mode When markers and angle arcs render Then contrast ratio against background is ≥ 4.5:1 for all state indicators
Occlusion Handling and Jitter Smoothing
Given transient occlusions of tracked joints lasting ≤1.0 s (e.g., hand crossing knee) When a joint becomes occluded Then the overlay transitions opacity smoothly (≤20% change per frame), maintains positional continuity with frame-to-frame movement ≤1.0% of the viewport’s shortest side, and resumes full opacity within 300 ms after reappearance without visible snap Given synthetic pose noise injected at σ=2° angular and 1.0% positional (test harness) When smoothing is enabled Then output joint angle RMS jitter ≤0.7° and positional RMS jitter ≤0.4% of the viewport’s shortest side, with no oscillation components above 10 Hz at amplitude >0.2% of the viewport
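The jitter-smoothing behaviour above can be approximated with an exponential moving average per keypoint. This is a deliberately simple sketch: a speed-adaptive filter (e.g., a One Euro-style filter) would reduce lag on fast motion, but a fixed-alpha EMA shows the mechanism.

```python
class JointSmoother:
    """Exponential moving average over a 2D keypoint to damp pose jitter.

    alpha in (0, 1] trades responsiveness for smoothness: higher alpha
    follows the raw detection more closely, lower alpha smooths harder.
    """

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self._state = None  # last smoothed (x, y)

    def update(self, point):
        if self._state is None:
            self._state = point  # initialize on first observation
        else:
            ax, ay = self._state
            bx, by = point
            a = self.alpha
            self._state = (a * bx + (1 - a) * ax, a * by + (1 - a) * ay)
        return self._state
```

One instance per tracked joint keeps state independent; the occlusion path would freeze the state (and fade opacity) instead of feeding low-confidence detections in.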
Runtime Metrics and Automatic Fallbacks
Given runtime health metrics are enabled When the overlay is active Then the system emits 1 Hz samples containing timestamp, FPS, motion-to-photon latency P50/P95, per-joint pose confidence, dropped frame count, occlusion ratio, and smoothing window size via the internal health API Given performance degrades below targets (FPS P95 < 25 or latency P95 > 130 ms) for 2 consecutive samples When degradation is detected Then the system reduces overlay detail (hide angle arcs, reduce marker complexity) within ≤500 ms and logs device model, OS version, and fallback reason; upon 10 consecutive samples within target, it restores full fidelity; if still below targets after 10 s, it switches to markers-only mode and posts a non-blocking UI notice
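The degrade/restore hysteresis above (2 bad samples to degrade, 10 good to restore) can be sketched as a small state machine over the 1 Hz health samples; the further markers-only escalation after 10 s is omitted for brevity.

```python
class FallbackController:
    """Overlay fidelity controller driven by 1 Hz health samples.

    Degrades after 2 consecutive out-of-target samples (FPS P95 < 25 or
    latency P95 > 130 ms) and restores after 10 consecutive in-target
    samples, matching the spec's thresholds.
    """

    def __init__(self):
        self.mode = "full"
        self._bad = 0
        self._good = 0

    def sample(self, fps_p95, latency_p95_ms):
        bad = fps_p95 < 25 or latency_p95_ms > 130
        if bad:
            self._bad += 1
            self._good = 0
            if self._bad >= 2 and self.mode == "full":
                self.mode = "reduced"  # hide angle arcs, simplify markers
        else:
            self._good += 1
            self._bad = 0
            if self._good >= 10 and self.mode == "reduced":
                self.mode = "full"
        return self.mode
```

The asymmetric counts (2 down, 10 up) prevent the overlay from oscillating between fidelity levels when performance hovers near the threshold.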
Trace-to-Learn Warm-up
"As a patient, I want to trace the ghost for the first couple of reps so that I can lock in proper form before the app starts counting and scoring me."
Description

Enables 1–2 guided warm-up reps where the patient traces the ghost path with real-time feedback on adherence (e.g., percentage match, angle indicators, haptic/audio cues). Transitions automatically to normal counting once a ‘lock-in’ threshold is met; optionally extends warm-up if adherence remains low. Supports pause/resume, left/right side switching, and therapist-configured repetition count, tempo, and tolerance levels. Logs adherence metrics and time-to-lock-in for clinician dashboards and patient progress insights.

Acceptance Criteria
Warm-up Ghost Trace Feedback
- Given the therapist has enabled Ghost Guide and configured tolerance and tempo, When the patient enters the warm-up view, Then a translucent ideal-path silhouette with joint angle targets overlays the live camera feed before the first rep. - Given warm-up is active, When the patient moves, Then percent-match is displayed and updated at ≥10 Hz with overlay-to-feedback latency ≤150 ms. - Given configured angle tolerance, When joint deviation exceeds tolerance for ≥300 ms, Then a single haptic pulse and audio cue are emitted (rate-limited to ≥1 s between events). - Given configured tempo, When the patient performs warm-up reps, Then visual/audio timing cues align within ±100 ms of the configured tempo.
Automatic Lock-in and Transition
- Given a lock-in rule of N consecutive reps meeting configured thresholds, When the patient achieves the rule during warm-up, Then the app displays “Form locked” for 1 s, fades the ghost, and automatically switches to normal counting before the next rep. - Given initial warm-up reps = 2 and max extension reps = E, When thresholds are not met within the initial warm-up, Then the app extends warm-up by up to E reps or until lock-in is achieved, whichever occurs first. - When the max warm-up (initial + E) completes without lock-in, Then the app transitions to normal counting and shows “Warm-up complete—keep improving,” and records lock-in status = false. - Then all transition events preserve the current exercise, side, and session context without resetting non-warm-up settings.
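The lock-in and extension rules above reduce to a consecutive-streak scan over per-rep pass/fail results. A minimal sketch using the spec's defaults:

```python
def evaluate_warmup(rep_passes, initial=2, extension=2, lock_n=2):
    """Walk per-rep pass/fail flags; return (locked_in, reps_used).

    Locks in after lock_n consecutive passing reps. Warm-up runs for
    `initial` reps and extends by at most `extension` more if lock-in
    has not yet occurred, matching the configured rule.
    """
    streak = 0
    max_reps = initial + extension
    for i, passed in enumerate(rep_passes[:max_reps], start=1):
        streak = streak + 1 if passed else 0
        if streak >= lock_n:
            return True, i  # transition to normal counting before next rep
    return False, min(len(rep_passes), max_reps)
```

A False result after the maximum warm-up corresponds to the "Warm-up complete—keep improving" path with lock-in status recorded as false.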
Pause/Resume During Warm-up
- When the user taps Pause or the app is backgrounded, Then the current rep is not counted, live feedback is frozen, and timers stop; a Paused banner is shown within 200 ms. - Given a pause ≤5 minutes, When Resume is tapped or the app returns to foreground, Then warm-up state (rep count, side, thresholds, metrics so far) is restored and feedback resumes within 300 ms. - Given a pause >5 minutes or app relaunch, When the user returns, Then the app prompts to Continue or Restart warm-up; choosing Restart begins warm-up from rep 0 and retains prior metrics as a separate attempt. - Then all pause/resume events are timestamped and included in the session log.
Left/Right Side Switching
- Given a side is selected, When warm-up starts, Then the ghost and angle targets are mirrored to match the selected side and the UI displays the active side label. - When the user or protocol switches sides during warm-up, Then the warm-up rep count resets to 0 for the new side, lock-in is evaluated independently per side, and all metrics include a side identifier. - Then side switches are disabled during an in-progress rep; the switch becomes effective at the next rep start.
Therapist Configuration Enforcement
- Given a clinician plan exists, When the patient opens the exercise, Then the app loads and enforces these read-only settings for the patient: initial warm-up reps (1–2), max extension reps (0–5), lock-in consecutive reps (1–3), percent-match threshold (50–99%), angle tolerance (1–15°), and tempo (20–120 BPM or 0.5–6 s/rep). - When any setting is missing from the plan, Then the app uses defaults: initial=2, extension=2, lock-in=2 reps, percent-match=85%, angle tolerance=7°, tempo=60 BPM. - Then all enforcement is auditable: the effective settings are displayed in the warm-up info panel and included in the session log.
Adherence Metrics Logging and Sync
- Given a warm-up rep completes, Then the app records: timestamp, side, rep index, percent-match (mean and peak), per-joint mean angle deviation, time-in-tolerance percentage, cue events count, and pass/fail vs threshold. - Given warm-up start and a lock-in definition of N consecutive qualifying reps, When the Nth consecutive qualifying rep ends, Then the app records time-to-lock-in (ms); if no lock-in occurs, time-to-lock-in is null. - Given session end or connectivity is regained, Then all warm-up metrics are synced to the clinician dashboard within 60 s; retries continue with exponential backoff until success. - Given the analytics API, Then logged data include: sessionId, exerciseId, patientId, side, rep-level metrics, and lock-in summary.
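The per-rep summary metrics above can be derived from per-frame samples in one pass. A sketch, assuming each frame yields a (percent-match, angle-deviation) pair; the sample shape is an assumption for illustration:

```python
def rep_metrics(samples, tolerance_deg=7.0):
    """Summarize one warm-up rep from (match_pct, deviation_deg) frames.

    Returns mean and peak percent-match plus the percentage of frames whose
    deviation stayed within the configured angle tolerance (default 7°,
    matching the spec's fallback value).
    """
    matches = [m for m, _ in samples]
    in_tol = sum(1 for _, d in samples if d <= tolerance_deg)
    return {
        "percent_match_mean": sum(matches) / len(matches),
        "percent_match_peak": max(matches),
        "time_in_tolerance_pct": 100.0 * in_tol / len(samples),
    }
```

Each rep's dictionary would then be stamped with timestamp, side, and rep index before being queued for sync.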
Clinician Ghost Settings
"As a clinician, I want to customize the ghost guide’s targets and tolerances for each patient so that the guidance matches their therapy plan and limitations."
Description

Provides clinician-facing controls to configure ghost guide parameters per exercise: target angles by phase, tolerances, tempo, number of warm-up reps, focal joints to emphasize, and safety constraints (e.g., max knee flexion). Includes a preview/simulation mode to review the overlay against sample poses, template libraries for common PT exercises, and per-patient overrides. Supports assignment via the plan builder, audit trail of changes, and export/import of templates across clinics. Integrates with MoveMate’s existing prescription and analytics modules.

Acceptance Criteria
Save Exercise-Level Ghost Parameters
Given a clinician is editing Ghost Settings for an exercise in the clinician console When they set angle targets by phase, tolerances (± degrees), tempo (seconds per phase), warm-up reps count, focal joints, and safety constraints and click Save Then the system validates: angles within 0–180°, tolerances within 1–30°, tempo within 0.2–10.0 s/phase, warm-up reps within 0–5, focal joints from supported set, and safety max >= highest target angle for that joint And invalid fields are highlighted with messages and the save is blocked until corrected And on success the settings are persisted with a timestamp and success confirmation is shown And new assignments of this exercise default to these saved settings
Live Preview/Simulation of Ghost Overlay
Given a clinician opens Preview mode for a configured exercise When they click Play Then a translucent ideal-path silhouette animates for 2 sample reps with angle targets and tolerance bands visible And parameter edits (angles, tolerances, tempo, focal joints) update the preview within 500 ms of change And controls allow Pause/Resume and scrub through frames; displayed tempo matches configured tempo within ±5% And focal joints are visually emphasized; angle readouts reflect the latest configuration
Template Library Management and Versioning
Given a clinician has Template Library access When they create or edit a Ghost Settings template Then required fields must be provided (name, targeted body region, at least one joint target) and names must be unique within a clinic And saving creates a new version (vN+1) with required change note; previous versions remain read-only And Publish/Unpublish toggles template availability in Plan Builder And permissions restrict create/edit/publish to users with Template:Edit; others can view only And search/filter by name, tags, body region returns correct results
Per-Patient Overrides in Plan Builder
Given a clinician is assigning an exercise to a patient in Plan Builder When they apply a Ghost Settings template and override specific fields (e.g., tolerance, warm-up reps) Then overridden fields are clearly marked as Overrides while others inherit from the template And the underlying template remains unchanged And the clinician can reset any field to Inherit to remove the override And the patient app receives and applies the overrides within 60 seconds of save And the override is recorded with patient, exercise, field names, old/new values
Comprehensive Audit Trail for Ghost Settings
Given any create, update, publish/unpublish, override, export, or import action occurs on Ghost Settings or templates When the action is committed Then an immutable audit record is created capturing actor, timestamp (UTC), entity type/id, action, before/after values, change note (if provided), and source app version And authorized users can view chronological history and compare any two versions with field-level diffs And reverting to a prior template version creates a new version containing the prior values and a mandatory reversion note
Cross-Clinic Template Export/Import
Given a clinician selects one or more templates for export When they export Then the system generates a signed JSON file containing schemaVersion, clinicId, template definitions, tags, and media references, excluding any patient PII And when importing that file in a destination clinic, the system validates signature and schema; on conflicts the user can Keep Both (suffixing name) or Map to Existing template And a dry-run shows counts of to-create, to-update, and to-skip; no changes occur until confirmed And on success, imported templates are created with proper ownership, audit records are written, and publish state mirrors source only if the importing user has publish permission
Safety Constraint Enforcement During Patient Session
Given a safety constraint (e.g., max knee flexion 110°) is configured for an assigned exercise When the patient performs reps and the CV estimate exceeds the max by more than a 3° tolerance for over 200 ms Then the app provides immediate visual and haptic warning and does not count the rep And the ghost overlay caps the target visualization at the configured safe limit And a safety event is logged with timestamp, joint, and peak angle for clinician review and analytics
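The debounce in the criterion above — exceed the max by more than 3° for over 200 ms before warning — can be sketched as a small stateful monitor fed per-frame angle estimates:

```python
class SafetyMonitor:
    """Flags a safety violation only after a sustained over-limit excursion.

    Uses the spec's values: the CV angle estimate must exceed the configured
    max by more than 3° for over 200 ms before a warning fires, so single
    noisy frames never trigger haptics or suppress a rep count.
    """

    def __init__(self, max_angle_deg, grace_deg=3.0, hold_ms=200):
        self.limit = max_angle_deg + grace_deg
        self.hold_ms = hold_ms
        self._over_since = None  # timestamp when the excursion began

    def sample(self, angle_deg, t_ms):
        if angle_deg > self.limit:
            if self._over_since is None:
                self._over_since = t_ms
            return (t_ms - self._over_since) > self.hold_ms
        self._over_since = None  # back under limit: reset the timer
        return False
```

On a True result the session would emit the visual/haptic warning, skip the rep, and log the safety event with joint and peak angle.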
On-device Safety & Privacy Guardrails
"As a privacy-conscious patient, I want the ghost guide to work on my device without uploading my video so that my sessions remain private and safe."
Description

Runs pose inference and ghost rendering fully on-device; no raw video is stored or uploaded. Shows a clear camera-in-use indicator and consent gating for first use. Detects low-light, obstruction, and low-confidence tracking states to pause guidance and prompt repositioning, preventing unsafe form. Enforces high-contrast visuals for accessibility, monitors thermals/battery to adapt frame rate, and logs only aggregated adherence metrics compliant with HIPAA/GDPR. Provides fallback to text/image cues when AR overlay quality cannot be maintained.

Acceptance Criteria
On‑Device Pose Inference and Ghost Rendering (No Raw Video Storage/Upload)
Given the device is in Airplane Mode and Wi‑Fi is off When the user starts a Ghost Guide session Then pose inference initializes and ghost overlay renders within 3 seconds without any network requests (0 outbound bytes observed via proxy) And no raw video frames or frame buffers are written to persistent storage (filesystem monitor shows 0 writes to media/temp video paths during session) And stopping the session clears any in‑memory frame buffers within 100 ms And repeating the test with network available still results in 0 outbound requests containing image/video payloads
Camera Indicator and First‑Use Consent Gating
Given it is the first time accessing camera features When the user attempts to start a Ghost Guide session Then a consent modal appears describing on‑device processing, no raw video storage/upload, and data use, with actions: Accept and Decline And selecting Accept records a timestamped consent flag locally and proceeds to the camera And selecting Decline prevents camera activation and returns the user to the prior screen with no camera frames initialized And whenever the camera is active, a persistent on‑screen "Camera On" indicator with accessibility label is visible and cannot be dismissed And if OS camera permission is revoked mid‑session, the camera stops within 300 ms and the user is shown a non‑blocking prompt with a link to system settings to re‑enable
Auto‑Pause on Low Light/Obstruction/Low Confidence
Given the session is active with AR overlay When ambient light metric falls below 15 lux OR tracked‑joint visibility drops below 60% OR average pose confidence < 0.60 for ≥ 1.5 seconds Then guidance pauses within 300 ms, rep counting is suspended, the ghost overlay is hidden, and a reposition/lighting prompt is displayed with haptic feedback And a clear banner indicates "Paused for safety" until conditions recover And when metrics recover above thresholds for 2 continuous seconds, guidance resumes automatically and the banner is dismissed And all pauses are counted and surfaced in the session summary as an aggregate number only
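The pause/resume thresholds above form a hysteresis loop: degraded conditions for ≥ 1.5 s pause guidance, and recovery must hold for 2 continuous seconds before resuming. A minimal sketch of that controller (times in seconds; visibility as a 0–1 fraction):

```python
class PauseController:
    """Hysteresis controller for the auto-pause criterion: pause after 1.5 s of
    degraded tracking, resume only after 2 s of continuous recovery."""

    def __init__(self):
        self.paused = False
        self._bad_since = None
        self._good_since = None

    def update(self, t: float, lux: float, visibility: float,
               confidence: float) -> bool:
        bad = lux < 15 or visibility < 0.60 or confidence < 0.60
        if not self.paused:
            if bad:
                if self._bad_since is None:
                    self._bad_since = t
                if t - self._bad_since >= 1.5:
                    self.paused, self._good_since = True, None
            else:
                self._bad_since = None
        else:
            if not bad:
                if self._good_since is None:
                    self._good_since = t
                if t - self._good_since >= 2.0:
                    self.paused, self._bad_since = False, None
            else:
                self._good_since = None
        return self.paused
```

The asymmetric thresholds (1.5 s to pause, 2 s to resume) are what prevent the "Paused for safety" banner from flickering at the boundary.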
High‑Contrast and Accessible Visuals
Given any background luminance in the live camera view When overlays (angle lines, joints, ghost silhouette) are rendered Then a dual‑edge outline ensures a minimum effective contrast ratio of ≥ 4.5:1 against the sampled background at the overlay boundary And a High‑Contrast Mode toggle is available and, when enabled, increases stroke width by ≥ 25% and enforces color‑blind‑safe palette (no single‑channel hue reliance) And all overlay controls/text support system font scaling up to 200% without truncation or overlap And all interactive elements and indicators have screen reader labels and roles announced correctly by VoiceOver/TalkBack
Thermal/Battery‑Aware Frame‑Rate Adaptation
Given a session is running When device thermal state is ≥ Serious (or equivalent platform API) OR estimated surface temperature proxy > 42 °C OR battery < 15% while not charging Then processing frame rate reduces to 15 FPS within 2 seconds, and UI displays a subtle "Battery/Thermal Save" indicator And if the condition persists for 30 seconds, FPS reduces to 10; if persists for 60 seconds, AR overlay is disabled and the app switches to text/image cues while keeping timing guidance And when thermal state returns to Nominal and battery ≥ 25% for 120 seconds, AR overlay ramps up stepwise (10 → 15 → max 30 FPS) without dropping frames And these adaptations are logged only as aggregated counts per session (no timestamps < day granularity)
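The escalation ladder above (15 FPS immediately, 10 FPS after 30 s, overlay off after 60 s) maps cleanly to a pure function. This sketch omits the surface-temperature proxy and the stepwise ramp-up for brevity; the thresholds are the ones stated in the criterion.

```python
def target_fps(thermal_serious: bool, battery_pct: int, charging: bool,
               seconds_in_condition: float) -> tuple[int, bool]:
    """Return (processing_fps, overlay_enabled) under the thermal/battery rule.

    `seconds_in_condition` is how long the constrained state has persisted.
    """
    constrained = thermal_serious or (battery_pct < 15 and not charging)
    if not constrained:
        return 30, True
    if seconds_in_condition >= 60:
        return 10, False  # AR overlay disabled; fall back to text/image cues
    if seconds_in_condition >= 30:
        return 10, True
    return 15, True
```

Keeping the policy a pure function of observed state makes the aggregated per-session adaptation counts easy to log without timestamps.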
Aggregated, Privacy‑Compliant Adherence Metrics
Given analytics collection is enabled by user consent When a session completes Then only aggregated fields are stored/transmitted: exercise ID (pseudonymized), session date (UTC day), rep count (integer), total guided time (seconds), number of safety pauses, number of fallback events And no raw video frames, thumbnails, per‑frame pose keypoints, exact timestamps (< day), location data, or device identifiers are stored/transmitted And the analytics identifier is a random app‑scoped ID rotated every 30 days; opting out deletes the ID and suppresses all analytics immediately And a user‑initiated data deletion request removes their aggregated records from device and server within 24 hours
Fallback to Text/Image Cues When AR Overlay Degrades
Given a session is running with AR overlay active When overlay quality score < 0.70 for 3 consecutive seconds due to low confidence, low light, obstruction, or thermal throttling Then the app switches to text/image guidance within 500 ms and displays a banner explaining the fallback And rep counting continues only if pose confidence ≥ 0.50; otherwise rep counting is paused with a clear notice And when overlay quality score ≥ 0.80 for 3 consecutive seconds, AR overlay restores automatically and the banner is dismissed And the session summary reports the number of fallback occurrences as an aggregate value only

Smart Regression

When repeated form flags or fatigue patterns appear, the coach suggests an easier variant, reduced range, or lighter resistance—right in the moment. One‑tap accept switches the overlay and targets; the change is recorded for clinician review. Patients keep momentum without unsafe compensations or frustration.

Requirements

Real-time Form & Fatigue Detection
"As a patient performing home exercises, I want the app to detect when my form deteriorates or fatigue sets in so that I can adjust safely without stopping my session."
Description

Implement an on-device computer-vision pipeline that continuously analyzes joint angles, rep tempo, and compensatory patterns to detect sustained form breakdown or fatigue in under 200 ms per frame. Integrate with the existing rep counter to evaluate rolling windows of reps and apply configurable thresholds per exercise protocol. When thresholds are met, emit a regression-trigger event with the detected issue(s), confidence score, and recent metrics snapshot. Operate offline-first to protect privacy and ensure reliability, with automatic sync of summarized events when connectivity returns. Provide clinician-level configuration for sensitivity, minimum reps observed, and per-exercise trigger rules to align with care plans.

Acceptance Criteria
On-Device Real-Time Analysis Latency
Given a supported device and an active exercise session with the on-device CV pipeline and rep counter enabled When analyzing a continuous 60-second set at 30 FPS Then per-frame end-to-end processing time (pose + features + rule evaluation) has p50 <= 120 ms and p95 <= 200 ms, and frames dropped due to latency <= 2%
Rolling-Window Form Breakdown Detection
Given per-exercise config {window_size_reps, min_reps_observed, form_thresholds, fatigue_thresholds, sensitivity} And rep boundaries emitted by the rep counter When the last window_size_reps contain violations meeting/exceeding thresholds for at least min_reps_observed reps Then a single regression-trigger event is emitted within 1 rep of threshold satisfaction with confidence >= sensitivity and issue_types reflecting detected violations
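The rolling-window rule above can be sketched with a fixed-size deque keyed to rep boundaries. This is a simplification: it tracks only a per-rep violated/clean flag, where the real pipeline would carry per-rep metrics.

```python
from collections import deque


def make_window_detector(window_size_reps: int, min_reps_observed: int):
    """Return a per-rep callback implementing the rolling-window trigger:
    fires once the last `window_size_reps` reps contain at least
    `min_reps_observed` reps with threshold violations."""
    window = deque(maxlen=window_size_reps)

    def on_rep(violated: bool) -> bool:
        window.append(violated)
        return (len(window) == window_size_reps
                and sum(window) >= min_reps_observed)

    return on_rep
```

Requiring a full window before evaluating is what enforces the "minimum reps observed" clause, so a single bad first rep cannot trigger a regression event.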
Regression Trigger Event Payload and Schema
Given a regression-trigger is emitted When the event is serialized Then the payload includes {event_id, patient_id, exercise_id, timestamp_utc, issues[], confidence (0..1), window_summary {rep_count, avg_tempo, tempo_trend, joint_angle_stats, flags_count}, device_info, app_version} And each issues[] item contains {code, description, severity, affected_joints[]} And the payload validates against the defined JSON schema and is persisted to local logs
Offline-First Operation and Deferred Sync
Given the device has no network connectivity, When regression events are generated, Then events are stored in a durable local queue (encrypted at rest) and marked unsynced, with no raw video or per-frame keypoints stored for sync
Given connectivity is restored, When sync runs, Then summarized events are transmitted within 30 seconds, retries use exponential backoff on failure, and local records are marked synced upon server acknowledgement
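The exponential-backoff retry schedule mentioned above might look like the following sketch. The base delay, growth factor, and cap are illustrative, not from the spec; the optional "full jitter" variant is a common technique for avoiding synchronized retries across devices.

```python
import random


def backoff_delays(base: float = 1.0, factor: float = 2.0, cap: float = 300.0,
                   retries: int = 6, jitter: bool = False):
    """Delays (seconds) before each retry attempt, capped at `cap`."""
    delays = []
    for attempt in range(retries):
        d = min(cap, base * factor ** attempt)
        if jitter:
            d = random.uniform(0, d)  # full-jitter variant
        delays.append(d)
    return delays
```

Marking a queued record synced only on server acknowledgement, combined with this schedule, gives at-least-once delivery; the server side would deduplicate by event_id.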
Per-Exercise Threshold Application and Rep Counter Integration
Given clinician-configured per-exercise rules are available on-device When the user starts a configured exercise Then active thresholds and window sizes match that exercise’s configuration and rule evaluation aligns to rep counter start/end boundaries And evaluation state resets on exercise change or rest periods > 60 seconds
Clinician Configuration Update Propagation
Given clinician-level settings {sensitivity, min_reps_observed, per-exercise rules} are hosted on the server When the device is online and a newer configuration exists Then the app fetches and applies the new configuration within 5 minutes And subsequent detections use the updated settings without requiring session restart; if offline, the last-known configuration is used until next successful sync
Confidence Debounce and Duplicate Event Suppression
Given a regression event for a specific issue_type was emitted When the same issue persists within debounce_reps (configurable) Then no duplicate regression-trigger is emitted until debounce_reps have elapsed or the issue_type changes And events with computed confidence below the configured sensitivity threshold are not emitted
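The debounce rule above (one event per issue_type until `debounce_reps` reps elapse or the issue_type changes, and nothing below the sensitivity threshold) fits in a small stateful class. A sketch:

```python
class Debouncer:
    """Suppress duplicate regression-trigger events per the criterion above."""

    def __init__(self, debounce_reps: int, sensitivity: float):
        self.debounce_reps = debounce_reps
        self.sensitivity = sensitivity
        self._last_issue = None
        self._reps_since = 0  # reps since the last emitted event

    def should_emit(self, issue_type: str, confidence: float) -> bool:
        self._reps_since += 1
        if confidence < self.sensitivity:
            return False
        if issue_type != self._last_issue or self._reps_since > self.debounce_reps:
            self._last_issue, self._reps_since = issue_type, 0
            return True
        return False
```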
Contextual Regression Suggestions Engine
"As a patient, I want tailored regression suggestions when I struggle so that I can keep momentum without pain or frustration."
Description

Create a suggestion engine that selects an easier variant, reduced range of motion, or lighter resistance based on detected issues, patient profile, clinician protocol constraints, and available equipment. Rank 1–3 options using rules plus learned heuristics, and include a concise rationale (e.g., “knee valgus detected—switch to wall-supported squat”). Ensure suggestions respect contraindications, minimum effective dose, and progression ladders. Pull variants and parameters from the exercise library with mappings for cues, target ROM, rep tempo, and resistance levels. Provide safe fallbacks when limited data is available.

Acceptance Criteria
Real-time trigger on repeated form faults
- Given an active exercise session with CV form/fatigue signals and the patient profile and protocol loaded
- When the same form fault or fatigue flag occurs ≥2 times within the last 5 reps OR a fatigue threshold is crossed
- Then the engine generates regression suggestions within 300 ms of the last qualifying frame
- And the suggestions are contextual to the current exercise (same movement family)
Generation and ranking of top 1–3 regression options
- Given the exercise library regression mappings and available equipment for the patient
- When a suggestion is generated
- Then the engine outputs 1–3 options, each typed as one of [easier_variant, reduced_rom, lighter_resistance]
- And options are feasible with the patient's available equipment
- And each option includes score_rule, score_heuristic, combined_score, and rank where rank 1 has the highest combined_score
- And no two options are duplicates by variant_id + parameters
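The rules-plus-heuristics ranking above could be sketched as a weighted combination with deduplication. The weights and candidate dictionary shape are illustrative assumptions, not part of the spec:

```python
def rank_options(candidates, top_n: int = 3, w_rule: float = 0.6,
                 w_heuristic: float = 0.4):
    """Dedupe by (variant_id, parameters), combine scores, return ranked top N.

    Each candidate dict carries variant_id, params, score_rule, score_heuristic.
    """
    seen, ranked = set(), []
    for c in candidates:
        key = (c["variant_id"], tuple(sorted(c["params"].items())))
        if key in seen:
            continue  # no duplicates by variant_id + parameters
        seen.add(key)
        c = dict(c, combined_score=w_rule * c["score_rule"]
                 + w_heuristic * c["score_heuristic"])
        ranked.append(c)
    ranked.sort(key=lambda c: c["combined_score"], reverse=True)
    top = ranked[:top_n]
    for i, c in enumerate(top, start=1):
        c["rank"] = i  # rank 1 = highest combined_score
    return top
```

Hard constraints (contraindications, min effective dose) would be applied as a filter before this ranking step, never as score penalties, so an unsafe option can never win on score.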
Enforcement of safety, contraindications, and progression rules
- Given patient contraindications, clinician protocol constraints (including min_effective_dose), and progression ladder
- When computing options
- Then no option violates any contraindication or protocol constraint
- And no option reduces intensity below min_effective_dose
- And no option regresses more than 1 step on the progression ladder unless acute_fault_severity >= high
- And no option increases external load compared to the current setting
Concise, issue-linked rationale for each suggestion
- Given detected issue codes (e.g., KNEE_VALGUS) and the chosen adjustments
- When options are output
- Then each option includes rationale_text <= 120 characters referencing the top detected issue and the adjustment
- And includes rationale_codes with at least issue_code and rule_id
- And if KNEE_VALGUS is detected during a squat, at least one option mentions "knee valgus" and proposes a supported/assisted squat variant if available
Complete parameter mapping from exercise library
- Given library mappings for candidate variants
- When an option is output
- Then it includes: variant_id, cues[], target_rom[min,max] (deg or % of baseline), rep_tempo (e.g., 3-1-2), resistance_level (absolute or band color), expected_reps
- And all values match library definitions and pass schema validation (no null/empty fields)
- And units are explicit and consistent across options
Safe fallback behavior under limited or low-confidence data
- Given CV_confidence < 0.5 OR equipment data unknown OR library mapping missing
- When a suggestion is requested
- Then the engine returns exactly one safe fallback option from the safety table (e.g., wall-supported/bodyweight/assisted pattern)
- And sets fallback=true and reason="limited_data"
- And if constraints exclude all options, returns no options with error_code="CONSTRAINTS_EXCLUDE_ALL" and a human-readable advisory
One-tap accept: apply change and record for clinician review
- Given a user accepts an option (rank 1–3)
- When the accept event is received
- Then the engine emits a change_set including before/after parameters, option_id, rank, rationale, timestamp, patient_id, exercise_id, session_id, correlation_id
- And persists the change to the clinician review queue within 1 second with idempotency (stable change_id)
- And updates in-session targets (overlay ROM, tempo, resistance) within 200 ms
One‑Tap Accept & Instant Overlay Switch
"As a patient mid-set, I want to accept a suggested regression with one tap so that I don’t break my flow or navigate menus."
Description

Display a non-intrusive, in-session prompt with a single primary action to accept the suggested regression. On accept, switch the visual overlay, coaching cues, targets, and rep counter parameters without resetting the session or losing progress. Provide subtle haptic/voice confirmation and a dismiss/decline option. Ensure the UI is thumb-reachable, large enough for motion contexts, and accessible (WCAG AA). Maintain state continuity for analytics and logging, and allow quick revert to the prior target when conditions improve.

Acceptance Criteria
Instant Overlay Switch on Accept
Given an in-session regression suggestion is displayed during an active set When the user taps the primary Accept button Then the exercise overlay, coaching cues, targets, and rep counter parameters switch within 300 ms And the session ID remains unchanged And elapsed session time, total completed reps, and current set number are preserved And no camera/sensor stream is restarted or reinitialized
Rep Counter Continuity and Parameter Update
Given the rep counter is active on the current exercise variant When the user accepts the regression Then the pre-accept total rep count remains unchanged And the next rep is evaluated using the new thresholds (range of motion, tempo, resistance) starting at the next rep boundary And the session summary separates counts before and after the regression with timestamps
Haptic and Voice Confirmation
Given system haptics are enabled When the user accepts the regression Then a single light haptic is emitted within 200 ms of the tap And if voice feedback is enabled and the device is not in silent mode, a TTS message "Regression applied" plays within 1 s And screen readers announce "Regression applied" and the new variant name immediately
Dismiss/Decline and Rate Limiting
Given a regression suggestion is displayed When the user taps Dismiss or Decline Then no changes are applied to overlay, cues, targets, or counters And the same suggestion will not reappear for at least 120 seconds or 10 subsequent reps, whichever occurs first And no more than 3 regression prompts are shown per 10-minute session
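The rate-limiting rules above (a dismissed suggestion stays suppressed until 120 s or 10 reps have elapsed, whichever comes first, and at most 3 prompts per 10-minute window) can be sketched as follows. The class and method names are illustrative:

```python
class PromptLimiter:
    """Rate limiter for in-session regression prompts per the criterion above."""

    MAX_PROMPTS, WINDOW_S = 3, 600       # max prompts per 10-minute window
    SUPPRESS_S, SUPPRESS_REPS = 120, 10  # dismissal cool-down

    def __init__(self):
        self.shown = []      # timestamps (s) of prompts shown
        self.dismissed = {}  # suggestion_id -> (t_dismissed, rep_count_at_dismissal)

    def can_show(self, suggestion_id: str, t: float, rep_count: int) -> bool:
        recent = [s for s in self.shown if t - s < self.WINDOW_S]
        if len(recent) >= self.MAX_PROMPTS:
            return False
        if suggestion_id in self.dismissed:
            t0, r0 = self.dismissed[suggestion_id]
            # Blocked only while BOTH limits are still running; the first one
            # to elapse ("whichever occurs first") lifts the suppression.
            if t - t0 < self.SUPPRESS_S and rep_count - r0 < self.SUPPRESS_REPS:
                return False
        return True

    def record_shown(self, t: float):
        self.shown.append(t)

    def record_dismissed(self, suggestion_id: str, t: float, rep_count: int):
        self.dismissed[suggestion_id] = (t, rep_count)
```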
Thumb-Reachable Tap Targets
Given the app is in portrait mode on a smartphone When the regression prompt appears Then the primary Accept button is within the lower 40% of the screen and respects safe areas And all tappable controls in the prompt have a minimum target size of 44x44 pt (iOS) or 48x48 dp (Android) And the prompt does not obstruct the exercise focus region by more than 20% of its area
WCAG AA Accessibility for Prompt
Rule: All prompt text meets WCAG AA contrast ≥ 4.5:1 (normal text) and ≥ 3:1 (large text)
Rule: All actionable elements expose accessible names and roles to screen readers
Rule: Focus order is logical; the Accept button receives initial focus when the prompt opens
Rule: The prompt is operable via screen reader gestures and external keyboard (Tab/Enter) where supported
Rule: The prompt respects system text scaling up to 200% without truncation or overlap; no information is conveyed by color alone
State Continuity, Logging, and Quick Revert
Given a regression was accepted Then an analytics event is recorded with previous_variant_id, new_variant_id, timestamp, trigger_reason, and session_id And the revert control is available within one tap from the prompt or overflow menu When the user triggers revert Then the previous overlay, cues, targets, and rep counter parameters are restored within 300 ms without resetting session state And a revert event is logged with the same identifiers and cause And if improved-form conditions persist for 5 consecutive reps, a non-intrusive "Resume target" suggestion appears with the same one-tap behavior
Clinician Review Trail & Audit Logging
"As a clinician, I want a clear log of when and why regressions occurred so that I can adjust the plan and document progress."
Description

Record each regression event with timestamp, trigger reason(s), confidence, pre/post exercise variant, parameter changes (ROM, resistance, tempo), patient response (accepted/declined), and immediate outcomes (subsequent form quality). Surface these logs in the clinician dashboard timeline with filtering and export. Tag events to the active care plan and session for context. Ensure data integrity, secure storage, and HIPAA-compliant access controls. Support clinician notes and overrides linked to specific events for longitudinal decision-making.

Acceptance Criteria
Event Capture: Regression Trigger Logged with Full Context
- Given a live exercise session with Smart Regression enabled and a regression suggestion is displayed, When the patient accepts or declines the suggestion, Then a single regression event is persisted with fields: event_id, timestamp (ISO 8601 UTC), patient_id, exercise_id, care_plan_id, session_id, trigger_reasons [list], model_confidence (0.0–1.0), pre_variant, post_variant (null if declined), parameter_changes {rom, resistance, tempo}, patient_response (accepted|declined), decision_latency_ms, immediate_outcome {window_reps, form_quality_score 0–100}, device_context {app_version, offline_flag}.
- Given the patient taps Accept, When the change is applied, Then the post_variant and parameter_changes reflect the applied overlay and targets and the event write completes within 2 seconds of the tap under normal connectivity.
- Given the device is offline, When a regression event occurs, Then the event is queued locally with the original timestamp and synced within 60 seconds of connectivity restoration without loss or duplication.
- Given duplicate taps or repeated suggestions for the same decision within 5 seconds, When events are written, Then idempotency prevents more than one event (matched by decision_id) from being stored.
Timeline Display and Filtering in Clinician Dashboard
- Given a clinician opens a patient’s dashboard timeline, When filter(s) are applied (date range, care_plan_id, session_id, trigger_reason, patient_response, variant-change present), Then only matching regression events are shown sorted by timestamp descending.
- Given events are listed, When an event row/card is rendered, Then it displays at minimum: local timestamp, trigger_reasons, model_confidence (rounded to 2 decimals), pre_variant → post_variant, parameter_changes summary (ΔROM/Δresistance/Δtempo), patient_response, and immediate_outcome form_quality_score.
- Given a clinician selects an event, When the details panel opens, Then all stored fields and any linked notes/overrides are visible within 300 ms after click for datasets up to 1,000 events.
- Given pagination is needed, When the clinician scrolls, Then the next 50 events load within 2 seconds.
Export of Regression Events for Audit
- Given a clinician with access selects Export on the timeline, When format (CSV or JSON) and date range/filters are chosen, Then a file is generated within 5 seconds containing all fields per event including event_id, care_plan_id, session_id, and any notes/overrides.
- Given CSV export, When the file is opened, Then columns use consistent headers, values are normalized (ISO timestamps, numeric confidences 0–1, units included for ROM/resistance/tempo), and row counts match the filtered timeline.
- Given an export is performed, When auditing the system, Then an access log entry records user_id, patient_id, timestamp, filter summary, and file format.
Access Control and HIPAA Security for Audit Logs
- Given a user is not assigned to the patient’s care team with Clinician role, When they attempt to view or export regression events, Then access is denied (HTTP 403 or equivalent) and the attempt is logged.
- Given an authorized clinician is authenticated, When they access regression logs, Then all data are transmitted over TLS 1.2+ and retrieved from storage encrypted at rest (AES-256 or equivalent), verified by security configuration tests.
- Given a signed-in clinician is inactive, When 15 minutes elapse, Then the session is locked and re-authentication is required before viewing logs.
- Given any regression event or export is accessed, When reviewing audit logs, Then an entry exists with user_id, patient_id, action, timestamp, and outcome.
Data Integrity and Immutability of Audit Trail
- Given a stored regression event, When a modification is attempted, Then the original record remains immutable; any change is recorded as a new append-only revision linked by event_id with revision metadata.
- Given daily integrity verification runs, When hashes/checksums are validated, Then 100% of events pass integrity checks or an alert is raised within 5 minutes identifying the affected event_ids.
- Given API or database access, When querying an event, Then a version/ETag and integrity hash are returned to support tamper-evidence.
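One common way to provide the tamper-evidence described above is a hash chain: each record's hash covers the previous record's hash, so editing any stored event invalidates every later hash. The spec does not mandate this mechanism; the sketch below is one plausible realization.

```python
import hashlib
import json


def chain_hash(prev_hash: str, event: dict) -> str:
    """Hash of one record, chained to its predecessor via prev_hash."""
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + canonical).encode()).hexdigest()


def verify_chain(events, hashes, genesis: str = "0" * 64) -> bool:
    """Daily integrity run: recompute the chain and compare stored hashes."""
    prev = genesis
    for event, h in zip(events, hashes):
        if chain_hash(prev, event) != h:
            return False  # this event (or an earlier one) was tampered with
        prev = h
    return len(events) == len(hashes)
```

The per-record hash also serves as the integrity value returned alongside a version/ETag on query.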
Clinician Notes and Overrides Linked to Events
- Given a clinician views an event, When they add a note, Then the note is saved with author_id, timestamp, and content and appears immediately on the event; edits create a new version preserving prior content; deletions are soft-deletes with reason.
- Given a clinician determines the regression suggestion was incorrect, When they mark an override, Then the event displays override_status (overridden) with reason and optional updated guidance, and the override is included in exports and timelines.
- Given notes or overrides exist, When filters are applied for “overridden only” or “has notes,” Then the timeline returns only matching events.
Care Plan and Session Tagging and Cross-Linking
- Given a regression event occurs during an active session under an active care plan, When the event is stored, Then it is linked to the correct session_id and care_plan_id; clicking either link from the event navigates to the corresponding session or care plan view.
- Given no active care plan is present at the event timestamp, When the event is stored, Then it is flagged unassigned and appears in an “Assignment Needed” queue for clinicians.
- Given a care plan is revised after the event, When viewing the event, Then the link references the plan version effective at the event timestamp, preserving historical context.
- Given timeline filters by care plan or session, When applied, Then only events with matching ids are returned.
Safety Guardrails & Contraindication Rules
"As a clinician, I want the system to enforce safety rules during automatic regressions so that patients don’t perform contraindicated movements."
Description

Implement a rule layer that validates every suggested regression against patient-specific contraindications, surgical protocols, pain flags, and clinician-defined boundaries. Enforce hard limits (e.g., ROM caps, load ceilings, movement bans) and soft preferences (e.g., favor isometrics early post-op). If no safe regression exists, pause with safety guidance and prompt to stop or rest. Provide an admin UI for clinicians to predefine allowed regressions and escalation paths per exercise. Log any guardrail denials for visibility.

Acceptance Criteria
Block Regression That Exceeds Post-Op ROM Cap
Given a patient has a clinician-defined knee flexion ROM cap of 90° And Smart Regression is triggered during bodyweight squats When a candidate regression requires > 90° knee flexion Then the system suppresses the suggestion and it is not shown as selectable And a safety banner appears within 200 ms stating "Exceeds ROM cap (90°)" And an audit event is recorded with patientId, exerciseId, ruleType="ROM_CAP", configuredThreshold=90, candidateRequirementDegrees, suggestionId, timestamp, outcome="DENIED"
Enforce Load Ceiling and Movement Bans on Suggested Regression
Given a patient has a load ceiling of 10 kg and a movement ban of "hip adduction" And the current exercise uses resistance equipment tracked by the app When Smart Regression evaluates a candidate variant Then if estimatedLoad > 10 kg OR variant includes a banned movement, the variant is excluded from suggestions And the next safest allowed variant under the ceiling is ranked first And if no allowed variant exists, the Safety Pause flow is invoked And an audit event is recorded with ruleType="LOAD_CEILING" or "MOVEMENT_BAN", configuredThresholds, violationDetails, suggestionId, outcome
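The hard-guardrail evaluations in the criteria above (ROM caps, load ceilings, movement bans, safest-first ranking, and the Safety Pause when nothing survives) can be sketched as a single filter pass. The candidate dictionary shape is an illustrative assumption:

```python
def filter_candidates(candidates, rom_caps, load_ceiling_kg, banned_movements):
    """Apply hard guardrails; return (allowed, denials, status).

    rom_caps maps joint name -> max degrees; denials carry the audit rationale.
    """
    allowed, denials = [], []
    for c in candidates:
        reasons = []
        for joint, cap_deg in rom_caps.items():
            if c["required_rom"].get(joint, 0) > cap_deg:
                reasons.append({"ruleType": "ROM_CAP", "joint": joint,
                                "configuredThreshold": cap_deg})
        if c["estimated_load_kg"] > load_ceiling_kg:
            reasons.append({"ruleType": "LOAD_CEILING",
                            "configuredThreshold": load_ceiling_kg})
        if set(c["movements"]) & set(banned_movements):
            reasons.append({"ruleType": "MOVEMENT_BAN"})
        if reasons:
            denials.append({"variant_id": c["variant_id"], "reasons": reasons})
        else:
            allowed.append(c)
    if not allowed:
        return [], denials, "SAFETY_PAUSE"  # no safe regression exists
    allowed.sort(key=lambda c: c["estimated_load_kg"])  # safest (lightest) first
    return allowed, denials, "OK"
```

Every entry in `denials` maps directly onto the audit-log fields required below (ruleType, configuredThresholds, violationDetails).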
Honor Pain Flag Threshold During Regression
Given the patient flags pain >= 5/10 during the set And the painful region is tagged (e.g., left knee) When Smart Regression evaluates candidate variants Then any variant that increases load on the flagged region is excluded And the top suggestion favors isometric or reduced-ROM options when available And one-tap accept is disabled for any candidate violating pain rules And a tooltip appears within 300 ms explaining "Pain flag active — gentle options shown"
Respect Soft Preference to Favor Isometrics Early Post-Op
Given soft preference "favor isometrics" is enabled for weeks 0–4 post-op And at least one isometric variant passes all hard guardrails When a regression is triggered Then the isometric variant is ranked position 1 in the suggestion list And non-isometric variants appear below it And if no isometric is available, the system logs preference_unavailable and proceeds with the safest non-isometric option
No Safe Regression Triggers Safety Pause and Guidance
Given all candidate regressions are eliminated by hard guardrails When regression is triggered Then the session pauses and a modal appears within 300 ms with actions: "Pause set", "End session", "Contact clinician" And on-screen safety guidance instructs the patient to stop and rest And one-tap accept is disabled until an action is taken And a clinician alert appears on the patient's dashboard within 1 minute
Clinician Admin UI to Define Guardrails and Escalation Paths
Given a clinician opens the Safety Guardrails editor for a patient's exercise When they configure ROM caps (per joint in degrees), load ceiling (kg or %BW), movement bans (taxonomy), allowed regression variants, soft preferences, and escalation path order Then the UI validates entries and blocks save if constraints produce an empty escalation path, highlighting conflicting rules And the clinician can preview the first 3 fallback steps generated from the rules And on save, a new rule set version is created with versionId, author, timestamp, and change summary And the changes become active for the next new session; if "Apply now" is toggled, they take effect within 10 seconds for the current session
Audit Log Captures Guardrail Denials and Rationale
Given any suggestion is denied by guardrails or a Safety Pause is invoked When the event occurs Then a log entry is created with fields: patientId, sessionId, exerciseId, suggestionId, ruleId, ruleType, configuredThresholds, violationDetails, decision ("DENIED"|"PAUSED"), timestamp (UTC), and clientLatencyMs And entries are viewable in the clinician dashboard, filterable by patient and date range, and exportable as CSV And no video frames are stored in the log
Post-Regression Recovery & Progression Nudges
"As a patient, I want the app to nudge me back toward my original targets when I’m ready so that I keep progressing without overexertion."
Description

After a regression is accepted, continuously monitor form quality and fatigue signals to detect recovery. When stability criteria are met, prompt a return to the original target or an intermediate step-up. Collect a 2–3 second patient check-in (e.g., pain better/same/worse) to inform next steps. Update the session plan and future suggestions using these responses, and summarize outcomes to the clinician. Ensure nudges are rate-limited and context-aware to avoid interruption fatigue.

Acceptance Criteria
Detecting Recovery After Accepted Regression
Given a regression was accepted in the current session, When at least 8 subsequent reps are recorded for the regressed variant, Then the system must compute stability metrics on a sliding window of the last 12 reps.
Given stability computation is active, When the last 12 reps contain ≤1 form flag, ≤1 fatigue flag, ≥80% of reps within the target ROM band, and a rep-tempo coefficient of variation ≤15%, Then mark the patient as Recovered for this exercise.
Given Recovered is true, When 2 or more new form flags occur within any next 5 consecutive reps, Then revoke Recovered and resume monitoring until criteria are re-met.
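The stability check above is a handful of aggregate conditions over the last 12 reps. A minimal sketch, assuming each rep is summarized as a small dict (field names illustrative):

```python
import statistics


def is_recovered(reps) -> bool:
    """Recovery rule: over the last 12 reps, at most 1 form flag and 1 fatigue
    flag, >= 80% of reps in the target ROM band, tempo CV <= 15%."""
    window = reps[-12:]
    if len(window) < 12:
        return False  # not enough reps observed yet
    if sum(r["form_flag"] for r in window) > 1:
        return False
    if sum(r["fatigue_flag"] for r in window) > 1:
        return False
    if sum(r["in_rom_band"] for r in window) / len(window) < 0.80:
        return False
    tempos = [r["tempo_s"] for r in window]
    mean = statistics.mean(tempos)
    cv = statistics.pstdev(tempos) / mean if mean else 1.0
    return cv <= 0.15
```

The coefficient of variation (standard deviation over mean) makes the tempo-consistency test scale-free, so the same 15% threshold works for slow isometric-style tempos and fast reps alike.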
Rate-Limited, Context-Aware Progression Nudge Timing
Given Recovered is true, When the user enters a rest window (end-of-set detected or ≥3 seconds since the last rep), Then display a progression nudge within 5 seconds.
Given any progression nudge was shown, When timing additional nudges, Then enforce ≥3 minutes between progression nudges and a maximum of 3 progression nudges per session.
Given the user is mid-rep, in the first 3 reps of a new set, recording a timed hold, or a progression nudge was shown in the last 60 seconds, Then suppress progression nudges.
Given the last two progression nudges were explicitly dismissed, When evaluating further nudges, Then suppress additional progression nudges for 10 minutes or until a ≥10% improvement in ROM consistency is detected versus the prior window.
One-Tap Progression Acceptance Switches Targets and Logs
Given a progression nudge is visible, When the patient taps Accept, Then update the exercise overlay and targets to the suggested level within 1 second.
Given Accept was tapped and connectivity is available, When persisting the event, Then record timestamp, previous level, new level, stability metrics, and check-in response, and sync to the clinician timeline within 30 seconds.
Given Accept was tapped and connectivity is unavailable, When persisting the event, Then queue the full event locally and sync within 30 seconds of connectivity restoration.
Intermediate Step-Up Recommendation Before Full Return
Given Recovered is true but the original target exceeds current performance by >10% in ROM, load, or tempo, When generating a progression suggestion, Then offer an intermediate step at approximately 50% of the gap toward the original target (rounded to the nearest feasible unit). Given multiple feasible step-ups exist, When presenting the nudge, Then show the recommended intermediate step as primary and the original target as a secondary option. Given the patient accepts the intermediate step, When applying changes, Then update targets accordingly within 1 second and continue monitoring with the same recovery thresholds.
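The 50%-of-gap rule can be sketched as a small helper, where `unit` is the smallest feasible increment for the metric (e.g., 5° of ROM); names are illustrative:

```python
def intermediate_target(current, original, unit):
    """If the original target exceeds current performance by >10%, return an
    intermediate step at ~50% of the gap, rounded to the nearest feasible unit.
    Returns None when the gap is small enough to offer the original directly."""
    if original <= current * 1.10:
        return None
    midpoint = current + 0.5 * (original - current)
    return round(midpoint / unit) * unit
```

For example, with current ROM of 80° against an original target of 120° and 5° increments, the suggested intermediate step is 100°, shown as the primary option with the original 120° as secondary.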
Ultra-Short Patient Check-In Captured and Applied
Given any progression nudge is displayed, When it appears, Then present a 3-option check-in (Pain: Better / Same / Worse) with optional free text up to 100 characters. Given the patient selects a check-in option, When Accept or Dismiss is tapped, Then store the check-in with timestamp and link it to the nudge event and current exercise. Given the check-in result is Worse, Then suppress progression nudges for the remainder of the session for that exercise and present maintain/regress guidance; Given Better is selected, Then allow progression; Given Same is selected, Then allow progression but cap the step size to the intermediate level. Given no check-in selection is made within 10 seconds, When the nudge times out, Then record No Response and do not reprompt for at least 2 minutes.
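The check-in gating reduces to a mapping from response to progression behaviour. A sketch; treating a timeout (No Response) as no progression this time is an assumption the spec leaves open:

```python
def apply_checkin(response):
    """Map the 3-option pain check-in to progression behaviour.
    Returns (allow_progression, cap_to_intermediate, suppress_for_session)."""
    if response == "Worse":
        return (False, False, True)   # suppress nudges; show maintain/regress guidance
    if response == "Same":
        return (True, True, False)    # allow, but cap step to the intermediate level
    if response == "Better":
        return (True, False, False)   # allow full progression
    return (False, False, False)      # No Response: record it, reprompt >=2 min later
```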
Session Plan Update and Future Suggestions Adaptation
Given Accept or Dismiss was taken on a progression nudge, When the decision is finalized, Then update the current session plan (sets/reps/ROM/load targets) within 2 seconds to reflect the choice. Given the session ends, When computing next-session suggestions, Then incorporate the latest performance metrics and check-in trend across the last 3 sessions to adjust baseline difficulty up or down by one step if three consecutive Better or Worse check-ins are recorded. Given changes were applied, When syncing, Then display the updated plan in the patient view and clinician dashboard within 30 seconds of sync.
Clinician Outcome Summary for Post-Regression Path
Given a session included an accepted regression, When the session completes, Then generate a summary including: time of regression acceptance, time to recovery, number of progression nudges shown, accepted/declined counts, final target reached, and all check-in responses with timestamps. Given the summary is generated, When presented to the clinician, Then it must match the underlying event log for counts (100% match) and timestamps (±1 second). Given the clinician opens the patient’s session summary, When data is requested, Then the summary is available within 2 minutes after session completion on both mobile and web clinician dashboards.

Form Insight

Instant 5‑second replays gain annotated callouts that show exactly what drifted (e.g., knee valgus 12°, trunk tilt 8°) and why it matters in plain language. Brief micro‑lessons reinforce the correction and track which cues resolve the issue fastest, reducing repeat errors and educating patients as they move.

Requirements

Instant 5-second Replay Buffer
"As a patient, I want an instant replay of my last rep so that I can immediately see what I just did and adjust on the next attempt."
Description

Continuously buffers the last 5–7 seconds of video during exercises and exposes an immediate replay when a rep completes or when the user taps a replay affordance. Integrates with the existing CV pipeline to mark the rep start/end, auto-trim the clip, and align the video timeline with detected kinematic events. Supports scrub, pause, and frame-by-frame stepping, with battery-aware capture, offline operation, and graceful degradation on low-end devices. Provides a simple SDK surface for the annotation engine to overlay callouts on the replay. Handles camera permissions, device orientation, front/rear camera, and safe storage/ephemeral memory for privacy.

Acceptance Criteria
Auto Replay on Rep Completion
Given an active exercise session with camera permissions granted and continuous buffering enabled And the CV pipeline emits rep_start and rep_end timestamps for a completed rep When the rep_end event is received Then an immediate replay overlay is presented within 500 ms on baseline devices and within 1500 ms on low-end devices And the clip is auto-trimmed to [rep_start - 0.5 s, rep_end + 0.5 s], bounded by available buffer (min 2 s, max 7 s) And kinematic event markers in the timeline align to visible frames within ±50 ms And ongoing capture and CV processing continue uninterrupted in the background
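The auto-trim bounds in the criterion above can be sketched as a clamping function. Keeping the end of the rep when shrinking to the 7 s cap is an assumption the spec does not pin down:

```python
def trim_bounds(rep_start, rep_end, buffer_start, buffer_end):
    """Trim a replay clip to [rep_start - 0.5, rep_end + 0.5], clamped to the
    available buffer and bounded to a 2-7 s duration. Times are seconds."""
    start = max(rep_start - 0.5, buffer_start)
    end = min(rep_end + 0.5, buffer_end)
    dur = end - start
    if dur > 7.0:
        start = end - 7.0  # cap at 7 s, preserving the end of the rep
    elif dur < 2.0:
        start = max(buffer_start, end - 2.0)  # pad to the 2 s minimum
    return start, end
```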
Manual Replay Affordance and Orientation Handling
Given an active exercise session with continuous buffering When the user taps the Replay button Then the last 5 s of buffered video is displayed within 300 ms on baseline devices and within 1000 ms on low-end devices And if a rep is in progress, the clip ends at tap time and starts at max(tap_time - 5 s, buffer_start) And the replay respects current device orientation and selected camera (front/rear) without aspect distortion And rotating the device during replay re-renders the video to the new orientation within 250 ms And showing/dismissing the replay does not interrupt ongoing buffering or CV
Playback Controls: Scrub, Pause, Frame-Step
Given a visible replay clip When the user drags the scrubber Then the video seeks with end-to-end latency ≤80 ms on baseline devices and ≤200 ms on low-end devices And releasing the scrubber updates the current timestamp and highlights the nearest event marker correctly When the user taps Pause Then playback halts on the current frame within 50 ms and Resume continues from that frame When the user taps Step Forward or Step Back Then exactly one frame advances or rewinds per tap at the clip frame rate (e.g., ~33.3 ms steps at 30 fps) with no skipped or duplicated frames
Battery-Aware Capture and Graceful Degradation
Given an active exercise session When battery level ≤20% and the device is not charging Then capture scales to ≤720p and ≤24 fps automatically, and replay render target scales proportionally And CPU/GPU utilization attributable to buffering and replay averages ≤35% over a 60 s window on baseline devices And CV-derived rep count and event timing remain within 5% of nominal accuracy When thermal state is critical or frame drops exceed 10% over 10 s Then the rolling buffer reduces to 5 s and replay resolution reduces by up to 50% to preserve real-time capture
Offline Operation and Ephemeral Privacy
Given the device has no network connectivity When a replay is requested Then all replay functionality operates without network calls or blocking spinners Given any replay is displayed Then no video frames are written to persistent storage; clips remain in RAM only and are zeroed within 60 s of replay dismissal or app backgrounding (whichever occurs first) And on next app launch after a crash or force-close, the app's temp/sandbox directories contain no files matching the replay buffer pattern
Annotation SDK Surface for Overlays
Given a replay is generated When the annotation engine subscribes to the Replay API Then the SDK fires onReplayReady(clip_id, start_ts, end_ts, frame_rate, event_markers[]) within 100 ms of replay render start And the SDK exposes drawOverlay(layer_id, timestamp, primitives[]) that renders at ≥30 fps with end-to-end latency ≤50 ms on baseline devices And overlays are time-synchronized so a draw at timestamp t maps to the corresponding video frame within ±33 ms And overlays do not persist to the stored clip and are cleared when the replay is dismissed
Camera Permission Handling and Safe Fallback
Given camera permission is not yet granted When the user starts an exercise session Then the app requests camera permission using the platform-standard prompt exactly once per session attempt And if denied or restricted, a non-blocking inline explainer and Retry button are shown within 200 ms, and no camera frames are captured When permission is granted Then continuous buffering begins within 500 ms without requiring an app restart And all subsequent replay features behave as specified
Auto Biomechanical Callouts
"As a patient, I want clear on-screen numbers showing what drifted during my rep so that I know exactly how my form was off."
Description

Detects and overlays quantitative form deviations on the replay (e.g., knee valgus angle, trunk tilt, depth, tempo), including numeric values, directionality, and confidence. Calibrates to user height and camera pose using lightweight on-device estimation to improve angle accuracy without markers. Supports multi-joint tracking, occlusion handling, and thresholds per exercise, with visual styles that ensure legibility on small screens. Exposes a rules/config layer so clinicians can set which metrics are monitored per protocol and defines safe ranges and alerting logic. Works fully on-device with real-time inference and queues results to the replay buffer for synchronized overlays.

Acceptance Criteria
Accurate Metric Overlay with Directionality and Confidence
Given a configured exercise session on a supported device and adequate lighting (≥150 lux) When the user performs a rep detected by the system Then the overlay displays for each monitored metric: numeric value with units, directionality tag, and confidence as a 0–100% value And Then angle metrics have mean absolute error ≤5° versus labeled reference for visible joints; tempo measurements have error ≤5% of true duration And Then unmonitored metrics are not displayed; metrics with confidence <60% appear dimmed and are excluded from alerting And Then overlay updates at ≥24 fps with no more than 1 frame of stutter per 30 seconds
Calibration to User Height and Camera Pose
Given a user profile with entered height and a camera pose within yaw −45° to +45° and pitch −15° to +15° When the on‑device calibration runs due to first‑time setup or camera movement Then calibration completes in ≤2 seconds and estimates camera pose within ±5° and reconciles user height within ±3 cm of the entered value And Then with calibration applied, joint‑angle MAE is reduced by ≥20% versus uncalibrated baseline or achieves MAE ≤5° (either condition satisfies this criterion) for heights 150–200 cm And Then if calibration confidence <60%, the system reverts to defaults, displays an “uncalibrated” state, and suppresses precision‑dependent alerts
Multi‑Joint Tracking and Occlusion Handling
Given bilateral tracking of shoulders, elbows, hips, knees, and ankles When self‑occlusion affects up to 40% of a limb for ≤1.0 s Then joint IDs remain consistent with RMS angle jitter ≤2° and reacquisition occurs within ≤500 ms after reappearance And Then if any joint confidence drops below 50% or occlusion >1.0 s, the last stable value is held, the overlay marks the metric as “low confidence,” and alerts for that joint are suspended until confidence ≥60% for ≥300 ms And Then occlusion‑induced false alerts are 0 in the occlusion test suite
Per‑Exercise Thresholds and Alerting Logic
Given a clinician‑defined protocol specifying enabled metrics, safe ranges, and alert parameters When a metric exceeds its safe range for N=5 consecutive frames or >20% of the rep duration (whichever occurs first) Then the overlay highlights the metric in red, shows the direction arrow, and logs an alert with timestamp and magnitude And Then no alert is emitted while values remain within range; hysteresis prevents re‑alerting until the metric returns within range for ≥300 ms And Then on validation data, out‑of‑range detection achieves precision ≥0.90 and recall ≥0.90 per metric
Clinician Rules/Config Layer Enforcement
Given a clinician edits a protocol to select metrics, safe ranges, and alert rules using the provided schema When the configuration is saved on‑device Then the config validates against the schema, stores with a version identifier, and becomes the active protocol for the next session And Then only enabled metrics are computed and rendered; disabled metrics produce no overlay and incur no compute And Then the active protocol name/version is displayed in session info and logged with each alert event
On‑Device Real‑Time Inference Performance and Offline Operation
Given a supported mid‑tier device (e.g., Pixel 6 or iPhone 12) on battery When a 20‑minute session runs entirely on‑device Then average per‑frame pose inference time ≤30 ms and end‑to‑end overlay latency ≤120 ms at ≥24 fps And Then no network calls occur during inference; all functionality continues offline And Then battery drain ≤8% and no thermal throttling occurs (as indicated by OS performance metrics)
Synchronized Overlays in 5‑Second Replay with Small‑Screen Legibility
Given a 5‑second instant replay is generated from the live inference queue When the clip is played, paused, or scrubbed Then overlay timestamps align to video frames with max drift ≤50 ms and ≥99% of frames in the clip have corresponding overlays And Then on a 375×812 pt viewport, text size ≥12 pt, stroke ≥1 pt, contrast ratio ≥4.5:1, overlays cover ≤15% of the viewport, and labels avoid key joints via dynamic placement And Then UI tap targets for toggling overlays are ≥44×44 pt and the color palette is colorblind‑safe (no red/green ambiguity)
Plain-Language Insight Generator
"As a patient, I want simple explanations of what the numbers mean so that I understand how to fix my form safely."
Description

Maps detected deviations to concise, jargon-free explanations of why the issue matters and what to try next, tailored to the specific exercise and patient protocol. Generates one to two sentences at a 6th–8th grade reading level, optionally augmented by short audio/TTS. Localizes content, supports accessibility (screen reader labels, high-contrast), and avoids alarming phrasing while conveying safety. Pulls from a vetted content library authored by clinicians, with versioning and clinical review workflows. Selects the most relevant explanation based on magnitude, repetition, and patient history, and logs which insight was shown for efficacy tracking.

Acceptance Criteria
Readability, Brevity, and Tone
Given a supported exercise deviation and patient protocol When the Plain-Language Insight Generator produces an insight Then the output contains 1–2 sentences totaling ≤40 words And the Flesch-Kincaid Grade Level is between 6.0 and 8.9 inclusive And the text contains no phrase from the restricted alarming_lexicon And the text includes at least one actionable suggestion from the approved guidance list And numeric magnitudes include units and are rounded appropriately (degrees to nearest whole number; centimeters to one decimal)
Exercise- and Protocol-Tailored Explanation
Given a configured exercise and patient protocol (ROM limits, tempo, contraindications) When an insight is generated Then the explanation references the exercise name and target body region And the suggestion respects protocol constraints and contraindications And if clinician-provided cues exist for this exercise, at least one is echoed or paraphrased And no suggestion instructs actions outside the prescribed ROM or tempo
Relevance Ranking and Deduplication
Given multiple deviations with magnitudes and history from the last 3 sets When selecting an insight to display Then the system computes score = normalizedMagnitude*0.5 + recentRepetitionRate*0.3 + unresolvedHistoryFlag*0.2 And the selected insight has the highest score And the same insight_key is not repeated within the last 3 sets unless unresolvedHistoryFlag = 1 And ties are broken by larger magnitude, then most recent timestamp
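The scoring, deduplication, and tie-breaking rules above can be sketched directly from the formula; candidates are plain dicts carrying the fields named in the criterion (snake-cased here for Python):

```python
def select_insight(candidates, recent_keys):
    """Pick the insight to display: highest score wins, insights shown in the
    last 3 sets are excluded unless unresolved, ties break by larger magnitude
    then most recent timestamp. Returns None if nothing is eligible."""
    def score(c):
        return (c["normalized_magnitude"] * 0.5
                + c["recent_repetition_rate"] * 0.3
                + c["unresolved"] * 0.2)
    eligible = [c for c in candidates
                if c["insight_key"] not in recent_keys or c["unresolved"] == 1]
    if not eligible:
        return None
    return max(eligible, key=lambda c: (score(c), c["magnitude"], c["timestamp"]))
```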
Localization and Internationalization
Given the user locale is set (e.g., en-US, es-ES, fr-FR) When an insight is generated Then the text is returned in the selected locale from the content library And numbers, units, and decimal separators follow locale conventions And if a translation is missing, fall back to en-US and log missing_translation with content_key and locale And p95 added latency due to localization is ≤200 ms
Accessibility and Audio/TTS
Given an insight is displayed and accessibility features may be enabled When the UI renders the insight Then all text meets WCAG 2.1 AA contrast ratio ≥4.5:1 And the insight has accessible name, role, and label and is read once by screen readers And all controls for replay, TTS, and more-info are keyboard operable and focus-visible And if audio/TTS is enabled, an audio clip is produced in the user locale within 500 ms at p95 And the speech rate is between 120 and 160 WPM and total duration ≤8 seconds And a synchronized on-screen transcript is available and matches the spoken text
Clinical Content Governance and Versioning
Given content is retrieved from the clinician-authored library When generating an insight Then only content entries with status = "Approved" are eligible And each delivered insight includes content_key and content_version identifiers And all content edits require reviewer_id and approval_timestamp recorded in the audit log And rollback to a prior approved version by content_key is supported and verifiable
Insight Event Logging and Efficacy Tracking
Given an insight is displayed to a patient When the insight_shown event is emitted Then the record contains patient_id (pseudonymous), session_id, exercise_id, deviation_type, magnitude, insight_key, content_version, timestamp (UTC), locale, and tts_used And 99% of events are delivered to analytics within 60 seconds with up to 3 retries And events are retained for ≥18 months and are queryable by patient_id and exercise_id And no plaintext names or free-text PHI appear in the event payload
Micro-Lesson Cue Delivery
"As a patient, I want quick micro-lessons with one clear cue so that I can correct my form without breaking my workout flow."
Description

Delivers brief, interactive micro-lessons (10–20 seconds) immediately after a flagged rep or between sets, using animated illustrations, one actionable cue, and an optional tactile or audio prompt. Provides minimal-interruption UI with dismiss, snooze, and replay controls, and respects therapist-configured frequency caps. Caches lessons for offline use, records completion and engagement, and supports A/B variants per exercise. Integrates with the insight generator to select the cue most likely to help based on the detected deviation and patient profile.

Acceptance Criteria
Immediate Delivery After Flagged Rep
Given a rep is flagged for a form deviation and immediate delivery is enabled When the rep ends Then a micro-lesson starts within 2 seconds And the lesson duration is between 10 and 20 seconds inclusive And exactly one actionable cue is displayed And an animated illustration is visible for the duration of the lesson And if tactile or audio prompts are enabled, exactly one prompt is delivered within 1 second of lesson start and respects system mute/do-not-disturb settings
Between-Set Delivery When Immediate Deferred
Given an exercise set contains at least one flagged rep and delivery timing is configured as between sets When the set completes and the rest period begins Then exactly one micro-lesson is presented within 2 seconds of rest start And if multiple deviations were flagged, the highest-priority cue (per ranking) is shown And no more than one micro-lesson is shown per set
Therapist Frequency Caps Enforcement
Given therapist caps are configured as N per session and M per exercise for micro-lessons When micro-lessons are triggered during a session Then the number presented does not exceed N for the session and M for the exercise And suppressed micro-lessons beyond the cap are not shown and are logged with reason=capped
Minimal-Interruption Controls and Accessibility
Given a micro-lesson is displayed When the user interacts with the UI Then Dismiss, Snooze, and Replay controls are visible, labeled, and actionable within one tap And the lesson overlay covers no more than 30% of the viewport on phones and does not obscure safety-critical camera view or timers And all controls are accessible via VoiceOver/TalkBack with proper focus order and descriptive labels And control activation latency is under 100 ms on a reference device
Offline Caching and Fallback Behavior
Given the device is offline when a micro-lesson is triggered When content is requested Then a cached micro-lesson matching the exercise and deviation is shown within 2 seconds And if no exact match exists, a generic per-exercise offline cue is shown within 2 seconds And all related telemetry (triggered, shown_variant_id, engagement) is queued locally and persisted until sync And queued telemetry is uploaded within 30 seconds of connectivity restoration
Engagement, Completion, and Experiment Metrics
Given any micro-lesson is shown When the user views or interacts Then events are recorded with timestamps and IDs for impression, start, completion, dismiss, snooze, replay count, and prompt delivered (audio/haptic) And completion is marked only if 90%+ of duration is watched or the user taps Done And if A/B variants exist for the exercise, the patient is assigned per experiment config, assignment is sticky for the session, exposure is logged by variant, and outcome metrics (completion, repeat error reduction next set) are attributed to the assigned variant And variant allocation matches configured distribution within ±2% over 1,000 exposures
Insight Generator–Driven Cue Selection and Fallback
Given a deviation is detected (e.g., knee valgus angle) and a patient profile is available When selecting a micro-lesson Then the cue chosen is the top-ranked recommendation from the insight generator for that deviation and profile And the selection includes a rationale/rule ID in the event payload And if the top-ranked cue is unavailable due to locale, platform, or offline constraints, the next eligible candidate is selected and the fallback reason is logged
Cue Efficacy Tracking and Ranking
"As a clinician, I want to see which cues fix issues fastest so that I can focus my guidance on what works for each patient."
Description

Logs each delivered cue with context (exercise, deviation type/magnitude, device, environment), outcome labels (resolved next rep, time-to-fix, residual error), and patient feedback to compute cue efficacy. Ranks cues per patient and globally, adapting future cue selection to those with the fastest and most durable corrections while avoiding over-coaching. Produces clinician-facing summaries of top-performing cues and persistent problem areas, and exposes exportable metrics/APIs for research. Includes safeguards against confounding (warm-up effects, fatigue) via simple randomization and holdouts.

Acceptance Criteria
Cue Event Logging With Context
Given a deviation is detected during an exercise rep and a cue is delivered When the rep completes Then a cue_event record is created with fields: patientId, sessionId, exerciseId, setIndex, repIndex, deviationType, deviationMagnitude.value, deviationMagnitude.unit, deviceType, environment.lighting, environment.cameraAngle, cueId, cueModality, cueLanguage, timestampDelivered, timestampRepEnd, clinicianId (nullable), appVersion Given network connectivity is offline When the cue_event record is created Then it is queued locally and synced within 60 seconds of reconnect without creating duplicate server records Given patient feedback UI is presented When the patient responds within 30 seconds Then feedback.rating and feedback.comment are appended to the same cue_event When no response within 30 seconds Then feedback.status is set to "unanswered" Given a schema validator is in place When a cue_event is ingested server-side Then invalid or missing required fields result in a 4xx error and the client retries up to 3 times before moving the event to a dead-letter queue
Outcome Labeling: Resolution, Time‑to‑Fix, Residual Error
Given a cue_event and the immediately subsequent rep When the subsequent rep’s deviationMagnitude is computed Then resolvedNextRep = true if deviationMagnitude_next <= correctionThreshold for that deviation/exercise; otherwise false Given the next N=3 reps are analyzed after a cue When the first rep meets the correctionThreshold Then timeToFix.reps = number of reps since cue delivery and timeToFix.seconds = elapsed seconds from cue timestamp to the start of the fixing rep Given residual error must be captured When the fixing rep is identified (or the final rep if never fixed) Then residualErrorMagnitude and residualErrorUnit are recorded for that rep Given the session ends before a fix occurs When outcomes are computed Then timeToFix fields are null and resolvedNextRep = false Given durability must be assessed When the 3 reps following the fixing rep remain under threshold Then durableFix = true; otherwise false
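The next-rep resolution and time-to-fix labels can be sketched over the ≤3 post-cue reps; field names are illustrative, and durability and residual error are omitted for brevity:

```python
def label_outcomes(cue_ts, reps, threshold):
    """reps: up to N=3 reps after the cue, oldest first, each a dict with
    'magnitude' and 'start_ts' (names assumed). Returns
    (resolved_next_rep, time_to_fix_reps, time_to_fix_seconds);
    the last two are None if no rep met the correction threshold."""
    resolved_next = bool(reps) and reps[0]["magnitude"] <= threshold
    for i, rep in enumerate(reps[:3], start=1):
        if rep["magnitude"] <= threshold:
            return resolved_next, i, rep["start_ts"] - cue_ts
    return resolved_next, None, None
```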
Per‑Patient and Global Cue Efficacy Computation
Given accumulated cue_event outcomes exist When computing per-patient-per-exercise-per-deviation metrics per cueId Then the system calculates resolutionRateNextRep, medianTimeToFixReps, medianResidualErrorMagnitude, durableFixRate, and sampleSize n Given sample size considerations When n >= 30 for a cue metric Then a 95% Wilson confidence interval [low, high] is stored; when n < 30, lowConfidence = true is stored Given streaming updates When 10 seconds have elapsed since the last computation or 50 new events have arrived (whichever comes first) Then metrics are recomputed and persisted idempotently Given data quality filters When outcomes originate from holdout or excluded phases per policy Then inclusion flags are respected and excluded data do not update production metrics
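The 95% Wilson interval referenced above, for a proportion such as resolutionRateNextRep:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - half, center + half)
```

For example, 24 resolutions in 30 deliveries (p = 0.80) yields roughly [0.63, 0.90]; below n = 30 the spec stores `lowConfidence = true` instead.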
Cue Ranking & Adaptive Selection With Exploration
Given a patient is performing an exercise and a deviation is detected When selecting a cue Then candidate cues are ranked by per-patient efficacy if n >= 10 for that cue; otherwise by global efficacy Given an exploration policy epsilon = 0.10 When a cue is to be selected Then with 90% probability choose the highest-ranked eligible cue and with 10% probability choose a random eligible cue Given a 10% randomized holdout policy When in a holdout opportunity Then select a cue uniformly at random from eligible cues and tag the outcome with holdout = true and exclude it from learning updates Given over-coaching guardrails When a cue was delivered within the last 10 seconds or 3 cues have already been issued in the current set Then suppress additional cues and log suppressionReason Given no eligible cue is available When selection executes Then deliver a default generic safety cue and log fallback = true
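The selection policy (10% randomized holdout, then ε = 0.10 exploration, else exploit) can be sketched as follows; resolving per-patient vs. global efficacy into a single `efficacy` map is assumed to happen upstream, and the fallback cue ID is hypothetical:

```python
import random

def select_cue(eligible, efficacy, rng, epsilon=0.10, holdout_rate=0.10):
    """Return (cue_id, is_holdout). Holdout selections are tagged so they
    can be excluded from learning updates; the generic safety cue is the
    fallback when no candidate is eligible."""
    if not eligible:
        return ("generic_safety_cue", False)  # fallback = true, logged upstream
    if rng.random() < holdout_rate:
        return (rng.choice(eligible), True)   # uniform random, excluded from learning
    if rng.random() < epsilon:
        return (rng.choice(eligible), False)  # exploration
    best = max(eligible, key=lambda c: efficacy.get(c, 0.0))
    return (best, False)                      # exploitation
```

The over-coaching guardrails (no cue within 10 s of the last, max 3 per set) would gate whether `select_cue` is invoked at all.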
Clinician Summaries of Top Cues and Problem Areas
Given a clinician opens the Form Insight dashboard When viewing Cue Efficacy for a patient Then a table lists the top 5 cues per exercise/deviation sorted by resolutionRateNextRep and shows n, resolutionRateNextRep, medianTimeToFixReps, durableFixRate, and 95% CI when n >= 30 Given filters for date range, exercise, and deviation type are applied When the clinician applies filters Then the table refreshes within 2 seconds and metrics reflect the filters Given the clinician switches to Global view When viewing aggregated data Then the same metrics are shown across patients with the ability to drill down to patient level within 1 click Given an export is requested When the clinician clicks Export CSV Then a CSV downloads within 10 seconds containing the visible metrics plus schemaVersion and dataQuality flags Given an authenticated API client When requesting GET /api/v1/cue-efficacy with valid parameters Then a 200 response returns paginated JSON matching the export schema
Confounding Controls: Warm‑Up, Fatigue, and Drift Monitoring
Given session phases are tagged When sets are marked as warm-up Then those events are excluded from primary efficacy metrics and labeled phase = "warmup" in exports Given fatigue heuristics exist (e.g., setIndex > 5 or RPE >= 8) When such conditions are met Then events are either excluded from learning or included with fatigue = true and do not update production rankings unless explicitly enabled via config Given a 10% randomized holdout policy When comparing ranked vs holdout outcomes weekly Then the system stores an A/B delta for resolutionRateNextRep with a 95% CI and raises an alert if |delta| > 10 percentage points for 2 consecutive weeks Given configuration changes (thresholds, epsilon) When config is updated Then the effective configVersion is stored on each event and metrics computation respects versioning to prevent cross-version mixing
Clinician Insight Dashboard Integration
"As a clinician, I want annotated replays and trend summaries in my dashboard so that I can quickly review progress and adjust the plan."
Description

Surfaces annotated clips, deviation trends, and cue-response timelines in the existing clinician dashboard. Enables filtering by patient, exercise, and time window; inline playback with overlays; and one-click adjustments to monitored metrics and cue preferences. Provides secure sharing links for patients, supports role-based access, and aligns with existing reporting/export formats. Adds alerts for repeated unresolved deviations and allows clinicians to pin recommended micro-lessons to the patient’s plan.

Acceptance Criteria
Filter and Discover Insights by Patient, Exercise, and Date Range
Given I am on the Clinician Insight Dashboard When I select a patient, one or more exercises, and a date range Then the insights list, trend charts, and totals update to reflect only matching records within 2 seconds (p95) And the active filters are visibly summarized And removing any single filter updates results while preserving the remaining filters And if no records match, a zero-results state with a Clear Filters action is shown And the last-used filters persist for the current user across navigation within the same session
Inline Playback with Annotated Overlays in Dashboard
Given an insight card contains a replay When I click Play Then the 5-second clip plays inline at ≥720p with overlays visible and synchronized within ±100 ms And Play/Pause and Scrub controls respond within 150 ms Given overlays are on When the clip reaches each deviation moment Then numeric callouts (e.g., knee valgus °, trunk tilt °) appear at the correct position and frame Given I toggle overlays off When the clip plays Then no annotations are displayed and playback performance is unaffected Given the clip is not cached When I click Play Then playback begins within 3 seconds (p95) on a 10 Mbps connection
Cue-Response Timeline Visualization and Export Alignment
Given I open the cue-response tab for a selected patient and exercise When insights are available Then a timeline displays cues with timestamps, subsequent deviation changes, and time-to-resolution metrics And I can sort cues by median time-to-resolution and resolution rate Given I export the timeline When I choose CSV or PDF Then the export matches the existing reporting schema (field names, units, timezone) and imports successfully wherever current reports are accepted And values for angles are in degrees with one decimal place Given there are no cues recorded When the timeline loads Then an informative empty state is shown with a link to cue settings
One-Click Adjustments to Monitored Metrics and Cue Preferences
Given I am viewing a patient and exercise detail in the dashboard When I toggle a monitored metric on or off Then a confirmation appears and the setting persists across sessions and devices And the change is applied to subsequent processing for that exercise Given a metric is disabled When the next recording is processed Then that metric is not flagged or shown in overlays for that exercise Given I update cue preferences (enable/disable, priority order) When I save Then the changes apply to subsequent sessions and are reflected in cue-response metadata Given I select Reset to Defaults When I confirm Then default metrics and cue preferences are restored immediately
Secure Patient Sharing Links with Role-Based Access Controls
Given I create a sharing link for a patient When I select role Patient-View Then the link grants read-only access to that patient’s clips, overlays, and micro-lessons only And PHI for other patients is not accessible Given I set an expiration When the expiration time passes Then the link returns an expired state and access is denied Given I revoke a link When the recipient uses the old URL Then access is denied and an audit entry is recorded Given an unauthorized user attempts access When they open the link Then the system responds with HTTP 403 and no PII is leaked Given a link is generated When inspected Then it uses at least 128 bits of entropy
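The entropy and lifecycle rules above can be sketched in a few lines. This is a minimal Python illustration, not the production implementation; the record fields, status codes for the expired state (410 here), and function names are assumptions.

```python
import secrets
import time

def create_share_link(patient_id: str, role: str, ttl_seconds: int) -> dict:
    """Generate a sharing-link record with a URL-safe token.

    secrets.token_urlsafe(32) draws 32 random bytes (256 bits of entropy),
    comfortably above the 128-bit floor required by the criteria.
    """
    return {
        "patient_id": patient_id,
        "role": role,                       # e.g. "patient-view" (read-only)
        "token": secrets.token_urlsafe(32),
        "expires_at": time.time() + ttl_seconds,
        "revoked": False,
    }

def check_access(link: dict, now: float) -> int:
    """Return an HTTP-style status for an access attempt on a link."""
    if link["revoked"]:
        return 403   # revoked or unauthorized: deny, leak nothing
    if now >= link["expires_at"]:
        return 410   # expired state (status code is an illustrative choice)
    return 200
```

In a real service the token would be stored hashed, scoped to the single patient's resources, and every denial would append an audit entry.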
Alerts for Repeated Unresolved Deviations
Given a deviation alert rule is configured (e.g., knee valgus >10° occurring ≥3 times in 7 days) When a patient meets the condition Then an alert appears in the dashboard within 1 hour and includes patient, exercise, metric, frequency, and suggested micro-lessons Given an alert is open When a clinician marks it as resolved or adjusts thresholds Then duplicate alerts for the same condition are suppressed for 7 days unless the condition worsens Given notifications are enabled When an alert is generated Then a single email/push is sent per condition per 24 hours Given the condition is no longer met for 7 consecutive days When the alert policy is evaluated Then the alert auto-archives
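A rule like "knee valgus >10° occurring ≥3 times in 7 days" reduces to counting threshold breaches inside a sliding window. The sketch below is illustrative only; the event tuple shape and function name are assumptions, not the real schema.

```python
from datetime import datetime, timedelta

def should_alert(events, threshold_deg, min_count, window_days, now):
    """Evaluate a repeated-deviation rule over one patient/exercise/metric.

    `events` is a list of (timestamp, measured_degrees) tuples. Returns True
    when at least `min_count` breaches fall inside the trailing window.
    """
    window_start = now - timedelta(days=window_days)
    hits = [t for t, deg in events if t >= window_start and deg > threshold_deg]
    return len(hits) >= min_count
```

Suppression ("no duplicate alert for 7 days unless the condition worsens") would sit one layer above this check, keyed on (patient, exercise, metric).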
Pinning Micro-Lessons to Patient Plan
Given I view recommended micro-lessons for a patient When I click Pin on a micro-lesson Then it appears in the patient’s plan within 5 minutes and is visible in the patient app Given multiple pinned lessons exist When I reorder or unpin Then the new order is saved and reflected to the patient within 5 minutes Given a lesson is pinned When the patient completes or dismisses it Then the dashboard shows completion or dismissal within 10 minutes Given a patient already has 5 pinned lessons When I attempt to pin another Then the UI prevents the action and prompts me to unpin or reorder
On-Device Processing and Privacy Controls
"As a privacy-conscious patient, I want my form analysis to happen on my phone with control over what gets uploaded so that my data stays private."
Description

Performs computer-vision inference and annotation on-device by default, retaining only derived metrics unless explicit consent is given to upload short clips for clinician review. Implements consent flows, data retention windows for replays, and encryption at rest/in transit. Provides user-accessible controls to delete clips/metrics, transparent explanations of data use, and audit logs for clinician access. Includes region-aware storage, HIPAA alignment, and performance/battery budgets with dynamic model scaling to maintain a smooth experience without compromising privacy.

Acceptance Criteria
Default On-Device Inference with Dynamic Scaling
Given a user without prior media-upload consent, When recording and receiving Form Insight feedback, Then all computer-vision inference and annotation execute on-device and no video frames are sent off-device. And Then no outbound network calls are made to media/upload endpoints during inference (verified via network logs). And Then only derived metrics and annotations (angles, timestamps, error types) are persisted. And Then real-time UI maintains ≥24 FPS on reference devices and median per-frame inference latency ≤120 ms. And When battery level drops below 20% or device thermal state is Serious+, Then the app switches to a lighter model within 1 second while maintaining ≥18 FPS and without sending media off-device. And Then accuracy degradation from scaling is ≤3 percentage points on the Form Insight validation suite.
Explicit Consent for Short Clip Upload with Clear Data Use
Given a flagged form error replay is available, When the app offers to share a clip for clinician review, Then the consent screen states purpose, clip length (≤5 seconds), retention window, storage region, who can access, and revocation rights in plain language (≤8th-grade readability). And Then the default is Do not share until the user actively opts in. And Then upload proceeds only after an explicit checkbox and Agree action; soft prompts cannot bypass consent. And Then the consent decision is timestamped, versioned, tied to the clip, and stored securely; a copy is viewable in Settings. And When consent is later revoked, Then pending uploads are canceled immediately and no further clips are uploaded; previously uploaded clips are scheduled for deletion within 24 hours.
Time-Bound Replay Retention and Automatic Deletion
Given instant 5-second replays are stored locally, When no explicit upload consent exists, Then replay files are retained on-device for up to 72 hours and are excluded from device/cloud backups. And Then files exceeding the retention window are securely deleted within 60 seconds of expiry (fsync + overwrite or OS-secure delete). And When a user consents to upload a clip, Then cloud retention is 30 days by default; upon expiry, the clip is permanently deleted within 24 hours. And Then deletion jobs emit auditable events and subsequent fetches for deleted items return 404.
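The 72-hour sweep can be sketched as a scan over file modification times. This is a simplified illustration: real secure deletion (overwrite + fsync, or the platform's secure-delete API) and backup exclusion are OS-specific and not shown, and the directory layout is an assumption.

```python
import os
import time

REPLAY_TTL_SECONDS = 72 * 3600  # 72-hour on-device retention from the criteria

def expired_replays(replay_dir: str, now: float) -> list:
    """Return paths of replay files whose retention window has elapsed.

    A background job would run this periodically and securely delete each
    returned path within 60 seconds of expiry, emitting an audit event.
    """
    expired = []
    for name in os.listdir(replay_dir):
        path = os.path.join(replay_dir, name)
        if now - os.path.getmtime(path) > REPLAY_TTL_SECONDS:
            expired.append(path)
    return expired
```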
End-to-End Encryption and Key Management
Given any stored replay, uploaded clip, or derived metrics, Then data at rest is encrypted with AES-256-GCM using keys stored in the OS keystore/Keychain. And When data is transmitted, Then TLS 1.2+ with ECDHE and AES-GCM is enforced with certificate pinning to MoveMate endpoints. And Then no media or metrics are transmitted to third parties without explicit consent and contractual safeguards. And Then encryption keys are rotated at least annually or upon suspected compromise; retired keys cannot decrypt new data. And Then HIPAA-aligned safeguards (encryption, access controls, audit logs) are documented and enabled for US-region data.
User-Controlled Deletion of Clips and Metrics
Given a user opens Settings > Privacy > Manage Data, When they tap Delete All Replays, Then all local replay files are securely deleted within 10 seconds with an in-app confirmation. And When they tap Delete Uploaded Clips, Then all cloud clips for the account are queued for deletion immediately and permanently removed within 24 hours; progress status is shown. And When they tap Delete Metrics, Then derived metrics are purged locally and remotely and removed from clinician dashboards within 5 minutes. And Then deleted items no longer appear in the app; API requests for their IDs return 404; corresponding audit log entries record the deletions.
Clinician Access Audit Logging and Patient Visibility
Given a clinician views a patient's uploaded clip or derived metrics, Then an immutable audit entry is recorded with patient ID, clinician ID, action, item type, item ID, timestamp (UTC), purpose-of-use, and source IP/region. And Then the patient can view their audit trail in Settings > Privacy > Access History within 5 minutes of the event. And Then audit logs are retained for ≥6 years and are exportable by administrators for compliance review. And Then unauthorized access attempts are blocked and logged with reason codes.
Region-Aware Storage and Cross-Border Transfer Controls
Given a patient's region is set (e.g., EU, US), When a clip is uploaded with consent, Then it is stored only in a data center/bucket within that region and processed in-region. And Then metadata and derived metrics follow the same residency rules. And Then cross-border transfers are blocked by default; if a transfer is required, explicit per-event consent naming the destination region is obtained before proceeding. And Then any attempt to route data outside the allowed region is prevented and logged; for US-region data, storage and access controls meet HIPAA alignment.

Route Preload

Pre-download upcoming exercise programs, cue packs, and CV models for the day’s visits so sessions launch instantly without signal. Clinicians queue routes in the morning; patients tap once to cache their next workouts. Reduces stalls, saves data, and keeps care moving in patchy-service areas.

Requirements

Clinician Route Queue Builder
"As a clinician, I want to queue today’s patient routes and programs so that all required assets are preloaded before visits and sessions start on time."
Description

Provide clinicians with a dashboard workflow to select today’s patients and associated exercise programs, then generate and submit a preload queue for each device. The system compiles a per-patient asset manifest (exercise programs, cue packs, CV model variants, instructional media) with estimated sizes and download durations, deduplicates shared assets across routes, and allows prioritization (e.g., morning visits first). Integration points include the appointments calendar, program assignment service, and notifications to trigger remote preloads on patient devices when they come online. The UI surfaces per-patient preload status and errors, and supports editing queues during the day with safe, incremental updates.

Acceptance Criteria
Build Queue from Today's Calendar
Given a clinician is authenticated and has access to the appointments calendar When they open the Route Queue Builder for today's date Then the system auto-populates the patient list from today's appointments within 3 seconds for up to 50 appointments And for each listed patient, assigned exercise programs are fetched from the Program Assignment service within 2 seconds per patient And the clinician can add or remove patients and toggle program selections before generating the queue And upon submit, a queue entry is created for each selected patient's registered device
Per-Patient Manifest Generation with Size and ETA
Given a clinician submits the preload queue When the system compiles each patient's manifest Then each manifest includes all required asset types: exercise programs, cue packs, CV model variants, and instructional media And every asset entry contains assetId, type, version, sizeMB (to 0.1 MB), and estimatedDurationSeconds And estimatedDurationSeconds is computed as sizeMB × 8 / assumedThroughputMbps (converting megabytes to megabits) with a configurable default of 5 Mbps And each manifest is versioned and persisted so it can be retrieved by device and by clinician
Asset Deduplication Across Queues
Given multiple patients' programs include identical assets (same assetId and version) When the system compiles manifests Then no manifest contains duplicate entries of the same asset for that patient/device And the dashboard displays total estimated download size with and without deduplication, including MB saved And deduplication does not omit any asset required by any patient's program And asset order within each manifest remains unchanged by deduplication
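Manifest compilation with deduplication and ETA estimation can be sketched as below. Note the ×8 factor: size in megabytes must be converted to megabits before dividing by a throughput in Mbps. The input shape and `build_manifest` name are illustrative assumptions.

```python
def build_manifest(programs: dict, throughput_mbps: float = 5.0) -> list:
    """Compile a per-patient manifest with dedup and download ETAs.

    `programs` maps program names to lists of asset dicts with assetId,
    version, type, and sizeMB (field names follow the criteria above).
    Assets sharing (assetId, version) across programs are emitted once.
    """
    seen = set()
    manifest = []
    for assets in programs.values():
        for a in assets:
            key = (a["assetId"], a["version"])   # dedup key: id + version
            if key in seen:
                continue
            seen.add(key)
            entry = dict(a)
            entry["sizeMB"] = round(a["sizeMB"], 1)
            # MB -> megabits (x8), then divide by link speed in Mbps
            entry["estimatedDurationSeconds"] = round(
                a["sizeMB"] * 8 / throughput_mbps, 1)
            manifest.append(entry)
    return manifest
```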
Priority Ordering of Preloads
Given a clinician sets patient-level preload priority (e.g., morning visits first) via drag-and-drop or rule selection When the queue is submitted Then each patient's manifest is tagged with a priority rank And devices honor patient-level priority order when executing preloads once online And changing priority in the dashboard updates the displayed order immediately and propagates to devices within 30 seconds of them coming online
Remote Preload Trigger and Acknowledgment
Given a patient's queue has been submitted and the device push token is available When the patient's device comes online Then a preload notification is sent within 10 seconds And the device acknowledges receipt within 10 seconds of notification delivery or returns an error code And if delivery fails, the system retries with exponential backoff for at least 3 attempts and surfaces the failure in the dashboard
Real-Time Preload Status and Error Surfacing
Given preloads are queued or in progress When the clinician views the dashboard Then each patient shows one of: Not Started, Pending, In Progress (with percent), Completed, or Error And progress updates are reflected in the UI within 5 seconds of a status change And any Error state displays the asset name, error code, and retry count, with a clinician action to Retry or Skip the asset And Completed is only shown when 100% of manifest assets have been downloaded by the device
Safe Incremental Queue Edits
Given an existing queue for a patient When the clinician adds, removes, or reorders assets and saves changes Then only the delta is transmitted to the device; unchanged assets are not re-downloaded And partial downloads of unchanged assets resume rather than restart And removed assets are cancelled on the device if in progress, and are not retried thereafter And the queue revision is versioned, with the latest revision active and prior revisions retained for audit And the dashboard confirms update propagation within 10 seconds
Patient One‑Tap Preload
"As a patient, I want a one‑tap button to cache my next workouts so that I can start exercises instantly even without a good signal."
Description

Add a prominent in‑app control that lets patients cache their next workouts with a single tap. When invoked, the app fetches the latest manifest from the server and downloads all required assets in the background, showing clear progress, remaining size, and estimated time. Defaults to Wi‑Fi only with a user‑controllable cellular toggle, supports pausing/canceling, and sends a local notification upon completion or failure. The feature is configurable to cache the next N sessions, respects accessibility and low‑vision guidelines, and runs reliably via OS background tasks so preloads complete even if the app is not in the foreground.

Acceptance Criteria
One‑Tap Preload with Background Completion and Notifications
Given the user is on the Home screen with at least one upcoming session, When they tap the "Preload Next Workouts" control, Then the app fetches the latest manifest and begins downloading required assets. Then the progress UI displays percent complete (0–100%), remaining size in MB, and ETA in minutes/seconds, updating at least once per second. Then the ETA shown is within ±20% of actual completion time after at least 20% of data has been downloaded. Given the app is backgrounded or the device is locked during an active preload, When OS permits background execution, Then downloads continue and complete without user interaction. Given the preload completes, Then a local notification is delivered within 5 seconds with title "Workouts Ready" and an action to "Open". Given the preload fails, Then a local notification is delivered within 5 seconds with the failure reason and an action to "Retry". Then the progress state persists across app relaunches and resumes from the last downloaded byte.
Wi‑Fi Default with User‑Controlled Cellular Toggle
Given first run and no prior preference, Then "Wi‑Fi only" is enabled by default. Given "Wi‑Fi only" is enabled and the device is on cellular, When the user taps preload, Then the job is queued without transferring asset data over cellular and the UI shows "Waiting for Wi‑Fi". Given the user toggles "Allow cellular data" on, Then subsequent preloads may use cellular and this preference persists across app restarts. Given OS Low Data Mode/Data Saver is active, When "Allow cellular data" is on, Then the app prompts for confirmation before starting and proceeds only if confirmed. Then manifest polling and preference sync use less than 200 KB over cellular when "Wi‑Fi only" is enabled.
Pause, Resume, and Cancel with Safe State and Cleanup
Given a preload is in progress, When the user taps Pause, Then active transfers are paused within 1 second and the UI shows a Paused state. When the user taps Resume, Then downloads continue using HTTP range requests without re‑downloading completed bytes. When the user taps Cancel, Then partially downloaded files are deleted within 2 seconds and used storage is reclaimed, and the UI returns to idle. Given the app is force‑quit during a preload, Then on next launch the UI reflects the prior state (In Progress or Paused) and the user can Resume or Cancel. Then a canceled job does not send a failure notification.
Preloading Next N Sessions with Delta and Deduplication
Given the preload setting for "Next N sessions" is set to N, When the user starts a preload, Then assets for the next N scheduled sessions are selected in chronological order. Then assets already present and valid on device are skipped, and shared assets across sessions are downloaded once. Then asset validity is verified via checksum/ETag before skipping; if stale, the asset is re‑downloaded. Given no upcoming sessions exist, Then tapping preload shows "Nothing to download" and no network transfer occurs. Given sessions are updated server‑side during an active preload, Then the manifest is revalidated and only missing/changed assets are fetched.
Disk Space Management and Failure Handling
Given estimated download size exceeds available free space minus a 5% safety margin or would reduce free space below 200 MB, When the user taps preload, Then the app blocks the start and shows a prompt to free space with the estimated required amount. Given low disk space is encountered mid‑download, Then the app pauses the job, surfaces an in‑app alert with required additional space, and allows Retry after space is freed. Then failure cases are categorized as network_error, space_error, server_error, or canceled, recorded for diagnostics, and included in failure notification text. Given the user cancels, Then no failure notification is sent. Then partial files from failed or canceled jobs are cleaned up within 5 seconds of termination.
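The disk-space gate above can be expressed as a single predicate. This is one reading of "free space minus a 5% safety margin" (margin taken against current free space); the function name and return shape are illustrative.

```python
def can_start_preload(download_mb: float, free_mb: float,
                      safety_fraction: float = 0.05,
                      floor_mb: float = 200.0):
    """Block a preload that exceeds usable space or would leave <200 MB free.

    Returns (allowed, required_extra_mb) so the UI can tell the user how
    much space to clear before retrying.
    """
    usable = free_mb * (1 - safety_fraction)
    remaining_after = free_mb - download_mb
    if download_mb > usable or remaining_after < floor_mb:
        required = max(download_mb - usable, floor_mb - remaining_after)
        return False, round(required, 1)
    return True, 0.0
```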
Accessibility and Low‑Vision Compliance for Preload Control and Progress
Given Dynamic Type up to Accessibility XXL, Then the preload control label and progress details reflow without truncation or overlap. Then the primary control meets a minimum tap target of 44x44 points and color contrast ratio ≥ 4.5:1 against its background. Then VoiceOver/TalkBack reads the control as "Preload next workouts, button", announces progress as a percentage, remaining size, and ETA, and exposes Pause/Resume/Cancel as accessible actions. Given Reduce Motion is enabled, Then progress updates avoid non‑essential animations while still conveying status via text and subtle haptics. Then progress is conveyed with both text and non‑color cues so status is perceivable under common color‑vision deficiencies.
Offline and Patchy Network Resilience with Auto‑Resume
Given no connectivity at the time of tap, Then the job is enqueued within 1 second and the UI shows "Queued – waiting for network" without errors. Given connectivity returns, Then the job auto‑starts within 10 seconds and resumes from the last verified byte. Given repeated transient failures, Then retries use exponential backoff up to 5 minutes between attempts and preserve partial progress. Given 3 consecutive terminal failures, Then the UI shows a clear failure state with a Retry action and an error code. Then background‑queued jobs start and progress without the app in the foreground, subject to OS constraints, and surface status via notification.
Dependency Bundling & Version Pinning
"As a clinician, I want content and models to be version‑pinned per program so that patient sessions are consistent and reliable across devices."
Description

Create a manifest-based bundling system that packages all dependencies for a route—program definitions, cue packs, media, and device‑specific CV models—and pins them to exact versions and content hashes. The client requests the manifest and performs delta updates when possible to minimize data usage. The server maintains compatibility rules (e.g., program X requires CV model ≥ vY) and serves signed, content‑addressed URLs via CDN. Manifests support atomic updates and rollback to the last known‑good bundle to prevent mid‑session mismatches. Device architecture and OS constraints are factored in to fetch the correct model variant.

Acceptance Criteria
Manifest Retrieval & Signature Verification
Given an authenticated client requests the route manifest for a specific visit date When the server responds with the manifest Then the manifest includes bundle_id, bundle_version, created_at, and a complete list of artifacts each with filename, size, and SHA-256 content_hash And the manifest is signed and the client verifies the signature against the pinned public key successfully And all artifact URLs are content-addressed and are signed URLs with an expiry timestamp And the client persists the manifest metadata upon success
Delta Update Application
Given the client has bundle version v1 cached with artifacts A, B, and C And the server publishes manifest v2 where only B has a different content_hash When the client preloads v2 Then it performs zero network GETs for A and C and exactly one successful GET for B And the resulting local cache contains A@v1, B@v2, C@v1 and matches manifest v2 And the client records total bytes downloaded equal to the sum of sizes for changed artifacts only
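The delta computation is a straightforward comparison of content hashes between the cached and latest manifests. A minimal sketch, assuming each manifest is represented as a filename → SHA-256 mapping:

```python
def manifest_delta(cached: dict, latest: dict):
    """Decide which artifacts to fetch when moving between manifest versions.

    Only artifacts whose content_hash changed (or that are new) need a
    network GET; artifacts absent from the latest manifest can be evicted.
    """
    to_fetch = [name for name, h in latest.items() if cached.get(name) != h]
    to_delete = [name for name in cached if name not in latest]
    return to_fetch, to_delete
```

Because URLs are content-addressed, fetching by hash also guarantees the bytes downloaded match the manifest entry.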
Compatibility Rules Enforcement
Given program P requires CV model version >= 3.2.0 And the requesting device reports architecture arm64-v8a and OS version constraints When the client requests a manifest for program P Then the server returns a manifest where the resolved CV model satisfies the version constraint and device constraints Or the server returns HTTP 409 with error_code=COMPATIBILITY_VIOLATION and no manifest body
Device/OS-Specific CV Model Variant Selection
Given CV model M has multiple build variants (e.g., arm64-v8a, x86_64) When the client on x86_64 requests the manifest Then the manifest lists only the x86_64 variant artifact for model M And the client downloads only that variant and does not download other variants And the stored artifact path reflects the variant to prevent cross-arch loading
Artifact Integrity Verification
Given an artifact has finished downloading When the client computes its SHA-256 hash Then the hash matches the manifest content_hash exactly And if the hash does not match, the client deletes the artifact, marks the download as failed, and does not advance the active bundle
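Hash verification is the one step every downloaded artifact must pass before the bundle can advance. A self-contained sketch using streamed hashing (so large model files never load fully into memory); the function name is illustrative:

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str,
                    chunk_size: int = 1 << 20) -> bool:
    """Stream-hash a downloaded artifact and compare to the manifest hash."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

On a False result the client deletes the file, marks the download failed, and leaves the previously active bundle untouched.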
Atomic Bundle Update and Rollback
Given bundle v1 is active and bundle v2 is being downloaded When any artifact in v2 fails to download or fails integrity verification Then the client keeps v1 as the active bundle and rolls back any partial v2 state And when all v2 artifacts pass verification, the client promotes v2 atomically so subsequent sessions see a consistent bundle And an interrupted update resumes from the last verified artifact without breaking atomicity
Offline Session Launch from Cached Bundle
Given a bundle has been fully cached and verified And the device has no network connectivity When a user launches the session Then the session starts using only local assets without attempting any network requests And time-to-first-frame is under 2 seconds on reference hardware And the manifest version used is logged for auditability
Smart Preload Orchestrator
"As a patient, I want preloading to respect my data, battery, and connectivity so that caching completes reliably without draining my phone."
Description

Implement a background download orchestrator that schedules, prioritizes, and supervises asset transfers with awareness of network type, battery level, charging state, and user data limits. It supports concurrent chunked downloads, resumable transfers, exponential backoff, and integrity checks during streaming. The orchestrator integrates with iOS background tasks/Background App Refresh and Android WorkManager/JobScheduler, honors quiet hours (e.g., overnight windows), and can be remotely nudged via push to start when a device comes online. It emits telemetry on success/failure, throughput, and reasons for deferral to power an operational dashboard and alerting.

Acceptance Criteria
Wi‑Fi Preference and Cellular Budget Enforcement
- Given the device is on Wi‑Fi and there are eligible assets queued, When the orchestrator runs, Then it starts all eligible asset downloads over Wi‑Fi and does not use cellular.
- Given the device is on cellular and an asset is not marked cellular_allowed, When the orchestrator evaluates the queue, Then it defers that asset and records deferral reason = "CellularNotAllowed".
- Given the device is on cellular and remaining_data_budget_bytes < asset.size_bytes, When the orchestrator evaluates the queue, Then it defers that asset and records deferral reason = "CellularDataLimit".
- Given a download started on Wi‑Fi and the device switches to cellular mid-transfer, When the asset is not cellular_allowed or remaining_data_budget_bytes is insufficient, Then the orchestrator pauses within 5 seconds and resumes only when Wi‑Fi is available or budget is replenished.
- Given per-asset priorities exist, When both Wi‑Fi and cellular are available, Then the orchestrator prefers Wi‑Fi and never uses cellular concurrently for the same asset.
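The per-asset eligibility decision above can be sketched as a small pure function, which also makes it easy to unit-test the deferral reasons. The argument shapes are illustrative assumptions:

```python
def evaluate_asset(network: str, asset: dict, remaining_budget_bytes: int) -> str:
    """Return 'START' or a deferral reason for one queued asset.

    Mirrors the criteria: Wi-Fi always proceeds; on cellular the asset
    must be cellular_allowed and must fit the remaining data budget.
    """
    if network == "wifi":
        return "START"
    if network == "cellular":
        if not asset.get("cellular_allowed", False):
            return "CellularNotAllowed"
        if remaining_budget_bytes < asset["size_bytes"]:
            return "CellularDataLimit"
        return "START"
    return "NoNetwork"
```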
Battery- and Charging-Aware Transfer Control
- Given battery_level < 20% and the device is not charging, When the orchestrator evaluates work, Then it does not start new downloads and records deferral reason = "BatteryLow".
- Given active transfers are running and battery_level drops below 15% while not charging, When the condition is detected, Then the orchestrator pauses all transfers within 5 seconds, persists resume state, and records deferral reason = "BatteryCritical".
- Given the device begins charging or battery_level rises to >= 25%, When constraints are re-evaluated, Then the orchestrator resumes paused transfers within 60 seconds.
- Given battery optimizations are enabled by the OS, When background execution is limited due to power saver mode, Then the orchestrator defers and records deferral reason = "OSBackgroundDenied".
Concurrent Resumable Chunked Downloads with Integrity
- Given 3+ assets are eligible, When the orchestrator runs, Then it downloads up to 3 assets concurrently and queues the remainder.
- Given the server supports HTTP range requests, When downloading, Then the orchestrator requests 2–8 MB chunks and persists progress after each chunk boundary.
- Given the app is killed or crashes mid-download, When it restarts, Then the orchestrator resumes within 30 seconds without re-downloading completed chunks and re-downloaded bytes are <= 1% of the asset size.
- Given a manifest with SHA-256 for each asset, When a download completes, Then the computed hash must match; on mismatch, the orchestrator re-fetches only affected chunks up to 2 attempts; on repeated mismatch, it marks the asset failed with reason = "IntegrityFailed".
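Resume-without-re-download hinges on persisting which chunks are complete and translating the next missing chunk into an HTTP Range header. A minimal sketch of that bookkeeping (the persisted set-of-indices representation is an assumption):

```python
def next_range_header(completed_chunks: set, total_size: int, chunk_size: int):
    """Pick the next missing chunk and build its HTTP Range header.

    The orchestrator persists `completed_chunks` (chunk indices) after each
    chunk boundary, so a restart resumes without re-fetching finished chunks.
    Returns (None, None) when the download is complete.
    """
    n_chunks = -(-total_size // chunk_size)          # ceiling division
    for i in range(n_chunks):
        if i not in completed_chunks:
            start = i * chunk_size
            end = min(start + chunk_size, total_size) - 1
            return i, {"Range": f"bytes={start}-{end}"}
    return None, None
```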
Quiet Hours Scheduling with Imminent Session Override
- Given quiet_hours_window is configured as 23:00–06:00 local, When current time is outside that window and next_session_start > 60 minutes away, Then the orchestrator defers downloads and schedules them in the next quiet window with deferral reason = "QuietHoursScheduled".
- Given current time is outside quiet hours and next_session_start <= 60 minutes away, When the orchestrator evaluates work, Then it proceeds with downloads subject to network and battery constraints.
- Given quiet hours are active and the device is idle on Wi‑Fi or charging, When the scheduled time arrives, Then the orchestrator executes queued downloads during the window.
- Given quiet hours are updated remotely, When a new window is received, Then the orchestrator re-schedules existing deferred work within 60 seconds.
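The subtle part of a 23:00–06:00 window is that it wraps midnight, so a naive start ≤ now < end check fails. A sketch of the wrap-aware check plus the imminent-session override (function names are illustrative):

```python
from datetime import time as dtime

def in_quiet_window(now: dtime, start: dtime, end: dtime) -> bool:
    """Handle quiet-hours windows that wrap midnight, e.g. 23:00-06:00."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end   # wrapped: late night OR early morning

def should_download_now(now, quiet_start, quiet_end, minutes_to_next_session):
    """Run inside the window, or override the deferral when the next
    session is imminent (<= 60 minutes away, per the criteria)."""
    if in_quiet_window(now, quiet_start, quiet_end):
        return True
    return minutes_to_next_session <= 60
```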
Background Execution and Remote Nudge
- Given an iOS device with Background App Refresh enabled, When a BGTask is scheduled, Then the orchestrator runs in background and either completes at least one asset within the allowed window or saves state and exits cleanly without user interaction.
- Given an Android device, When a WorkManager job with network and charging constraints is enqueued, Then the job runs, survives process death, and resumes work on app restart without data loss.
- Given the device comes online and a push notification payload {action:"preload", route_ids:[...]} is received, When platform constraints are met, Then the orchestrator starts within 60 seconds and acknowledges via telemetry event = "PushNudgeReceived" with the referenced route_ids.
- Given push is received but constraints are not met, When evaluation occurs, Then the orchestrator defers and emits deferral reason reflecting the first unmet constraint.
Telemetry and Deferral Reasoning
- For every asset attempt, When work starts, progresses, and completes or fails, Then the orchestrator emits telemetry events [preload_start, preload_progress, preload_complete|preload_failed] including fields: device_id_hash, asset_id, asset_type, size_bytes, bytes_transferred, started_at, completed_at, outcome, retry_count, throughput_bps_avg.
- Given any deferral, When it occurs, Then the orchestrator emits preload_deferred with reason ∈ {"NoNetwork","BatteryLow","BatteryCritical","CellularNotAllowed","CellularDataLimit","QuietHoursScheduled","OSBackgroundDenied","IntegrityFailed","UserOptOut"}.
- Given network is unavailable for telemetry upload, When events are queued, Then they buffer on-device up to 1 MB and retry with exponential backoff (1m, 2m, 4m, ... capped at 30m with ±20% jitter); on buffer overflow, the oldest events are dropped and a telemetry_dropped_count metric is incremented.
- Given throughput is reported, When validated against a controlled test transfer, Then the average throughput_bps_avg is within ±10% of the measured wall-clock transfer rate excluding paused durations.
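The retry schedule (1m, 2m, 4m, ... capped at 30m, ±20% jitter) can be sketched as a small generator; injecting the random source makes the jitter testable. Parameter names are illustrative:

```python
import random

def backoff_delays(attempts: int, base_s: float = 60.0, cap_s: float = 1800.0,
                   jitter: float = 0.20, rng=random.random) -> list:
    """Exponential backoff per the telemetry criteria: delays double from
    1 minute, cap at 30 minutes, with +/-20% jitter applied to each."""
    delays = []
    for i in range(attempts):
        nominal = min(base_s * (2 ** i), cap_s)
        factor = 1 + jitter * (2 * rng() - 1)    # uniform in [1-j, 1+j]
        delays.append(nominal * factor)
    return delays
```

The jitter spreads retries from many devices so they do not hammer the telemetry endpoint in lockstep after an outage.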
Offline‑First Session Launch & Fallbacks
"As a patient, I want sessions to launch instantly from cached assets so that I can keep exercising even when offline."
Description

Ensure session startup prefers cached assets and launches with zero network dependency when preloads exist. On launch, the app validates asset availability and versions locally, then proceeds instantly. If a required asset is missing, the app gracefully degrades: surface a clear, actionable message, use minimal offline guidance (e.g., text cues and last‑known instructions), and queue any non‑critical downloads for later. Session telemetry and rep/form data buffer locally and sync when connectivity returns, preserving clinician dashboards and adherence metrics.

Acceptance Criteria
Instant Offline Launch with Cached Assets
Given the device is in Airplane Mode and all assets for the selected session (exercise programs, cue packs, CV models) are cached and valid When the user taps "Start Session" Then the session loads entirely from local storage with no network requests initiated before the first cue And the first actionable cue or video frame is presented within 2 seconds on a mid-range device And no blocking spinners or retry prompts are shown during launch
Zero-Network Startup Path Selection
Given network connectivity is available but all required session assets are already cached and valid When the user taps "Start Session" Then startup selects the offline path and does not wait on any network request prior to first cue And any health checks or version pings occur only after session start and are non-blocking And telemetry indicates 0 blocking network calls before first cue
Local Version and Integrity Validation at Launch
Given a local manifest of required assets (IDs, versions, checksums) is present When the user launches a session (online or offline) Then the app validates availability, version, and checksum of each required asset using only local data prior to starting And if all validations pass, the session starts immediately without network calls And if any checksum mismatch is detected, the asset is treated as missing and the degradation path is initiated
Graceful Degradation When Required Asset Is Missing
Given at least one required asset (e.g., a CV model or cue pack) is missing or corrupt and no network is available When the user starts the session Then the app displays a clear, actionable banner naming the missing asset and stating minimal guidance will be used, with a single "OK" acknowledgment And the session proceeds using text cues and last-known instructions; dependent features are disabled and labeled "Unavailable Offline" And the first minimal cue appears within 3 seconds of tap And a recoverable task to fetch the missing asset is queued for later with reason "missing_at_launch"
Queue Non-Critical Downloads for Later
Given optional or non-critical assets (e.g., high-res media, tutorials) are not cached and connectivity is unavailable or poor When the session is started Then the app enqueues background download jobs with metadata (asset type, version, size, priority, retry count) And the session proceeds without waiting for these downloads And queued jobs persist across app restarts and retry with exponential backoff until success or 24 hours elapsed And upon connectivity restoration, queued jobs begin within 10 seconds and do not block the UI
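The retry schedule for queued jobs could be computed as capped exponential backoff with light jitter; the base delay, cap, and jitter factor below are illustrative, since the criterion only requires exponential backoff abandoned after 24 hours:

```python
import random

def next_retry_delay(retry_count, base=5.0, cap=3600.0, jitter=0.1):
    """Seconds to wait before the next attempt of a queued download.

    `retry_count` is the number of failures so far; the delay doubles
    each attempt up to `cap`, with +/-10% jitter to avoid synchronized
    retries. All constants are assumptions for illustration.
    """
    delay = min(cap, base * (2 ** retry_count))
    return delay * (1 + random.uniform(-jitter, jitter))
```

A scheduler would persist `retry_count` with each job's metadata so the schedule survives app restarts, and drop the job once 24 hours have elapsed since enqueue.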
Offline Telemetry Buffering and Reliable Sync
Given telemetry, rep counts, and form error events are generated while offline during a session lasting up to 2 hours When connectivity is restored Then buffered events are uploaded in chronological order within 60 seconds of connectivity being detected And server acknowledgments are used to ensure idempotency; no duplicate records appear after retries And local buffers are cleared only after confirmed receipt; otherwise they persist for retry And clinician dashboards reflect the synced data within 5 minutes of successful upload
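Idempotent sync hinges on two things: each event carries a stable ID (so the server can deduplicate retries), and the buffer is cleared only on acknowledgment. A sketch under those assumptions, with the transport abstracted as a `send` callable:

```python
import uuid

class TelemetryBuffer:
    """Offline event buffer with ack-gated clearing (illustrative)."""

    def __init__(self):
        self._events = []  # appended in capture order

    def record(self, payload, ts):
        # Stable ID assigned once, reused on every retry for dedup.
        self._events.append({"id": str(uuid.uuid4()),
                             "ts": ts, "payload": payload})

    def flush(self, send):
        """Upload chronologically; `send(event) -> bool` is True on ack.

        Stops at the first failure so retries remain chronological, and
        removes only the events that were acknowledged.
        """
        self._events.sort(key=lambda e: e["ts"])
        sent = 0
        for event in self._events:
            if not send(event):
                break
            sent += 1
        del self._events[:sent]
        return sent
```

Because unacknowledged events keep their original IDs, a retry after a dropped ack produces no duplicate records server-side.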
Secure Storage & Integrity Verification
"As a clinic privacy officer, I want preloaded assets to be encrypted and integrity‑checked so that patient data remains protected and models are trustworthy."
Description

Store all preloaded assets in sandboxed, encrypted storage (Keychain/Keystore‑backed keys), and verify each file against signed manifest checksums before first use. Corrupted or tampered assets are quarantined and redownloaded; failed verifications trigger rollback to the last valid bundle. Implement secure deletion on eviction, prevent inclusion of PII within cached assets, and log verification outcomes for compliance audits. Access controls ensure only the MoveMate app can read assets, meeting HIPAA and platform security best practices.

Acceptance Criteria
Encrypt-at-Rest Storage and App-Only Access
Given MoveMate preloads assets to device storage When assets are written to disk Then files are encrypted at rest using platform-provided encryption with keys stored in Keychain/Keystore And file protection level is set to the strongest supported by the OS And file paths are within the app sandbox and not world-readable Given the MoveMate app process is not running and another app attempts to read cached files When access is attempted via standard file APIs Then access is denied and file contents are not readable in plaintext Given MoveMate is uninstalled When the cache directory is examined Then no cached asset can be decrypted because associated encryption keys are removed with the app
Signed Manifest and Checksum Verification Before First Use
Given a bundle manifest signed by MoveMate and a set of downloaded assets When the app prepares to use an asset for the first time Then the manifest signature is validated against a pinned public key And the asset's checksum matches the value in the signed manifest And the asset is marked Verified before being made available to any session Given signature validation fails or the checksum does not match When verification runs Then the asset is not used And the asset is flagged for quarantine handling
Quarantine and Automatic Redownload on Integrity Failure
Given an asset fails signature or checksum verification When verification completes Then the asset is moved to a quarantine location inaccessible to runtime sessions And an automatic redownload is initiated for only the affected assets Given redownload completes When verification re-runs on the new asset Then the asset is promoted to Verified on success Or remains quarantined on failure And user sessions continue without loading quarantined assets
Rollback to Last Valid Bundle on Verification Failure
Given a new bundle update is downloaded and at least one asset fails verification When the app determines bundle integrity status Then the bundle update is rejected And the last known good bundle is restored as the active bundle And upcoming sessions use the last known good assets without interruption Given rollback is performed When audit records are reviewed Then the rollback event includes bundle IDs, versions, timestamps, failed asset IDs, and actions taken
Secure Deletion on Eviction
Given cache eviction is triggered by policy or user action When an asset is evicted Then the asset's encryption key/material is destroyed or rotated so the file becomes irrecoverable And the file bytes are deleted from the filesystem And the asset no longer appears in app storage indices Given eviction has completed When a forensic scan of free space and cache directories is performed by the test harness Then no plaintext fragments of the evicted asset are recoverable And an audit log entry records the eviction without including PII/PHI
PII/PHI Exclusion in Cached Assets
Given the preload pipeline prepares assets (exercise programs, cue packs, CV models) When assets and their metadata are packaged Then only whitelisted non-PII content types and fields are included And no patient identifiers (name, MRN, DOB, email, phone, address) or free-text notes are present Given a developer attempts to add a field labeled or mapped as PII/PHI to an asset or its metadata When the packaging validator runs Then the build fails with a descriptive error And the asset is not published to the cache Given cached assets are scanned during QA When automated PII detection rules execute Then zero PII/PHI matches are reported
Compliance Audit Logging of Verification Outcomes
Given asset verification (pass/fail), quarantine, redownload, rollback, and eviction events occur When the logging subsystem records outcomes Then each log entry includes timestamp (UTC), device/app anonymized IDs, bundle/version, asset ID, algorithm (e.g., SHA-256), result, and action taken And logs contain no PII/PHI And logs are stored in append-only, tamper-evident form and are exportable for audits Given an auditor requests verification logs for a date range When an export is generated Then the export contains all relevant events with required fields and integrity markers And the export operation is itself logged
Storage Management & Eviction Policy
"As a patient, I want the app to manage storage and free space automatically so that preloads don’t fill my device or block new sessions."
Description

Introduce a storage quota system with user‑visible meters and automatic eviction to prevent device space exhaustion. The client maintains per‑patient caches, keeps the next N sessions (configurable), and evicts least‑recently‑used or expired assets after session completion. Large CV models support tiered variants so smaller models can be retained on constrained devices. Preload attempts run a preflight disk‑space check, prompt users when space is insufficient, and offer options to remove old sessions or switch to a smaller footprint. Policies are tunable via remote config.
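A space-targeted variant of the eviction pass described here (expired assets first, then least-recently-used, never touching assets the next N retained sessions still reference) might look like the following sketch; the field names `last_used`, `expires`, and `size` are assumptions:

```python
def evict(cache, retained, bytes_needed, now):
    """Evict non-retained assets until `bytes_needed` is freed.

    `cache` maps asset ID -> {"last_used": s, "expires": s, "size": bytes};
    `retained` is the set of asset IDs the next N sessions require.
    Expired assets go first, then LRU order. Illustrative sketch.
    """
    candidates = [a for a in cache if a not in retained]
    expired = [a for a in candidates if cache[a]["expires"] <= now]
    lru = sorted((a for a in candidates if a not in expired),
                 key=lambda a: cache[a]["last_used"])
    freed, evicted = 0, []
    for a in expired + lru:
        if freed >= bytes_needed:
            break
        freed += cache[a]["size"]
        evicted.append(a)
        del cache[a]
    return evicted, freed
```

Shared-asset handling follows from the `retained` set: an asset referenced by any retained session simply never becomes a candidate.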

Acceptance Criteria
Quota Meter Accuracy & Updates
Given a user opens the Storage screen for a specific patient cache, When the meter renders, Then it displays used and available storage in MB/GB with a percentage rounded to the nearest 1% and values match system-reported usage within ±1%. Given a preload completes and adds assets totaling S bytes (excluding already-cached assets), When the user views the Storage screen, Then the used storage increases by S±1% and the meter updates within 2 seconds without app restart. Given an eviction deletes assets totaling E bytes (net of shared assets), When the user views the Storage screen, Then the used storage decreases by E±1% and the meter updates within 2 seconds. Given used storage exceeds 90% of the per-patient quota, When the meter renders, Then it shows a warning state and displays a “Manage storage” call-to-action.
Preflight Disk-Space Validation
Given a user or clinician initiates a preload, When the preflight check runs, Then it estimates required bytes for uncached assets and compares against device free space minus a safety buffer B from remote config. Given required bytes > (free space − B), When the preload is initiated, Then the download does not start and an insufficient-space prompt appears within 1 second offering: Remove old sessions, Switch to smaller model (if available), or Cancel. Given required bytes ≤ (free space − B), When the preload is initiated, Then the download starts without showing the insufficient-space prompt. Given a preflight result is older than 5 minutes, When a new preload is initiated, Then the required-bytes estimate is recomputed before any download begins.
Post-Session Eviction Policy
Given a session completes for a patient, When eviction runs, Then the cache retains assets required for the next N sessions for that patient (N from remote config) and evicts expired assets first, otherwise least-recently-used assets within that patient’s cache. Given assets are referenced by multiple retained sessions or patients, When eviction runs, Then those shared assets are not evicted until no retained session references them. Given eviction runs after session completion, Then no assets required by the next N sessions are removed and the operation finishes without errors. Given eviction completes, Then the app reports the net freed bytes F and device free space increases by F±1% as reported by the OS within 5 seconds.
Tiered CV Model Fallback on Constrained Devices
Given the preflight detects that downloading the default CV model variant would exceed (free space − safety buffer B), When a smaller variant exists, Then the app selects the smallest variant that fits within (free space − B) and proceeds without error. Given the insufficient-space prompt is shown and a smaller model variant exists, When the user selects “Switch to smaller model,” Then the app downloads and activates the selected smaller variant and the required bytes for the preload decrease accordingly to fit within (free space − B). Given no smaller variant can fit within (free space − B), When the user attempts to switch, Then the app informs the user that switching cannot free enough space and returns to the removal options without starting a download.
Remote-Configurable Storage Policies
Given remote config provides values for N (sessions to retain), safety buffer B, and eviction strategy parameters, When a device fetches new config successfully, Then the new values apply to subsequent preflights and evictions within 15 minutes or on next app launch, whichever occurs first. Given a remote config fetch fails or returns invalid values (e.g., N < 1 or non-numeric B), When the app evaluates policies, Then it ignores the invalid/failed values, continues using the last known good configuration (or built-in safe defaults if none), and records a non-blocking error event; preloads are not blocked. Given new config values are applied, When a preflight or eviction runs, Then logs include the config version and parameters used for traceability.
User-Guided Removal Flow
Given the insufficient-space prompt is shown, When the user selects “Remove old sessions,” Then the app displays a list of cached sessions with per-session sizes, sorted by oldest first, excluding the current session and the next N sessions. Given the user selects one or more sessions totaling R bytes (net of shared assets) and confirms deletion, When removal completes, Then at least R bytes are freed (±1%) and the storage meter updates within 2 seconds. Given removal frees enough space such that required bytes ≤ (free space − B), When removal completes, Then the preload resumes automatically without requiring another user action; otherwise the insufficient-space prompt is shown again.

EcoVision Engine

Battery-aware computer vision that adapts frame rate, resolution, and model precision in real time while offline. Preserves rep-count accuracy while extending session time on low power, so sessions during rural visits and in basement gyms don’t end early. Users get more complete sets with fewer charge breaks.

Requirements

Battery Telemetry & Budgeting
"As a patient exercising at home, I want the app to monitor my battery and set a smart power budget so that my session doesn’t end early from unexpected drain."
Description

Continuously ingest system battery metrics (level, charging state, temperature, OS power mode, recent discharge rate) and derive a real-time power budget exposed via an internal API to the vision stack. The module normalizes readings across iOS and Android (UIDevice/ProcessInfo on iOS, BatteryManager on Android) and smooths short-term volatility to prevent oscillations. It publishes budget updates (e.g., conservative, balanced, aggressive) at 1–2 Hz and supports event-based triggers at thresholds (20%, 10%, thermal warnings). Integration points include the session orchestrator (to start in the right mode), the vision pipeline (to request updated budgets), and the UI layer (to surface low-power states). Expected outcome is predictable energy usage and longer uninterrupted sessions without manual user intervention.
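The telemetry-to-budget mapping could be sketched as a pure function over the normalized readings. Only the 20%/10% thresholds and thermal behavior come from this document; every other cut-off below is illustrative:

```python
def derive_budget(level_pct, charging, thermal_warning, low_power_mode,
                  discharge_rate_pct_per_hr):
    """Map normalized battery telemetry to a budget mode (sketch).

    Thermal warnings win outright; charging permits the aggressive
    budget; the 20% floor and OS low-power mode force conservative.
    The 50% and 15%/h cut-offs are invented for illustration.
    """
    if thermal_warning:
        return "conservative"
    if charging:
        return "aggressive"
    if level_pct <= 20 or low_power_mode:
        return "conservative"
    if level_pct <= 50 or discharge_rate_pct_per_hr > 15:
        return "balanced"
    return "aggressive"
```

Publishing this result at 1–2 Hz, after the smoothing described below, yields the budget stream the vision pipeline subscribes to.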

Acceptance Criteria
Cross-Platform Battery Telemetry Normalization
Given equivalent raw battery metrics on iOS and Android test harnesses, When the telemetry module normalizes and publishes a sample, Then the payload conforms to schema {batteryLevelPercent:int 0–100, chargingState:enum[charging,discharging,full,unknown], temperatureCelsius:number, osPowerMode:enum[normal,lowPower], dischargeRatePercentPerHour:number, timestamp:ISO8601Z}. Given iOS and Android inputs set to level=73%, charging=discharging, temp=34.7C, osPowerMode=lowPower, dischargeRate=5.0%/h, When normalized, Then both platforms publish matching values within tolerances (batteryLevel ±1 percentage point, temperature ±0.2 C, dischargeRate ±0.2 %/h) and identical enums. Given a platform does not expose a metric, When published, Then the field is present with value null (not omitted) and the schema remains unchanged.
Real-Time Budget Publication at 1–2 Hz
Given an active capture session lasting 120 s, When observing budget update events, Then between 120 and 240 updates are emitted and no inter-event interval exceeds 1500 ms. Given any budget update, When inspected, Then it includes budgetMode in {conservative, balanced, aggressive} and a timestamp that is ≥ the latest telemetry timestamp used for the decision. Given the module is idle (no active session), When observing, Then updates are not emitted more frequently than once every 2 s.
Battery and Thermal Threshold Event Triggers
Given battery level crosses downward through 20% or 10%, When the crossing occurs, Then the module emits a battery_threshold_crossed event for the respective threshold within 200 ms and switches budgetMode to conservative within 300 ms if not already. Given the OS issues a thermal warning, When received, Then the module emits a thermal_warning event within 200 ms and sets budgetMode to conservative until the warning clears. Given jitter around a threshold (±1 percentage point), When oscillating near 20% or 10%, Then the same threshold event is not re-emitted more than once within 60 s without a clearing state change.
Anti-Oscillation Smoothing of Budget Modes
Given battery level oscillates between 19% and 21% at 2 Hz for 60 s, When observing budgetMode, Then the mode changes no more than once during that period. Given transient discharge-rate spikes shorter than 5 s, When spikes occur, Then budgetMode remains unchanged. Given sustained changes (≥3 s) that cross a threshold plus 2 percentage points hysteresis, When observed, Then budgetMode updates within 300 ms of the sustained condition.
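The sustained-condition rule above (a candidate mode must hold for about 3 seconds before it takes effect) is essentially a debounce. A sketch with caller-supplied timestamps, where the dwell value follows these criteria and the rest is an assumption:

```python
class ModeDebouncer:
    """Suppress budget-mode flapping around thresholds (sketch)."""

    def __init__(self, initial_mode, dwell_s=3.0):
        self.mode = initial_mode
        self.dwell_s = dwell_s
        self._candidate = None
        self._since = None

    def update(self, candidate, now_s):
        """Feed the raw (unsmoothed) mode; returns the published mode."""
        if candidate == self.mode:
            # Raw signal agrees with published mode: drop any pending change.
            self._candidate, self._since = None, None
        elif candidate != self._candidate:
            # New candidate: start its dwell timer.
            self._candidate, self._since = candidate, now_s
        elif now_s - self._since >= self.dwell_s:
            # Candidate sustained long enough: commit the change.
            self.mode = candidate
            self._candidate, self._since = None, None
        return self.mode
```

Transient spikes shorter than the dwell reset the timer, so a 2 Hz oscillation between 19% and 21% produces at most one published change.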
Vision Stack Budget API Latency and Delivery
Given the vision stack calls getBudget() 1000 times over 60 s on a mid-tier device, When measuring, Then p99 response time is ≤50 ms and mean is ≤10 ms. Given a subscription to budget updates is active, When the module publishes an update, Then the vision stack receives the event within 200 ms (p95) and never later than 500 ms. Given a 10-minute active session, When monitoring delivery, Then ≤1% of budget updates are dropped or duplicated end-to-end.
Session Orchestrator Initial Mode Alignment
Given a sessionStart is initiated, When the orchestrator starts, Then it queries the telemetry module for the current budget and sets vision pipeline start parameters to match budgetMode before camera activation, within 300 ms of sessionStart. Given the first budget update after start arrives, When compared to the start parameters, Then the parameters already match or are adjusted within 200 ms to match the new budgetMode. Given no telemetry is available at start, When starting the session, Then the orchestrator defaults to balanced, logs a warning, and switches to the first received budgetMode within 200 ms of its arrival.
Adaptive Vision Pipeline Controller
"As a patient, I want MoveMate to automatically tune frame rate, resolution, and model precision so that I get accurate rep counts while saving battery."
Description

Dynamically adjusts camera frame rate, input resolution, model precision (FP16/INT8), and model selection based on the current power budget and motion context while maintaining target accuracy. Implements guardrails for minimum temporal sampling, multi-rate inference (e.g., sample every N frames), and adaptive ROI cropping to reduce pixels processed. Includes hysteresis and cooldown timers to avoid rapid toggling. Integrates with device camera APIs (AVCapture/CameraX), the model runner (Core ML/Metal/NNAPI/TFLite), and the rep counting/form analysis modules. Provides a policy interface to trade precision for energy while adhering to latency budgets for real-time feedback. Expected outcome is sustained rep-count fidelity with extended session time on low power.

Acceptance Criteria
Low-Power Offline Workout Session
Given device battery ≤ 20% and device is offline and not charging When a rep-tracking session starts Then the controller applies a low-power profile within 1 second adjusting frame rate, resolution, precision, and model selection according to policy And rep-count F1 ≥ 0.98 versus baseline high-power profile on the same clip And estimated time-to-empty at session start increases by ≥ 25% compared to fixed high-power profile over a 2-minute A/B test on the same device And end-to-end feedback latency ≤ 150 ms p50 and ≤ 250 ms p95
Multi-Rate Inference During Slow Reps
Given motion magnitude stays below the low-motion threshold for ≥ 2 seconds When energy-saving mode is active Then inference runs on every Nth frame where 2 ≤ N ≤ 5 while camera capture remains ≥ 15 fps And miss rate increase for rep detection is ≤ 2% absolute versus per-frame inference on the same clip And the system reverts to lower N or per-frame within 300 ms after motion exceeds threshold And no gaps between processed frames exceed 200 ms
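Multi-rate inference reduces to choosing which captured frames the model runs on. A sketch of that selection, assuming a per-frame motion-magnitude signal from upstream (the threshold, N, and the ~2 s warm-up at 15 fps mirror the numbers above but are otherwise illustrative):

```python
def frames_to_infer(motion_mags, low_motion_threshold=0.2, n_skip=3,
                    low_motion_frames=30):
    """Return the frame indices that should run inference (sketch).

    Every frame is processed while motion is high; once motion has
    stayed below the threshold for `low_motion_frames` consecutive
    frames (~2 s at 15 fps), only every `n_skip`-th frame runs. Any
    motion spike resets to per-frame inference immediately.
    """
    selected, calm = [], 0
    for i, mag in enumerate(motion_mags):
        calm = calm + 1 if mag < low_motion_threshold else 0
        if calm < low_motion_frames or (i % n_skip) == 0:
            selected.append(i)
    return selected
```

Because `calm` resets to zero on the first high-motion frame, the revert-to-per-frame requirement is met within one frame interval, well inside the 300 ms bound.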
Precision/Model Switching with Hysteresis
Given battery ≤ 15% or OS power-saver is enabled continuously for ≥ 10 seconds When policy allows precision trade-offs Then the controller switches from FP16 to INT8 or to a small model variant and does not switch back until battery ≥ 25% for ≥ 30 seconds And at most one precision/model change occurs per 30-second window And rep-count accuracy delta ≤ 2% absolute and form-flag precision/recall deltas ≤ 3% absolute versus baseline And per-inference latency ≤ 30 ms on target device class
Adaptive ROI Cropping Efficiency and Fidelity
Given the subject bounding box IoU stability ≥ 0.7 over the past 1 second When adaptive ROI cropping is enabled Then processed pixel count is reduced by ≥ 40% compared to full-frame processing And rep detection recall ≥ 98% of baseline and precision ≥ 97% of baseline on the same clip And ROI tracking lag ≤ 100 ms and limbs are not cropped out for more than 1 frame per 200 frames
Hysteresis and Cooldown Prevent Mode Flapping
Given power and motion signals fluctuate around thresholds When the controller evaluates adaptation decisions Then each dimension (frame rate, resolution, precision, model) switches no more than once per 10 seconds And a cooldown of ≥ 30 seconds is enforced after any precision or model change And decision logs include timestamp, reason code, pre/post settings, and measured metrics for every switch
Camera and Backend Integration Reliability
Given AVCapture (iOS) or CameraX (Android) is available When the controller requests a frame rate or resolution change Then the actual capture rate reaches the target within 2 seconds and remains within ±10% of target And after app background/foreground or camera stalls, the previous profile is restored within 1 second And runtime selects Core ML+Metal on iOS and TFLite+NNAPI on Android when available, with fallback to GPU/CPU if not, without crashes And rep-count metrics remain within specified accuracy and latency budgets after any change
Policy API Enforces Energy/Latency Trade-offs
Given a developer sets target latency budget ≤ 150 ms p95 and minimum rep-count F1 ≥ 0.98 via the policy API When a session starts or the policy is updated at runtime Then the controller selects a configuration that meets the latency and accuracy targets on calibration clips And invalid policy inputs are rejected with descriptive error codes without side effects And accepted policy updates take effect within 1 second and are persisted for the current session
Offline Policy Engine & Fallbacks
"As a rural patient with spotty internet, I want battery-saving adjustments to work fully offline so that my sessions continue smoothly anywhere."
Description

On-device policy engine maps battery budget, device capability, and motion intensity to pipeline settings without any network dependency. Ships with precomputed policy tables and a lightweight runtime bandit/heuristic to refine choices over the first minutes of a session. Provides deterministic fallbacks when sensors or APIs are unavailable (e.g., lock to safe 24 FPS with INT8 model). Persists last-known-good configuration per device and exercise type to speed future startups. Ensures policies are sandboxed, versioned, and updatable only through app releases (no cloud fetch) to preserve offline operation. Expected outcome is robust, offline adaptation that degrades gracefully on older or constrained devices.
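The table-plus-fallback behavior could be sketched as a pure lookup; the 24 FPS/INT8 safe minimum comes from this description, while the table keys, entries, and tier names below are invented for illustration:

```python
# Locked fallback when sensors/APIs are unavailable (per the description).
SAFE_FALLBACK = {"fps": 24, "precision": "int8", "model": "small"}

# Precomputed, versioned table shipped with the app (illustrative values).
POLICY_TABLE = {
    ("high", "normal"):      {"fps": 30, "precision": "fp16", "model": "full"},
    ("high", "low_battery"): {"fps": 24, "precision": "int8", "model": "full"},
    ("low", "normal"):       {"fps": 24, "precision": "int8", "model": "small"},
    ("low", "low_battery"):  {"fps": 15, "precision": "int8", "model": "small"},
}

def select_policy(device_tier, battery_pct, sensors_ok):
    """Deterministic, network-free policy selection (sketch).

    Unknown devices and sensor failures both resolve to the safe
    fallback, so the same inputs always yield the same policy.
    """
    if not sensors_ok:
        return dict(SAFE_FALLBACK)
    power_state = "low_battery" if battery_pct <= 20 else "normal"
    return dict(POLICY_TABLE.get((device_tier, power_state), SAFE_FALLBACK))
```

A real table would key on more dimensions (motion intensity, exercise type) and carry a schema version so stale last-known-good entries can be safely ignored after upgrades.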

Acceptance Criteria
Low-Battery Startup Policy Selection
Given the battery level is ≤ 20% and the device has no network connectivity When the user starts a rep-counting session for a supported exercise Then the policy engine selects an initial policy from the precomputed table within 300 ms And the selected policy enforces frame rate within ±1 FPS of the table target, resolution, and INT precision as specified And projected session remaining time increases by ≥ 20% versus the default high-precision policy on the same device and exercise And rep-count error is ≤ 5% over the first 30 reps compared to the baseline high-precision policy on the same device And 0 network requests are made during selection and initialization
Sensor/API Unavailable Deterministic Fallback
Given one or more required sensors/APIs are unavailable or return errors (e.g., gyro missing, camera API not reporting frame rate) When a session is starting or running Then the engine locks to 24 FPS, INT8 model, and ≤ 720p resolution within 500 ms And the fallback configuration remains stable for the remainder of the session or until sensors recover (no more than 1 additional configuration change) And rep-count error is ≤ 8% over 30 reps compared to the baseline high-precision policy under the same conditions And the event is locally logged to diagnostics and the session continues without crash
Persist Last-Known-Good Per Device and Exercise
Given a session completes with rep-count error ≤ 5% and no fatal errors When the session ends Then the active configuration is persisted as a last-known-good (LKG) keyed by device model, OS version, app version, and exercise type within 200 ms And on the next startup for the same device and exercise, the LKG loads within 200 ms and is applied as the initial policy And warm-up time to a stable policy is reduced by ≥ 30% versus a cold start without LKG And if the policy schema version changes, the LKG is migrated or safely ignored and default policies are used without crash
Runtime Bandit Warm-Up and Stabilization
Given a new session begins without an existing LKG When the runtime bandit/heuristic explores during the first 3 minutes Then it evaluates no more than 3 policy candidates and performs ≤ 2 configuration switches per minute And it converges to a stable policy within 3 minutes or 60 reps, whichever comes first And post-convergence, configuration changes occur at most once every 10 minutes unless battery drops by > 10 percentage points And the converged policy achieves ≥ 10% lower energy per counted rep versus the initial policy while keeping rep-count error ≤ 5%
Offline Boot With Precomputed Policy Tables
Given the device is offline (airplane mode or no connectivity) When the app cold-starts and the user begins an exercise session Then precomputed policy tables load from local storage within 500 ms And 0 network calls are attempted or queued during boot and initial policy selection And initial policy selection completes within 300 ms And the same inputs (device capability, battery, motion intensity) result in the same selected policy as in an online state
Policy Sandboxing and Version Control
Given the app bundle includes signed policy tables and schema metadata When the policy engine loads or updates policies Then only policies with valid signatures and matching schema versions are accepted And policies cannot be altered by external files, remote configuration, or runtime code injection And the active policy version and checksum are recorded in local diagnostics And on app upgrade, caches are migrated to the new version or invalidated without exceeding 300 ms added startup time or causing crashes
Graceful Degradation on Legacy/Constrained Devices
Given a device with CPU-only inference or sustained throughput < 2 TOPS When a session runs for 10 minutes Then the engine selects a low-compute mode (≤ 24 FPS, INT8 model, ≤ 720p) within 60 seconds And average per-frame latency is ≤ 60 ms and UI thread frame drops are < 5% during the session And device temperature rise is ≤ 8°C above the initial device temperature reading over 10 minutes And rep-count error is ≤ 10% versus the baseline high-precision policy; the session remains uninterrupted
Accuracy Guardrails & Self-Validation
"As a clinician, I want the app to maintain validated accuracy thresholds and auto-correct when confidence drops so that I can trust rep counts and form alerts."
Description

Defines quantitative accuracy thresholds for rep counting and form error detection under each adaptive mode (e.g., ±1 rep per set, <5% false positives). Implements runtime confidence scoring, temporal smoothing, and backoff strategies (temporarily boost frame rate or precision when confidence drops). Includes a built-in calibration routine using short standardized movements during warm-up to verify model integrity at current settings. Logs accuracy-related events for QA and flags sessions that exceed error budgets to clinician dashboards. Expected outcome is preserved clinical trust while operating in energy-saving modes.

Acceptance Criteria
Warm-Up Calibration Verifies Model Integrity at Current Settings
Given the device is offline and the session start triggers the warm-up calibration using 3 standardized movements When the user performs each movement for at least 5 seconds (min 75 frames captured per movement) Then for each movement, keypoint stability score >= 0.85 on >= 80% of frames And pose classification confidence >= 0.80 on >= 80% of frames And the calibration completes within 90 seconds total And the calibration result is Pass only if all movement thresholds are met; otherwise Fail And on Fail, the engine escalates one accuracy level (higher frame rate and/or model precision) and prompts re-calibration And calibration metrics (scores, mode, fps, resolution, battery) are logged with timestamp
Rep-Counting Accuracy Maintained Across Adaptive Modes
Given a labeled test pack of 10 common exercises with 3 sets each, executed under Low-Power, Balanced, and High-Accuracy modes When the engine processes the clips offline with temporal smoothing enabled Then per-set absolute rep error <= 1 rep in >= 95% of sets per mode And overall mean absolute error <= 0.4 reps per set per mode And double-count rate <= 0.5% of total reps per mode And no set exhibits undercount greater than 2 reps
Form-Error Detection Guardrails in Energy-Saving Modes
Given labeled form-error events across the same exercises and modes When the engine detects form errors offline in Low-Power and Balanced modes Then false positive rate <= 5% per session And false negative rate <= 15% per session And median detection latency from event onset <= 300 ms, 95th percentile <= 500 ms And all detected form-error events include confidence scores and are logged
Runtime Confidence Backoff Escalates and Recovers with Hysteresis
Given a rolling mean confidence computed over the last 30 frames for rep and form-error predictions When the rolling confidence drops below 0.65 for >= 30 consecutive frames Then the engine escalates accuracy (increase frame rate >= 50% and/or raise model precision one level) within 1.5 seconds And once rolling confidence is >= 0.75 for 20 consecutive seconds and battery permits, the engine de-escalates one level And mode changes respect a minimum dwell time of 10 seconds to prevent oscillation And each backoff/escalation event is logged with pre/post settings and reason
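The escalation/recovery rule above might be sketched as follows. The 0.65/0.75 thresholds and 30-frame window follow these criteria; the dwell requirements are simplified to frame counts, and the integer `level` (0 = baseline, higher = more fps/precision) is an assumption:

```python
from collections import deque

class ConfidenceBackoff:
    """Accuracy backoff driven by rolling confidence (sketch)."""

    def __init__(self, window=30, low=0.65, high=0.75, trigger=30):
        self._scores = deque(maxlen=window)
        self.low, self.high, self.trigger = low, high, trigger
        self._low_streak = 0
        self._high_streak = 0
        self.level = 0  # 0 = baseline; higher = boosted fps/precision

    def observe(self, confidence):
        self._scores.append(confidence)
        mean = sum(self._scores) / len(self._scores)
        if mean < self.low:
            self._low_streak += 1
            self._high_streak = 0
            if self._low_streak >= self.trigger:
                self.level += 1  # escalate one accuracy step
                self._low_streak = 0
        else:
            self._low_streak = 0
            self._high_streak = self._high_streak + 1 if mean >= self.high else 0
            if self._high_streak >= self.trigger and self.level > 0:
                self.level -= 1  # sustained recovery: de-escalate one step
                self._high_streak = 0
        return self.level
```

The separate high-streak counter gives the hysteresis band between 0.65 and 0.75 where neither escalation nor recovery accumulates, preventing oscillation.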
Temporal Smoothing Prevents Double-Counts Without Excess Latency
Given temporal smoothing and a rep refractory window are enabled When reps are detected during any exercise set Then no two reps for the same movement are counted within 300 ms And median rep detection latency relative to ground-truth peak is <= 150 ms, 95th percentile <= 300 ms And post-hoc rep corrections due to smoothing affect <= 0.5% of total reps per session And cumulative rep count is strictly monotonic within a set
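The refractory window on its own is a small filter over candidate rep peaks. A sketch, assuming upstream detection emits sorted peak timestamps in milliseconds (the 300 ms window comes from this criterion):

```python
def count_reps(peak_times_ms, refractory_ms=300):
    """Monotonic rep count with a refractory window (sketch).

    A candidate peak arriving within `refractory_ms` of the previously
    accepted rep is treated as a double-count and dropped, so the
    running count only ever increases within a set.
    """
    count, last = 0, None
    for t in peak_times_ms:
        if last is None or t - last >= refractory_ms:
            count += 1
            last = t
    return count
```

Since rejected peaks never decrement `count`, monotonicity within a set holds by construction.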
Accuracy Event Logging and Clinician Flagging of Error Budgets
Given the session runs offline with event logging enabled When any accuracy guardrail breach or backoff event occurs Then an event is stored with timestamp, session ID, mode, fps, resolution, model ID, battery level, rolling confidence, thresholds, action taken, and outcome And a session is marked "Accuracy Review Required" if any of the following occur: calibration Fail; total time with confidence < 0.65 exceeds 5% of active capture; >= 3 backoff escalations; or post-hoc rep correction rate > 2% And flagged sessions sync to the clinician dashboard within 10 minutes of connectivity restoration or by session end if online And logs persist locally until successful sync or for up to 72 hours (whichever comes first) with a minimum storage budget of 5 MB
Device Capability Detection & Model Bundling
"As a patient with an older phone, I want the app to choose the best-supported model and settings so that I still get reliable tracking without draining my battery."
Description

Runtime probe determines hardware capabilities (CPU/GPU/NPU availability, Metal/NNAPI versions, thermal headroom) and selects the optimal model bundle (FP16/INT8 variants, pruned or full) and camera configuration per device class. Maintains a compatibility matrix and safe minimums for unsupported features. Downloads no models at runtime; all variants are packaged and selected locally to support offline use. Exposes a capabilities profile to the policy engine and pipeline controller. Expected outcome is consistent behavior across heterogeneous devices with best-available efficiency and accuracy.
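Bundle selection then reduces to walking the packaged variant list from most to least demanding and taking the first variant the probed capabilities support. The variant names and requirement fields below are invented for illustration; only the highest-precision-that-fits rule and the guaranteed safe minimum come from this description:

```python
# Packaged variants, ordered most to least demanding (illustrative).
BUNDLES = [
    {"name": "full_fp16",   "needs_gpu": True,  "min_ram_gb": 4},
    {"name": "full_int8",   "needs_gpu": True,  "min_ram_gb": 3},
    {"name": "pruned_int8", "needs_gpu": False, "min_ram_gb": 2},
    {"name": "safe_min",    "needs_gpu": False, "min_ram_gb": 0},
]

def select_bundle(has_gpu, ram_gb):
    """Pick the best packaged bundle the device supports (sketch).

    The last entry has no requirements, so some bundle always matches
    and no runtime download is ever needed.
    """
    for b in BUNDLES:
        if (has_gpu or not b["needs_gpu"]) and ram_gb >= b["min_ram_gb"]:
            return b["name"]
    return BUNDLES[-1]["name"]
```

A production version would gate on more probe fields (NNAPI/Metal version, thermal headroom, quantization support) using the same first-match-wins walk.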

Acceptance Criteria
Runtime Capability Probe Coverage and Performance
Given the app cold-starts on a device with no network, When the capability probe executes, Then it detects and records CPU architecture and core count, GPU presence/vendor, NPU/NNAPI/Metal availability and version, available RAM tier, thermal headroom state, supported camera resolutions and frame rates, and quantization support (FP16/INT8). And Then it emits a capabilities profile conforming to schema version 1.0 with all required fields populated or explicitly null when unavailable. And Then the probe completes without blocking the UI thread for more than 50 ms and finishes within 500 ms on the mid-tier reference device class.
Offline Local Model Bundle Selection and Load
Given multiple packaged model bundles (INT8, FP16, pruned, full) are included in the app, When selection runs with network disabled, Then the selector chooses the highest-precision bundle that meets detected compute, API, and thermal constraints. And Then the model loads from local storage with zero network requests (0 bytes sent/received), and load completes within 800 ms on the mid-tier reference device class. And If the preferred model fails to load, Then the system automatically falls back to the safe-minimum bundle and continues without crash.
Compatibility Matrix and Safe-Minimum Fallbacks
Given an unknown device or missing/unsupported capability, When selection runs, Then the compatibility matrix maps the device to a default class and applies safe minimums (CPU execution, lower-precision model if required, camera ≤720p at ≤30 fps). And Then the vision pipeline initializes successfully and maintains inference throughput ≥15 fps. And Then on the internal validation dataset, rep-count accuracy degrades by no more than 2 percentage points versus the baseline device class.
Camera Configuration Selection and Fallback
Given the capability profile and device class, When a session starts, Then the camera is configured to a supported resolution and frame rate that meet the class performance budget. And If the first-choice camera configuration is rejected by the OS, Then the system selects the next lower tier automatically until success. And Then time-to-first-frame is ≤1.5 seconds on the mid-tier reference device class.
Capabilities Profile API Exposure and Versioning
Given the policy engine and pipeline controller request capabilities, When they call the CapabilitiesProvider API, Then they receive an immutable profile object with stable keys (compute.*, api.*, thermal.*, camera.*, quantization.*, storage.*) including a schema version. And Then change notifications are emitted only when a watched attribute crosses a defined threshold (e.g., thermal headroom), and consumers receive the updated profile within 200 ms. And Then unknown keys return null without crash, and schema version mismatches are logged as warnings.
Deterministic Selection Across Sessions and Thermal Adaptation
Given identical device conditions across app restarts, When the selector runs, Then it chooses the same model bundle and camera configuration deterministically. And When thermal headroom drops below the defined threshold during a session, Then the selector switches to the next-lower-cost bundle within 500 ms without interrupting an active set (no crash, no camera reinitialization failure). And Then when thermal recovers past a hysteresis margin, the system may restore the higher-performance bundle with a minimum dwell time of ≥60 seconds to prevent oscillation.
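The thermal backoff/restore rule above (switch down on low headroom, restore only past a hysteresis margin after a minimum dwell) can be sketched as a small state machine. The specific LOW and HYSTERESIS values are illustrative assumptions; only the 60-second dwell comes from the criterion:

```python
# Hypothetical sketch of thermal backoff with hysteresis and dwell time.
# LOW and HYSTERESIS are assumed values; DWELL_S mirrors the >= 60 s criterion.

LOW = 0.20          # headroom fraction that triggers backoff (assumption)
HYSTERESIS = 0.10   # recovery margin above LOW (assumption)
DWELL_S = 60.0      # minimum time in the low-cost bundle before restoring

class ThermalGovernor:
    def __init__(self):
        self.low_cost = False
        self.switched_at = None

    def update(self, headroom: float, now_s: float) -> str:
        if not self.low_cost and headroom < LOW:
            self.low_cost = True          # escalate to the cheaper bundle
            self.switched_at = now_s
        elif (self.low_cost and headroom > LOW + HYSTERESIS
              and now_s - self.switched_at >= DWELL_S):
            self.low_cost = False         # restore only after dwell + margin
        return "low_cost" if self.low_cost else "high_perf"
```

The dwell timer plus the hysteresis margin is what prevents the oscillation the criterion warns about: brief recoveries neither flip the mode nor reset the clock.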
Low-Power User Prompts & Session Continuity
"As a patient mid-session, I want clear low-power prompts and seamless pause/resume so that I can finish my sets without losing data."
Description

Surfaces timely, unobtrusive prompts when entering critical battery states (e.g., suggest enabling Low Power Mode, dim screen, or pausing between sets). Auto-saves session state, exercise position, and rep totals every 10 seconds and on power-state changes, enabling seamless resume after app kill or device shutdown. Provides a single-tap switch to “Eco Mode” with clear explanation of trade-offs. Integrates with accessibility settings for readability at low brightness. Expected outcome is fewer lost sessions and higher completion rates under low battery conditions.
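The auto-save cadence described above (periodic save every 10 seconds plus an immediate flush on power-state changes) can be sketched as follows. The state fields and the dict-backed store are stand-ins, not the real persistence layer:

```python
# Hypothetical sketch of the auto-save cadence: persist every SAVE_INTERVAL_S
# seconds, and immediately on any power-state event. The store is a stand-in
# for durable local storage.

SAVE_INTERVAL_S = 10.0

class SessionAutoSaver:
    def __init__(self, store: dict):
        self.store = store
        self.last_save_s = None

    def _persist(self, state: dict, now_s: float) -> None:
        self.store.update(state, saved_at=now_s)  # atomic write in production

    def on_tick(self, state: dict, now_s: float) -> bool:
        """Called on the tracking loop; returns True when a save occurred."""
        if self.last_save_s is None or now_s - self.last_save_s >= SAVE_INTERVAL_S:
            self._persist(state, now_s)
            self.last_save_s = now_s
            return True
        return False

    def on_power_event(self, state: dict, now_s: float) -> None:
        """Plug/unplug, threshold crossing, Low Power toggle: flush at once."""
        self._persist(state, now_s)
        self.last_save_s = now_s  # reset the periodic timer after the flush
```

Resetting the timer on a power-event flush keeps the worst-case data loss at the 10-second interval the criteria below specify.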

Acceptance Criteria
Critical Battery Prompt Timing and Content
Given the device battery falls to or below 10% during an active tracking session When the OS broadcasts the battery state change Then a non-blocking top banner prompt appears within 1 second And the prompt includes actions: "Enable Low Power Mode", "Turn On Eco Mode", and "Dim Screen" (only shown if brightness > 30%) And the message is localized and <= 120 characters And the banner is shown at most once per threshold per session and can be snoozed for 10 minutes And rep-detection FPS does not drop by more than 5% while the banner is visible And the banner does not occlude the camera guidance overlay area
Auto-Save Interval and Power-Event Flush
Given an active session When 10 seconds of tracking elapse Then the app atomically persists state (exercise ID, set number, rep total, eco mode state, CV calibration data, timestamp) to durable local storage And no more than the last 10 seconds of progress are lost after an unexpected termination Given any power-state change (plug/unplug, battery threshold crossed, Low Power Mode toggled, screen turned off) When the event occurs Then an immediate save is flushed within 300 ms And the save completes even if the app is backgrounded within 1 second Given a storage write failure When a save is attempted Then the app retries up to 3 times with exponential backoff and logs a non-blocking warning indicator
Seamless Resume After Kill or Reboot
Given the app was tracking and the app was killed or the device rebooted When the user relaunches the app within 24 hours Then a "Resume session" banner appears within 2 seconds showing exercise name, set, reps, and timestamp And tapping Resume opens the same exercise and set with rep total equal to the last saved state (±0 reps) And computer-vision tracking restarts and begins counting within 2 seconds And no duplicate reps from pre-termination frames are recounted And if multiple recoverable sessions exist, the user is prompted to choose, defaulting to the most recent
One-Tap Eco Mode Toggle With Trade-offs Explanation
Given the user is on the active session screen When the user taps the Eco Mode toggle Then Eco Mode activates within 300 ms and shows an "Eco Mode On" indicator And on first activation per user, a one-time sheet explains trade-offs (reduced frame rate, lower resolution, model precision adjustments) with a "Don't show again" option And during a controlled 10-minute test on the same device, average tracking FPS decreases by ≥25% and device power draw decreases by ≥15% versus Standard Mode, while rep-count error increases by no more than 1 miscount per 100 reps on a standard test set And tapping the toggle again returns to Standard Mode within 300 ms and removes the indicator
Accessibility at Low Brightness and Large Text
Given screen brightness is < 30% or system High Contrast/Large Text is enabled When prompts and session UI are displayed Then text meets WCAG 2.1 AA contrast (≥4.5:1 normal text, ≥3:1 large text) And text respects Dynamic Type up to at least 120% scaling And interactive elements for critical actions have minimum 44x44 pt tap targets And prompts provide optional haptic and brief audio cues that honor system mute/vibrate settings
Unobtrusive Prompt Behavior During Active Reps
Given an active rep is being counted When a low-power prompt needs to be shown Then the prompt appears as a top banner that does not pause or reset rep counting And the banner does not cover the camera guidance overlay region And the banner auto-dismisses after 5 seconds, can be dismissed with a single tap, or swiped away And no more than one banner is displayed within any 2-minute window unless a new, lower battery threshold is crossed And the banner includes a "Pause after this set" action when a set is in progress
Diagnostics Telemetry & Privacy-Safe Analytics
"As the MoveMate product owner, I want privacy-safe telemetry on energy savings versus accuracy so that we can improve defaults and support more devices responsibly."
Description

Captures on-device diagnostics about policy decisions, estimated power savings (mW), frame rates, dropped frames, model choice, and accuracy confidence—stored locally and uploaded only when on Wi‑Fi and with consent. Applies aggregation and differential privacy where applicable; excludes raw images or PII. Provides clinician/product dashboards with cohort-level insights to tune defaults and identify problematic device classes. Offers developer toggles for verbose logs in test builds. Expected outcome is continuous improvement of EcoVision defaults without compromising patient privacy.
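The consent, network, battery, and idle gates described above compose into a single upload-eligibility check. This sketch is a minimal illustration; the parameter names and the TEST-build override flag are assumptions drawn from the criteria below:

```python
# Hypothetical sketch of the telemetry upload gate: consent is mandatory,
# Wi-Fi uploads additionally require battery and idle headroom, and cellular
# is blocked except via a developer override in test builds (assumed flags).

def may_upload(consent: bool, network: str, battery_pct: int,
               idle_s: float, build_type: str = "PROD",
               dev_override: bool = False) -> bool:
    if not consent:
        return False  # without consent nothing ever leaves the device
    if network == "wifi":
        return battery_pct > 15 and idle_s >= 5
    # cellular/other networks: only the TEST-build developer override allows it
    return build_type == "TEST" and dev_override
```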

Acceptance Criteria
On-Device Telemetry Capture Schema
- Given EcoVision Engine is running offline during a session, when a rep is detected, then the telemetry event records timestamp (ms), active model ID, frame rate (fps), resolution, estimated power (mW), dropped-frames since last event, rep-count confidence (0–1), and policy decision code.
- Given a session ends, when logs are persisted, then events are written to encrypted local storage within 2 seconds and tagged with schemaVersion >= 1.0.0.
- Given persistence fails, when a write attempt occurs, then the app retries up to 3 times with exponential backoff (100ms, 200ms, 400ms) and the session does not crash; a non-PII error code is logged.
Wi‑Fi and Consent-Gated Upload
- Given the user has not granted analytics consent, when any network is available, then no telemetry is uploaded and the queue remains local with status "opted_out".
- Given the user has granted consent and device is on Wi‑Fi and battery > 15% and app is idle for >= 5 seconds, when an upload cycle starts, then up to 2 MB of telemetry is uploaded per batch.
- Given the user revokes consent, when revocation is saved, then all queued telemetry is deleted within 10 seconds.
- Given the device is on cellular, when an upload cycle starts, then no upload occurs unless buildType = "TEST" and developer override is enabled.
Privacy: No Raw Media or PII
- Given telemetry serialization occurs, then payload contains no raw images, video frames, audio, free-text notes, names, emails, phone numbers, or precise GPS; only device class, OS version, coarse region (if any), and anonymized IDs are allowed.
- Given pre-upload validation runs, when any forbidden field is detected, then the batch is rejected, redacted, and not persisted for upload; in TEST builds a non-PII warning is shown.
- Given data at rest, then it is encrypted using OS hardware-backed keystore/keychain with a unique per-installation key; keys are never uploaded.
Differential Privacy and Aggregation Thresholds
- Given metrics are prepared for upload, when cohort size < 20, then metrics are kept local until k >= 20.
- Given metrics are prepared for upload and k >= 20, then noise is applied to enforce epsilon <= 1 per metric per 24 hours; the applied epsilon is recorded in metadata.
- Given environment = TEST, when DP is disabled via developer toggle, then metrics are tagged "TEST" and excluded from clinician dashboards.
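The k-threshold and noise criteria above can be sketched for a single mean-valued metric. The Laplace mechanism and unit sensitivity are illustrative assumptions; a production implementation would use a vetted DP library and per-window budget accounting:

```python
# Hypothetical sketch: hold a metric locally until the cohort reaches K_MIN,
# then release the mean with Laplace noise scaled to sensitivity / epsilon.
# The mechanism choice and sensitivity = 1.0 are assumptions.
import math
import random

K_MIN = 20      # minimum cohort size before any release
EPSILON = 1.0   # per-metric privacy budget (per the criterion above)

def _laplace(scale: float) -> float:
    # Inverse-CDF sampling of the Laplace distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def release_mean(values: list, sensitivity: float = 1.0):
    """Return a noised mean once the cohort reaches K_MIN, else None."""
    if len(values) < K_MIN:
        return None  # keep the metric local until k >= K_MIN
    scale = sensitivity / (EPSILON * len(values))  # L1 sensitivity of a mean
    return sum(values) / len(values) + _laplace(scale)
```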
Cohort Insights Dashboard
- Given sufficient aggregated data exists, when a clinician views the dashboard, then they can filter by device class, OS major version, and session type, and see: average energy saved per session (mW·min), average frame rate (fps), dropped-frame rate (%), model selection distribution (%), and rep-count confidence distribution with 95% CI.
- Given a device class shows a dropped-frame rate > 5% over >= 100 sessions in the last 14 days, when the dashboard loads, then an alert appears and a CSV export of affected cohorts is available for download within 3 seconds.
Developer Verbose Logging (Test Builds Only)
- Given buildType = "TEST", when a developer enables Verbose Diagnostics, then per-frame policy decisions, battery delta estimates, and model switch reasons are logged with a rolling window of 15 minutes and a size cap of 10 MB; oldest entries are pruned first.
- Given Verbose Diagnostics is enabled in TEST, when an upload cycle runs, then verbose logs are not included in telemetry payloads and are never sent to production endpoints.
- Given a developer taps "Clear Logs", when confirmed, then verbose logs are deleted within 2 seconds.
Upload Reliability, Retry, and Retention
- Given an upload fails with HTTP 5xx or network error, when retry policy applies, then retries occur at 1, 2, 4, 8, and 16 minutes (max 5 attempts); on success the schedule resets.
- Given all retries are exhausted, when 7 days elapse from batch creation, then the batch is purged automatically and a non-PII discard metric is incremented.
- Given an upload succeeds, when server 200 OK is received, then the local batch is deleted within 5 seconds.
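The retry and retention schedule above is small enough to express directly; this sketch mirrors the stated delays and the 7-day purge, with helper names chosen for illustration:

```python
# Sketch of the retry/retention policy: delays of 1, 2, 4, 8, 16 minutes
# (max 5 attempts) and a 7-day purge for batches that never upload.
# Function names are illustrative.

RETRY_DELAYS_MIN = [1, 2, 4, 8, 16]
RETENTION_DAYS = 7

def next_retry_delay(attempt: int):
    """attempt is 0-based; returns the delay in minutes, or None when exhausted."""
    return RETRY_DELAYS_MIN[attempt] if attempt < len(RETRY_DELAYS_MIN) else None

def should_purge(age_days: float) -> bool:
    """True once a batch has aged past the retention window."""
    return age_days >= RETENTION_DAYS
```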

Smart Snippets

Stores only the moments that matter: compact 3–5 second clips around form flags and milestone reps, plus compressed summaries for clean sets. Automatic space management prunes non-critical footage first and alerts before storage runs low. Keeps evidence and coaching context without filling the device.

Requirements

Event-Triggered Smart Clip Capture
"As a patient, I want the app to automatically save short clips around important moments so that I can review and fix my form without scrubbing through full videos."
Description

Continuously maintain a short rolling video buffer and automatically save 3–5 second clips when key events occur, including form error detections, milestone reps (e.g., first, last, every Nth), and clinician-configured triggers. Use adaptive pre-roll and post-roll based on exercise cadence to ensure full context around each event while preventing duplicate or overlapping clips by merging adjacent triggers. Tag each snippet with metadata (exercise ID, rep number, error type, confidence, timestamps) for search and review. Implement efficient on-device encoding to minimize CPU, battery, and thermal impact, with graceful degradation for lower-tier devices. Integrates with existing computer-vision event stream and patient session timeline.

Acceptance Criteria
Adaptive Pre/Post-Roll Smart Clip Creation
Given a rolling video buffer of 6 seconds is active And the average rep duration is computed over the last 5 detected reps When an eligible event is emitted by the CV event stream Then a clip is saved with total duration between 3.0s and 5.0s using adaptive windows: And pre_roll = clamp(0.8s, 0.35 × mean_rep_duration, 2.0s) And post_roll is set so that (pre_roll + post_roll) ∈ [3.0s, 5.0s] And the event timestamp lies between 30% and 70% of the clip timeline And the clip is persisted to local storage within 1.0s of event receipt And failure to read sufficient buffer (e.g., app just started) results in no clip and a WARN log, not a crash
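The clamp rule in the criterion above translates directly into code. This sketch assumes a default target total of 4.0 seconds, which is not specified in the criterion:

```python
# Sketch of the adaptive window rule: pre_roll = clamp(0.8, 0.35 * mean, 2.0),
# post_roll pads the clip to a total in [3.0, 5.0] seconds. The 4.0 s default
# target is an assumption.

def clip_window(mean_rep_s: float, target_total_s: float = 4.0):
    """Return (pre_roll_s, post_roll_s) for a clip around an event."""
    pre = min(max(0.35 * mean_rep_s, 0.8), 2.0)   # clamp(0.8, 0.35*mean, 2.0)
    total = min(max(target_total_s, 3.0), 5.0)    # total duration in [3, 5]
    return pre, total - pre
```

Fast cadences hit the 0.8 s floor and slow cadences the 2.0 s ceiling, so the clip always retains context on both sides of the event.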
Milestone Rep Triggered Clips
Given clinician-configured milestone rules {first_rep: on, last_rep: on, every_nth: N} When a set begins and reps are detected Then a clip is saved for rep 1 if first_rep is on And a clip is saved for each rep r where r mod N = 0 when every_nth is set And a clip is saved for the final rep within 2.0s of set_end when last_rep is on And no duplicate clip is created when multiple milestone rules target the same rep; the single clip is tagged with all applicable milestone reasons And disabling a milestone rule prevents creation of clips for that rule within 100 ms of config change And aborted sets (no set_end) do not produce a last_rep clip
Form Error Clip Capture with Metadata
Given an error event {error_type, confidence, event_ts} from the CV stream And a confidence threshold T = 0.70 (configurable 0.50–0.90) When confidence ≥ T Then a clip is saved using adaptive pre/post-roll rules And the clip contains the error frame(s) with the primary error event_ts positioned within ±200 ms of the clip’s center And the clip is tagged with {exercise_id, session_id, rep_number (if available), error_type[], confidence[], start_ts, end_ts, event_ts[]} And confidence values are stored with two-decimal precision in [0.00, 1.00] And multiple errors occurring within 500 ms are included as a single clip with multiple error tags And error events with confidence < T do not produce clips
Adjacent Trigger Merge and De-duplication
Given two or more trigger windows derived from events’ pre/post-roll intervals When windows overlap or the gap between them ≤ 500 ms and the combined span ≤ 5.0s Then a single merged clip is produced spanning min(start) to max(end) And the merged clip aggregates all trigger reasons and per-event timestamps And when the combined span would exceed 5.0s, clips are not merged and are saved separately And no more than one clip is saved per unique event_id (idempotent on replay) And consecutive saved clips are separated by at least 200 ms of non-recorded timeline
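The merge rule above is a bounded interval-merging pass: overlapping or near-adjacent windows coalesce unless the result would exceed the 5-second cap. A minimal sketch, assuming windows arrive as (start, end) pairs in seconds:

```python
# Hypothetical sketch of trigger-window merging: merge windows that overlap or
# sit within GAP_S of each other, but only while the combined span stays
# within MAX_SPAN_S.

GAP_S = 0.5
MAX_SPAN_S = 5.0

def merge_windows(windows):
    """windows: iterable of (start_s, end_s) trigger windows."""
    merged = []
    for start, end in sorted(windows):
        if merged:
            last_start, last_end = merged[-1]
            close_enough = start - last_end <= GAP_S
            within_cap = max(end, last_end) - last_start <= MAX_SPAN_S
            if close_enough and within_cap:
                merged[-1] = (last_start, max(end, last_end))
                continue
        merged.append((start, end))
    return merged
```

Checking the cap before merging is what forces two nearby events to become separate clips when a single clip would run past 5 seconds, matching the criterion.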
Efficient On-Device Encoding and Graceful Degradation
Given the encoder is active during a 15-minute test session When running on a mid-tier reference device Then Smart Clip processing adds ≤ 20% average CPU overhead and exceeds 40% CPU in no more than 1% of samples And additional battery drain attributable to Smart Clips is ≤ 5% over 15 minutes versus CV-only baseline And no thermal throttling notification is emitted by the OS When sustained CPU > 60% for 3 seconds on lower-tier devices Then the system auto-reduces capture to ≤ 480p and/or ≤ 15 fps And event detection recall remains ≥ 95% of the baseline profile on the evaluation set And average clip write latency (event to file durable) ≤ 1.0s And average 5.0s clip size at 720p30 H.264 ≤ 1.5 MB
Snippet Metadata and Timeline Integration
Given a clip is saved Then metadata schema validation passes with required fields: {clip_id, session_id, exercise_id, trigger_reasons[], rep_numbers[], error_types[], confidences[], start_ts, end_ts, event_ts[]} And timestamps are UTC epoch ms with start_ts < end_ts and clip duration in [3.0s, 5.0s] And the clip is added to the session timeline at the earliest event_ts within 2.0s of file durability And the clip is retrievable via search by exercise_id, rep_number, and error_type filters, returning correct results within 300 ms for ≤ 1000 clips And tapping a timeline entry opens the clip at the nearest event_ts (seek error ≤ 150 ms)
Clean Set Summary Generation
"As a clinician, I want concise summary clips for clean sets so that I can confirm adherence quickly without reviewing long recordings."
Description

When a set completes with no form flags, generate a compact summary clip instead of storing the full set. Create a time-lapse or keyframe-based montage with lightweight overlays showing total reps, average tempo, and range-of-motion scores. Target small file sizes via bitrate caps and resolution downscaling while preserving legibility of overlays. Provide per-exercise and per-clinic toggles to enable/disable summaries and configure the compression profile. Store summaries with consistent metadata and link them to the session timeline for quick verification of adherence.

Acceptance Criteria
Auto Summary on Clean Set Completion
Given a recorded set has zero form flags When the set is marked complete Then the system generates a clean-set summary clip and does not persist the full-length set video for that set And Then the summary clip duration is between 3 and 5 seconds when using time-lapse, or the montage contains 4 to 8 keyframes when using keyframe mode And Then generation of the summary completes within 10 seconds of set completion on a median device profile And Then the summary clip is tagged type="clean_set_summary"
Overlay Data Accuracy and Legibility
Given overlay metrics are computed for the set When the summary clip is generated Then the overlay includes total reps, average tempo (s/rep), and range-of-motion score And Then total reps equals the system-counted reps for the set And Then average tempo equals the arithmetic mean of rep durations within ±0.1 seconds And Then the ROM score equals the computed ROM metric for the exercise within ±1 point And Then overlay text height is ≥ 20 pixels at the output resolution and contrast ratio is ≥ 4.5:1 And Then overlays remain within the lower-third safe area and do not cover more than 15% of frame height
Compression Profile Enforcement
Given a compression profile is selected at the effective scope (exercise override or clinic default) When generating the summary clip Then the output resolution matches the profile setting and the average bitrate is ≤ the profile's bitrate cap And Then the file size is ≤ the profile's max_size_bytes And Then any downscaling preserves the Overlay Data Legibility criteria And Then the container and codec conform to the profile definition
Per-Exercise and Per-Clinic Toggle Behavior
Given clinic-level summary settings and exercise-level overrides exist When both are configured Then the exercise-level setting overrides the clinic default for that exercise And Given a clean set completes and the effective setting is Enabled When the set completes Then a summary clip is generated using the effective compression profile And Given a clean set completes and the effective setting is Disabled When the set completes Then no summary clip is generated and the system follows the clinic-configured fallback storage policy for clean sets And Then setting changes persist and apply to subsequent sets without app restart And Then the effective setting and profile are visible in the exercise settings UI and retrievable via API
Consistent Metadata and Timeline Linking
Given a summary clip is generated When it is stored Then metadata includes session_id, exercise_id, set_index, start_time, end_time, rep_count, avg_tempo_s, rom_score, generation_method (timelapse|keyframe), compression_profile_id, size_bytes, checksum, and created_at And Then the clip is linked to the session timeline at the correct set_index with a "clean set" badge And Then tapping the timeline entry opens the summary and autoplays from 0:00 And Then adherence views mark the set as completed based on presence of the linked summary
No Summary Generation for Flagged Sets
Given a set has one or more form flags When the set is marked complete Then no clean-set summary is generated And Then snippets are generated around each flagged event per Smart Snippets policy And Then the session timeline shows flagged markers for the set and no clean-set badge
Timeline Retrieval and Playback Performance
Given a session contains at least one clean-set summary When a clinician opens the session timeline Then the timeline renders within 1 second on a median device profile and displays thumbnails for clean-set summaries And When the clinician taps a clean-set summary entry Then playback starts within 2 seconds and maintains smooth playback with dropped frames ≤ 2% on a median device And Then local playback is available offline for summaries stored on-device
On-Device Storage Quota & Auto-Pruning
"As a patient, I want the app to automatically remove non-essential clips first so that Smart Snippets never fill up my device."
Description

Allocate a configurable storage quota for Smart Snippets and enforce an eviction policy that prioritizes retention of critical evidence. Define tiers (flags > milestone reps > clean summaries) and prune lower-priority, oldest items first using LRU rules while protecting favorited/locked clips. Run background cleanup tasks post-session and when thresholds are crossed. Respect system storage signals and pause non-essential writes when the device is critically low. Expose a setting to set quota by absolute size or percentage of free space and synchronize retention decisions with server-side records after successful upload.
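The tiered eviction policy above (clean summaries first, then milestone clips, LRU within each tier, flagged and locked clips untouched) can be sketched as a single pruning pass. The clip record fields are assumptions for illustration:

```python
# Hypothetical sketch of tiered LRU eviction. Clip dicts with keys
# tier / size_bytes / last_access_s / locked are assumed for illustration;
# "flag" clips are never auto-pruned.

TIER_ORDER = ["clean_summary", "milestone"]  # lowest priority first

def prune(clips: list, quota_bytes: int) -> list:
    """Return the list of clips selected for deletion, in deletion order."""
    used = sum(c["size_bytes"] for c in clips)
    deleted = []
    for tier in TIER_ORDER:
        candidates = sorted(
            (c for c in clips if c["tier"] == tier and not c.get("locked")),
            key=lambda c: c["last_access_s"],  # least recently used first
        )
        for clip in candidates:
            if used <= quota_bytes:
                return deleted
            used -= clip["size_bytes"]
            deleted.append(clip)
    return deleted  # flagged/locked clips remain even if still over quota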

Acceptance Criteria
Quota Configuration: Absolute Size and Percentage Modes
Given the user selects "Absolute size" and enters 2 GB, When they save, Then Smart Snippets usage is capped at 2 GB and enforcement begins immediately. Given the user selects "Percentage of free space" and enters 20% on a device with 10 GB free, When they save, Then the cap is 2 GB and enforcement matches that size. Given the user reopens settings after an app restart, When they view the quota, Then the previously selected mode and value persist. Given the user enters a value outside the allowed range, When they attempt to save, Then the app blocks save and displays a validation error explaining the allowed range. Given a quota change reduces the cap below current usage, When the user saves, Then background pruning starts within 10 seconds and continues until total usage is less than or equal to the new cap.
Tiered Eviction with LRU Within Tiers
Given total Smart Snippets usage exceeds the quota, When pruning runs, Then clean summaries are deleted first in least-recently-used order until usage is less than or equal to the quota. Given usage still exceeds the quota after deleting all eligible clean summaries, When pruning continues, Then milestone reps are deleted next in least-recently-used order until usage is less than or equal to the quota. Given usage still exceeds the quota with only flagged clips remaining, When pruning evaluates next steps, Then the app pauses creation of new non-essential snippets and surfaces a storage alert; flagged clips are retained. Given two items in the same tier with different last-access times, When pruning chooses between them, Then the older last-accessed item is deleted first.
Protected Clips: Favorited/Locked Are Not Pruned
Given a clip is marked as favorited or locked, When pruning runs, Then that clip is excluded from deletion regardless of tier and LRU. Given the quota cannot be met without removing at least one protected clip, When this condition is detected, Then the app pauses non-essential writes and prompts the user to free space or adjust the quota; no protected clip is deleted automatically. Given a user removes protection from a clip, When the next pruning cycle runs, Then the clip becomes eligible for deletion according to its tier and LRU position.
Background Cleanup Triggers and Completion
Given a tracking session ends, When post-session processing begins, Then a cleanup task starts within 10 seconds and prunes until usage is less than or equal to the quota. Given storage usage crosses the pruning threshold during a session, When the threshold event occurs, Then a background cleanup is scheduled and runs without requiring an app restart. Given the device is offline, When cleanup runs, Then local pruning proceeds and any server sync updates are queued for later. Given a cleanup task is interrupted by OS constraints, When the app regains background execution time or is next launched, Then cleanup resumes until usage is less than or equal to the quota.
System Low-Storage Handling and User Alerts
Given the OS signals low storage, When the signal is received, Then the app pauses non-essential writes (e.g., clean summaries), continues capturing critical evidence (flags and minimal metadata), and displays a non-blocking low-storage alert. Given the OS signals critically low storage, When the signal is received, Then the app pauses all new snippet writes, retains existing clips, and displays a blocking banner with actions to manage storage or adjust quota. Given storage returns to safe levels, When the next storage check runs, Then normal write behavior resumes automatically and the alert is dismissed.
Server Synchronization of Retention Decisions
Given a clip has not been successfully uploaded, When pruning evaluates deletions, Then the clip is not deleted regardless of tier until upload succeeds. Given a clip is successfully uploaded, When a local pruning decision deletes it, Then the server is updated to reflect deletion/retention within 60 seconds and subsequent fetches do not return the deleted clip. Given a network outage during sync, When connectivity is restored, Then queued retention updates are retried until confirmed by the server without duplicating records. Given the same retention decision is applied multiple times, When sync runs, Then the server operations are idempotent and produce no inconsistent state.
Low-Storage Alerts & Guided Cleanup
"As a patient, I want clear warnings and a quick way to free space so that recording and syncing continue without interruption."
Description

Detect approaching storage limits and provide proactive, non-intrusive alerts at configurable thresholds (e.g., 75%, 90% of quota). Offer a guided cleanup screen that surfaces the largest and lowest-priority items, estimates recoverable space, and supports one-tap deletion with undo. Include a pre-session check that warns when upcoming capture may exceed available space and suggests remediation. Ensure alerts do not interrupt active capture and that cleanup operations run safely in the background with progress feedback.
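The pre-session check described above reduces to projecting the next session's footprint and comparing it against available space plus a safety buffer. A minimal sketch, assuming the buffer is 10% of the quota as in the criteria below; parameter names are illustrative:

```python
# Hypothetical sketch of the pre-session storage check: project the session's
# size from bitrate and expected length, add a 10% quota safety buffer, and
# warn when that exceeds available app storage.

def preflight_ok(avail_bytes: int, quota_bytes: int,
                 bitrate_bps: int, session_s: float) -> bool:
    projected = bitrate_bps / 8 * session_s   # bytes the session will need
    buffer = 0.10 * quota_bytes               # 10% quota safety buffer
    return projected + buffer <= avail_bytes
```

A False result would trigger the warning sheet with remediation actions (guided cleanup, lower quality, adjusted retention) rather than silently starting capture.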

Acceptance Criteria
Configurable Low-Storage Threshold Alerts
Given the app storage quota has alert thresholds configured at 75% and 90% When used storage crosses 75% for the first time in a 24-hour period Then display a non-intrusive in-app banner within 1 second showing the exact percent used and a "Review Cleanup" action And do not display a modal dialog And mark the 75% alert as delivered until used storage drops below 70% or 24 hours elapse, whichever occurs first When used storage crosses 90% Then display a persistent, non-blocking banner and badge within 1 second, with "Review Cleanup" and "Snooze 6h" actions And mark the 90% alert as delivered until used storage drops below 85% or 24 hours elapse, whichever occurs first
Alerts Never Interrupt Active Capture
Given an exercise capture session is active When any low-storage threshold is crossed Then do not pause, stop, or reconfigure the camera session And do not play sounds or haptics And show only a subtle indicator outside the video preview area without obscuring feedback overlays And ensure the app records continuous video snippets and rep counting events with no gaps attributable to the alert
Guided Cleanup Surfaces Lowest-Priority, Largest Items First
Given the user opens the Guided Cleanup screen Then list items sorted by Impact (largest size first) and Priority (lowest priority first) And label each item with size, age, and priority (Critical, Important, Low) where Low includes clean-set summaries and unflagged content And display an estimated Recoverable Space total that updates as items are selected And provide Select Top Space Savers to auto-select items totaling at least 500 MB or the top 10 items, whichever is larger, favoring Low-priority items And preserve Critical items (form flags and milestones) unselected by default with a confirmation if the user explicitly selects them
Safe Background Cleanup With Progress and Undo
Given the user confirms deletion When cleanup starts Then perform deletions in the background even if the user navigates away from the screen And show a progress indicator with files deleted count and MB freed, updating at least every 1 second And allow undo for 10 seconds after completion of each batch; undo fully restores metadata and files And if any item fails to delete, continue with remaining items and report the count of failures with a retry option And never delete items not selected for deletion
Pre-Session Storage Check and Remediation Suggestions
Given the user taps Start Capture When projected storage required for the next session (based on current quality and average session length setting) plus a 10% quota safety buffer exceeds available app storage Then present a pre-session warning sheet that does not start capture yet And show at least three remediation actions: Open Guided Cleanup, Lower Video Quality, Adjust Snippet Retention And allow the user to proceed anyway via Proceed Anyway with an explicit confirmation if risk remains And if the user takes a remediation action, recompute and dismiss the warning automatically once sufficient space is available
Alert Rate Limiting, Snooze, and Audit
Given a low-storage alert has been displayed When the user taps Snooze 6h Then suppress the same-threshold alert for 6 hours across app restarts unless storage crosses the next higher threshold When storage drops below the threshold by at least 5 percentage points and later crosses it again Then re-enable the alert And log each alert delivery and dismissal with timestamp, threshold, and storage percent for diagnostics
Privacy-First Redaction & Consent Controls
"As a patient, I want faces and backgrounds protected and control over sharing so that my privacy is preserved when I send clips to my clinician."
Description

Apply on-device privacy measures to all snippets, including optional face blurring, background masking/cropping to the exercise region, and audio-off by default. Encrypt clips at rest and in transit and restrict default sharing to the assigned clinician unless explicit consent is granted. Provide per-patient and per-clinic settings for capture, retention duration, and sharing permissions, with visible indicators showing when privacy filters are active. Maintain audit metadata for when a clip was captured, processed, shared, or deleted. Ensure all privacy controls integrate seamlessly with capture, storage, and upload pipelines.

Acceptance Criteria
On-Device Redaction Defaults During Snippet Capture
- Given a Smart Snippet is captured, When processing completes, Then the audio track is absent by default unless the patient/clinic setting "Audio On" is true.
- Given face blurring is enabled, When one or more faces are detected, Then all detected faces are blurred at an intensity ≥ Gaussian sigma 10 or equivalent pixelation before any local storage.
- Given background masking/cropping is enabled, When the exercise region is identified, Then pixels outside the region are masked with a solid color or the frame is cropped prior to local storage.
- Given privacy settings are applied on-device, When the device is offline, Then all redactions occur locally with no data sent to external services.
- Given redaction settings are Off for a patient, When capture starts, Then the UI indicates filters are Off and no redaction is applied, and this state is reflected in the clip metadata.
Encryption at Rest and In Transit for Snippets
- Given a snippet is written to disk, Then it is encrypted at rest using AES-256-GCM with a key stored in the OS-secure keystore; direct file access without app auth yields unreadable content.
- Given a snippet upload occurs, Then transport uses TLS 1.2+ with certificate pinning; a simulated MITM with an unpinned certificate causes the connection to fail.
- Given application re-install or key rotation, When new snippets are saved, Then they are encrypted with the new key while previously stored snippets remain decryptable via wrapped key material.
- Given the device is locked, When background uploads run, Then keys are only accessible if the OS permits keystore use under lock; otherwise uploads are deferred without plaintext exposure.
Consent-Gated Sharing and Revocation Enforcement
- Given no explicit consent beyond the assigned clinician, When the user attempts to share a snippet to any other recipient, Then the share action is blocked and a message explains consent requirements.
- Given explicit consent is granted with defined recipients and scope/duration, When uploads occur, Then only the consented recipients receive access within the consent window.
- Given consent is revoked, When subsequent uploads or share retries occur, Then access for revoked recipients is denied and queued shares to them are purged.
- Given clinician assignment changes, When default sharing targets are evaluated, Then they update to the new assignment and an audit entry records the change.
Hierarchical Privacy Settings and Retention Enforcement
- Given clinic defaults and patient-specific overrides, When conflicting settings exist, Then patient overrides take precedence for that patient only.
- Given retention is set to N days, When N days elapse since capture and the clip is not under legal hold, Then the clip is automatically deleted and the deletion is audit-logged.
- Given settings are changed at time T, When captures start after T, Then the new settings are applied; captures started before T continue with prior settings.
- Given sharing permission is toggled Off, When the pipeline triggers an upload, Then the upload is skipped and an audit entry records the skip reason.
Visible Privacy Indicators During Recording and Playback
- Given recording starts with any privacy filter active, Then on-screen indicators (e.g., muted mic, face-blur icon) appear within 500 ms and remain visible during capture.
- Given playback of a stored snippet, Then the UI displays a banner or badge listing which privacy filters were applied, sourced from clip metadata.
- Given a required filter fails initialization, Then a warning is shown and capture is blocked until the issue is resolved; no unredacted clip is saved.
- Given device accessibility settings are enabled, Then all indicators have text labels and meet WCAG 2.1 AA contrast ratios.
Complete Audit Trail for Clip Lifecycle Events
- Given any lifecycle event (captured, processed, shared, deleted), Then an audit record is appended within 1 second containing: UTC ISO-8601 timestamp, actor (system/patient/clinician ID), event type, clip ID, consent ID if applicable, and SHA-256 of the redacted payload.
- Given an authorized admin requests an audit export for a date range, Then a JSON or CSV export is generated and delivered encrypted; integrity is verifiable via a top-level checksum.
- Given an integrity check runs, Then the append-only audit log with hash chaining detects any tampering attempts and reports discrepancies.
- Given a legal hold is applied to a clip, Then deletion is prevented until hold removal and each state change is audit-logged.
Pipeline Atomicity and Failure Handling
- Given redaction processing fails at any step, Then no raw frames or audio are written to persistent storage; temporary artifacts are securely wiped and the user is notified within 2 seconds.
- Given an upload fails transiently, Then the redacted, encrypted clip is queued with exponential backoff and an idempotency key prevents duplicate server records.
- Given the app crashes during processing or upload, Then on next launch, the app securely cleans up partial files before resuming queued operations.
- Given device storage is constrained, Then privacy settings are never downgraded; the system prunes non-critical footage first and blocks new capture rather than saving unredacted media.
Snippet Timeline in Clinician Dashboard
"As a clinician, I want a searchable timeline of snippets with rich context so that I can pinpoint coaching moments efficiently."
Description

Surface all captured snippets in a chronological, patient-specific timeline within the clinician dashboard. Provide filters by tag (error type, milestone, clean summary), exercise, and date, along with quick playback, variable speed, and frame-by-frame controls. Display key metrics (rep number, error classification, severity) overlaid on playback and enable lightweight annotations and bookmarks that sync back to the patient’s plan. Support bulk export/share for case reviews and integrate snippet counts and top error types into the existing session summary widgets.

Acceptance Criteria
Chronological Timeline Display per Patient
Given a clinician is viewing a specific patient's dashboard with Smart Snippets captured When the clinician opens the Snippet Timeline Then the timeline lists only that patient's snippets in reverse chronological order by capture timestamp And each snippet card shows capture timestamp (clinic-local time), exercise name, and tag (error type, milestone, or clean summary) And the timeline supports continuous scrolling or pagination to access all snippets And if the patient has no snippets, an empty-state message is shown
Multi-Filtering by Tag, Exercise, and Date
Given the Snippet Timeline is visible and snippets exist When the clinician applies filters:
- Tag: one or more of [error type(s), milestone, clean summary]
- Exercise: one or more exercises
- Date: a start and end date
Then only snippets matching all selected categories are shown (OR within a category, AND across categories) And active filters are displayed as removable chips And Clear All resets to the unfiltered timeline And the filter state persists while navigating within the patient's dashboard
Quick Playback with Variable Speed and Frame-by-Frame Controls
Given a snippet is selected from the timeline When the inline player opens Then the clinician can play/pause, scrub within the snippet, select playback speed [0.5x, 1x, 1.5x, 2x], and step forward/backward by a single frame And frame stepping advances exactly one frame at the snippet's native frame rate And playback is constrained to the snippet's start and end boundaries
Overlay of Key Metrics on Playback
Given an open snippet player with available metadata When playback starts Then an overlay displays rep number, error classification, and severity And the overlay remains synchronized to the currently displayed frame/time And severity is visually coded and labeled textually for accessibility And the overlay can be toggled on or off without affecting playback
Annotations and Bookmarks Sync to Patient Plan
Given an open snippet player When the clinician adds an annotation at a timestamp and/or adds a bookmark Then the item is saved with snippet reference, timestamp, author, and text (for annotations) And the clinician can edit or delete their items And if Share with patient is selected, the annotation/bookmark appears in the patient's plan linked to the associated exercise and snippet And all items persist and reappear after reload and sync across devices
Bulk Export/Share for Case Reviews
Given the Snippet Timeline is visible When the clinician multi-selects snippets and chooses Export/Share Then the clinician can download a ZIP containing MP4s of the selected snippets and a CSV with per-snippet metadata (snippet_id, patient_id, timestamp_iso, exercise, tag/error, severity, rep_number, duration_sec) And the clinician can generate a time-limited share link to the ZIP And a confirmation with item count is shown on success and an error state is shown on failure
Session Summary Integration of Snippet Counts and Top Error Types
Given the session summary widget is visible for a patient When Smart Snippets exist for the selected session/date range Then the widget displays total snippet count and top error types (up to three) for that range And counts and error-type tallies equal those derived by applying the same range to the Snippet Timeline And clicking a count or error type drills down to the timeline with corresponding filters applied And if no snippets exist, the widget displays zeros and hides the top error list
Network-Aware Upload & Sync Queue
"As a patient, I want uploads to adapt to my network conditions so that sharing snippets doesn’t drain data or battery."
Description

Queue snippets locally and upload using a bandwidth- and battery-aware strategy that prefers Wi‑Fi, supports user-configurable cellular use, and pauses/resumes safely. Compress uploads with target bitrates and chunked transfers, retry with exponential backoff, and deduplicate by content hash to prevent duplicates. Encrypt in transit and confirm server persistence before marking items as synced; optionally delete local copies based on retention policy. Expose clear sync status indicators and per-clip states (queued, uploading, failed, synced) to both patient and clinician views.

Acceptance Criteria
Network Preference & User-Controlled Cellular Use
Given Wi‑Fi and cellular are available and the setting "Use cellular for uploads" is Off When the sync queue starts Then uploads use Wi‑Fi only and no bytes are sent over cellular.
Given Wi‑Fi is unavailable and "Use cellular for uploads" is On When the sync queue starts Then uploads proceed over cellular.
Given an upload is in progress on Wi‑Fi and Wi‑Fi drops while "Use cellular for uploads" is Off When connectivity switches to cellular Then the upload pauses within 3 seconds and the item remains queued until Wi‑Fi returns.
Given measured upstream throughput is below 1 Mbps for 10 seconds When multiple items are queued Then the client limits concurrency to 1 active upload and reduces per‑chunk size to 512 KB.
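The gating rules above reduce to a small policy function. A sketch only: the pause behavior and the low-throughput limits (1 active upload, 512 KB chunks) come from the criteria, while the healthy-path defaults of 3 concurrent uploads and 4 MB chunks are assumptions borrowed from the fixed chunk size mentioned later in this feature.

```python
def upload_policy(wifi_up, cellular_up, allow_cellular,
                  upstream_mbps, low_throughput_s):
    """Return (allowed, max_concurrency, chunk_kb) for the sync queue."""
    allowed = wifi_up or (cellular_up and allow_cellular)
    if not allowed:
        # Pause: items stay queued until a permitted network returns.
        return (False, 0, 0)
    if upstream_mbps < 1.0 and low_throughput_s >= 10:
        # Sustained slow link: one active upload, 512 KB chunks.
        return (True, 1, 512)
    # Healthy path: assumed defaults (3 uploads, 4 MB chunks).
    return (True, 3, 4096)
```

For example, cellular-only connectivity with the cellular toggle Off yields a full pause, while a Wi‑Fi link measured below 1 Mbps for 10 seconds drops to a single 512 KB-chunked upload.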
Battery-Aware Throttling and Pausing
Given battery level is at or below 20% and the device is not charging or Low Power Mode is enabled When sync is active Then uploads pause and each item shows state "Paused: Low battery".
Given the charger is connected or battery rises above 25% and Low Power Mode is off When sync is active Then uploads resume within 10 seconds from the last acknowledged chunk.
Given battery level is between 21% and 25% and the device is not charging When sync is active Then upload concurrency is limited to 1 and paced to a maximum upstream of 500 KB/s.
Safe Pause/Resume with Chunked Transfers
Given fixed upload chunk size of 4 MB and server support for resumable uploads When the app is force-closed mid-upload Then upon relaunch and network availability the upload resumes from the last acknowledged byte with no re-upload of confirmed chunks.
Given airplane mode is enabled during an upload When connectivity is lost Then the upload transitions to "Paused" within 3 seconds and automatically resumes within 10 seconds after connectivity returns.
Given a clip completes upload When the server-reported content hash of the assembled object differs from the client hash Then the client retries the finalization step up to 3 times and marks the item "Failed: integrity mismatch" if unresolved.
Compression Targets for Snippet Types
Given a flagged form-error snippet (3–5s) When prepared for upload Then it is encoded at 720p max with target video bitrate 1.5 Mbps ±15% and keyframe interval ≤2s.
Given a clean-set summary snippet (3–5s) When prepared for upload Then it is encoded at 540p max with target video bitrate 0.8 Mbps ±15% and keyframe interval ≤2s.
Given the source resolution is below the target max When encoding Then the encoder does not upscale beyond source resolution.
Given encoding completes When the file is inspected Then the container is MP4 with H.264 video and the average bitrate meets the target band.
Exponential Backoff Retries and Failure Handling
Given a transient network error (timeout, 429, or 5xx) occurs When retrying the upload Then backoff delays start at 2s and double each attempt with 20% jitter up to a max delay of 120s and a max of 7 attempts per item.
Given a retry subsequently succeeds When uploads continue Then the attempt counter resets and the item proceeds to completion without user intervention.
Given all retry attempts are exhausted without success When the item cannot be uploaded Then the item is marked "Failed" with an error code, is eligible for manual Retry, and the queue advances to the next item.
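The retry schedule above (2 s base, doubling, 20% jitter, 120 s cap, 7 attempts) can be expressed as a short generator; injecting the random source keeps the schedule deterministic under test. A sketch under those stated parameters:

```python
import random

def backoff_delays(max_attempts=7, base_s=2.0, cap_s=120.0, jitter=0.2,
                   rng=random.random):
    """Yield one delay per retry attempt: base_s doubles each time,
    capped at cap_s, with multiplicative +/- jitter applied."""
    delay = base_s
    for _ in range(max_attempts):
        factor = 1.0 + jitter * (2 * rng() - 1)  # in [1-jitter, 1+jitter]
        yield min(delay, cap_s) * factor
        delay *= 2
```

With jitter fixed at its midpoint the sequence is 2, 4, 8, 16, 32, 64, 120 seconds; jitter spreads concurrent clients so queued items do not retry in lockstep.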
Content-Hash Deduplication
Given a snippet is enqueued When computing its SHA-256 over the post-compression file Then if the hash matches an item already uploaded for the same account, the client skips upload, links to the existing server object ID, and marks the item "Synced (deduplicated)".
Given the same snippet appears multiple times in the local queue When deduplication runs Then only one physical upload occurs and all duplicates are marked "Synced (deduplicated)" referencing the single server object.
Given a previously failed item with identical content is re-enqueued When deduplication runs Then the client resumes or links without creating a duplicate remote object.
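The dedup rule is essentially an index from content hash to server object ID, consulted before any physical upload. A minimal sketch; the class and method names are illustrative, and a real client would persist the index rather than hold it in memory:

```python
import hashlib

class DedupIndex:
    """Maps SHA-256 of the post-compression file to the server object
    ID so identical content is uploaded at most once per account."""
    def __init__(self):
        self._by_hash = {}

    def fingerprint(self, data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def check(self, data: bytes):
        """Return (needs_upload, existing_object_id)."""
        h = self.fingerprint(data)
        return (h not in self._by_hash, self._by_hash.get(h))

    def record(self, data: bytes, object_id: str):
        # Called after the server confirms persistence of the object.
        self._by_hash[self.fingerprint(data)] = object_id
```

A second enqueue of identical bytes then returns `needs_upload == False` together with the existing object ID, so the duplicate can be marked "Synced (deduplicated)" without touching the network.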
Secure Sync Completion, State Visibility, and Retention
Given an upload starts When establishing the connection Then TLS 1.2 or higher is required and certificate validation must succeed; otherwise the upload is aborted and the item is marked "Failed: TLS".
Given all chunks are uploaded When finalizing the upload Then the client verifies server persistence by receiving a 2xx with object ID and validating that the server-reported content hash matches the client hash before marking the item "Synced".
Given "Delete local copy after sync" is enabled When an item is marked "Synced" Then the local media file is deleted within 60 seconds while retaining metadata and thumbnail, and free device storage increases accordingly.
Given the patient and clinician views are open When item states change (Queued, Uploading with progress %, Failed with reason, Synced) Then both views display the same per-clip state within 5 seconds of the change and remain accessible to screen readers.

QuietSync

Resilient, background syncing that auto-resumes the exact byte where it left off when connectivity returns. Uses checksums, versioning, and conflict resolution to prevent duplicates and data loss—even across spotty Wi‑Fi and brief hotspots. Users see a simple “All Caught Up” confirmation without babysitting progress bars.

Requirements

Resumable Chunked Transfer
"As a patient, I want my uploads to resume exactly where they left off after a disconnect so that I don’t waste time or data re-sending my progress."
Description

Implements byte-accurate, crash-safe resumable uploads and downloads for all syncable assets (exercise videos, pose-derived telemetry, session summaries, clinician notes). Transfers are split into addressable chunks with persisted checkpoints that record the exact byte offset, enabling immediate continuation after app restarts or network loss. Supports HTTP Range requests and server-side session or multipart protocols, with adaptive chunk sizing to optimize throughput on spotty Wi‑Fi and intermittent hotspots. Checkpoints and queue state are stored transactionally in a local database to guarantee at-most-once commit semantics. Works under OS background execution constraints and resumes without user interaction, ensuring seamless data flow between MoveMate clients and the backend.

Acceptance Criteria
Crash-Safe Upload Resume After App Restart
Given an asset upload is in progress and the app crashes or is force-closed When the app restarts Then the client reads the last persisted checkpoint byte offset for that asset And the next upload request resumes at exactly that byte offset using the applicable protocol (e.g., Content-Range or upload session) And no bytes before the checkpoint are re-committed on the server (verified by server-side committed byte count) And the final server-side object checksum matches the local pre-upload checksum
Byte-Accurate Download Resume After Network Loss
Given a file download is interrupted due to connectivity loss When connectivity returns Then the client issues an HTTP Range request starting at the exact missing byte offset And the server responds with 206 Partial Content and the correct Content-Range And the reassembled file length equals the expected Content-Length And the final file checksum matches the source checksum
Adaptive Chunk Sizing on Intermittent Connectivity
Given two consecutive chunk timeouts or failures within 30 seconds When sending the next chunk Then the client reduces the chunk size by at least 50% down to a minimum of 64 KiB And given ten consecutive successful chunk acknowledgements with RTT under 300 ms When sending the next chunk Then the client increases the chunk size by up to 2x capped at 2 MiB And across a 5-minute test with alternating connectivity loss, adaptive chunking achieves at least 20% higher average throughput than a fixed 1 MiB chunk size
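The adaptation rules above can live in a small state machine that tracks consecutive failures and acknowledgements. This sketch simplifies in one place: it counts consecutive failures only and omits the 30-second failure window from the first criterion; the size bounds and thresholds are taken from the criteria.

```python
KIB = 1024
MIN_CHUNK = 64 * KIB        # floor from the criteria
MAX_CHUNK = 2 * KIB * KIB   # 2 MiB ceiling

class ChunkSizer:
    """Adapt the next chunk size from ack/failure streaks."""
    def __init__(self, size=1 * KIB * KIB):  # 1 MiB starting size (assumed)
        self.size = size
        self.fail_streak = 0
        self.ok_streak = 0

    def on_failure(self):
        self.fail_streak += 1
        self.ok_streak = 0
        if self.fail_streak >= 2:  # two consecutive timeouts/failures
            self.size = max(MIN_CHUNK, self.size // 2)
            self.fail_streak = 0

    def on_success(self, rtt_ms):
        self.fail_streak = 0
        self.ok_streak = self.ok_streak + 1 if rtt_ms < 300 else 0
        if self.ok_streak >= 10:  # ten fast acks in a row
            self.size = min(MAX_CHUNK, self.size * 2)
            self.ok_streak = 0
```

A slow ack (RTT ≥ 300 ms) resets the success streak without shrinking the chunk, so the size only moves on the streaks the criteria describe.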
Transactional Checkpointing and At-Most-Once Commit
Given a chunk upload is acknowledged by the server When the client persists the new checkpoint and advances the queue Then the checkpoint and queue state are written atomically in a single database transaction And if the app crashes after server ack but before transaction commit Then upon restart the client may retry the last chunk once and the server idempotently accepts or rejects it without duplicating bytes (no increase in committed byte count) And in a 100-iteration crash-injection test no transfer exhibits duplicated or skipped bytes and all uploads complete successfully
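The "single database transaction" requirement maps naturally onto SQLite, which both mobile platforms ship. A sketch under assumptions: the table and column names are illustrative, and the upsert syntax needs SQLite 3.24+ (bundled with all recent Python and mobile OS releases).

```python
import sqlite3

def make_db():
    """In-memory stand-in for the local sync database."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE checkpoint (asset_id TEXT PRIMARY KEY,"
               " byte_offset INTEGER)")
    db.execute("CREATE TABLE queue (asset_id TEXT PRIMARY KEY, state TEXT)")
    return db

def commit_chunk(db, asset_id, new_offset, done):
    """Persist the new byte offset and queue state atomically: either
    both rows change or neither does (crash between them is impossible)."""
    with db:  # one transaction; rolls back on any exception
        db.execute(
            "INSERT INTO checkpoint VALUES (?, ?) "
            "ON CONFLICT(asset_id) DO UPDATE SET byte_offset = excluded.byte_offset",
            (asset_id, new_offset))
        db.execute(
            "INSERT INTO queue VALUES (?, ?) "
            "ON CONFLICT(asset_id) DO UPDATE SET state = excluded.state",
            (asset_id, "synced" if done else "uploading"))
```

Because the checkpoint and queue rows move together, a crash after the server ack but before commit leaves both at the previous chunk, which is exactly the state from which the single idempotent retry in the criterion proceeds.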
Universal Asset Coverage Across Types
Given assets of types: exercise videos up to 2 GB, pose-derived telemetry up to 50 MB, session summaries up to 5 MB, and clinician notes up to 1 MB When each asset type is queued for sync under variable connectivity Then uploads and downloads for each asset use chunked, resumable transfers And small assets that fit in a single chunk still record a final checkpoint And content-type and content-length metadata are preserved end-to-end And integrity for each asset is verified by checksum equality after transfer
Background Resume Under OS Constraints Without User Interaction
Given a 500 MB video upload in progress and the user backgrounds the app And connectivity drops for up to 10 minutes and later returns When the OS next permits background execution or the app is foregrounded Then the upload resumes from the last checkpoint without any user interaction And total duplicate bytes retransmitted due to resume equal 0 And the transfer completes with the same final checksum as the source
HTTP Range and Upload Session Protocol Compliance
Given a partial download resume is required When requesting remaining bytes Then the client sends a correct HTTP Range header and validates a 206 Partial Content response with accurate Content-Range And the assembled artifact matches the expected checksum
Given an interrupted upload using a server-side session or multipart protocol When resuming Then the client queries the server for the current committed offset and aligns within one request And if an offset mismatch occurs the client reconciles to the server-reported offset without creating orphaned parts And no orphaned partial uploads remain after finalization as verified by server listing
End-to-End Integrity Checksums
"As a clinician, I want the app to verify data integrity with checksums so that I can trust rep counts and videos are accurate and uncorrupted."
Description

Provides integrity verification for every transfer using per-chunk and whole-object checksums (e.g., SHA-256). The client computes and transmits chunk digests; the server validates on receipt and returns expected digests for downloads. Mismatches trigger targeted re-transfers of only the affected chunks, preventing silent corruption and avoiding full-file retries. Final object digests are recorded for auditability and future validation. The mechanism is transparent to users and integrated into the transfer pipeline, ensuring rep counts, videos, and care-plan documents arrive unaltered.

Acceptance Criteria
Upload Per-Chunk Checksum Validation and Targeted Retransmission
Given the client is configured to compute SHA-256 digests per chunk of size S And the server validates each uploaded chunk against the provided digest When a single chunk’s bytes are corrupted in transit Then the server rejects only that chunk and requests retransmission identifying the chunk index And the client retransmits only the mismatched chunk And total retransmitted bytes equal the size of the mismatched chunk And the upload completes without re-sending previously validated chunks
Download Checksum Validation with Server-Provided Digests
Given the server provides a manifest of expected SHA-256 digests for each chunk and the final object When the client downloads the object and computes a digest for each received chunk Then matching chunks are accepted and written to a temporary file And any chunk with a digest mismatch is discarded and only that chunk is re-requested up to 3 retries And upon completion, the client computes the final object digest and compares to the manifest And if the final digest matches, the file is atomically committed; otherwise the download fails and the temp file is deleted
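Comparing a received payload against the server-provided manifest reduces to a per-chunk digest diff whose output is exactly the set of chunk indices to re-request. A minimal sketch of that comparison (function names are illustrative):

```python
import hashlib

def chunk_digests(data: bytes, chunk_size: int):
    """SHA-256 hex digest of each fixed-size chunk of data."""
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

def mismatched_chunks(received: bytes, expected_digests, chunk_size):
    """Indices of chunks whose digest differs from the manifest;
    only these chunks need to be re-requested."""
    actual = chunk_digests(received, chunk_size)
    return [i for i, (a, e) in enumerate(zip(actual, expected_digests))
            if a != e]
```

Corrupting one chunk therefore yields a single index, which is the property the "No Full-File Retry on Partial Corruption" criterion below tests for.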
Exact-Byte Resume with Checksum Continuity
Given an upload was interrupted at byte offset X within chunk k And the server has persisted all bytes prior to offset X When connectivity returns and the client resumes the upload Then the client resumes from byte offset X without duplicating previously accepted bytes And the server verifies the completed chunk k by recomputing the SHA-256 across the full chunk bytes and matching the client’s advertised chunk digest And previously validated chunks (0..k-1) are not re-uploaded And total retransmitted bytes are less than or equal to (chunk_size - X)
Whole-Object Digest Recording and Audit Retrieval
Given an object upload completes with all chunks validated When the final whole-object SHA-256 digest is computed Then the system records the digest with object_id, version_id, size_bytes, algorithm, uploader_id, and timestamp And a read-only API returns these fields for the object And attempts to modify the recorded digest or metadata are rejected
No Full-File Retry on Partial Corruption
Given a file segmented into N chunks where exactly one chunk is corrupted during upload When the transfer completes with integrity checks Then the number of retransmitted chunks is less than or equal to 1 And total retransmitted bytes are less than or equal to the size of the corrupted chunk And no full-file retry occurs
User Transparency and "All Caught Up" Confirmation
Given background sync is running and integrity checks are enforced When all pending uploads and downloads pass per-chunk and whole-object validations Then the app displays "All Caught Up" And no checksum or progress details are shown to the user And transient mismatches that are auto-corrected do not produce user-visible errors And persistent failures after max retries surface a single generic sync error associated with the affected item
Future Re-Validation of Stored Objects
Given an object exists with a stored whole-object SHA-256 digest When a re-validation is initiated via an admin endpoint or scheduled job Then the server recomputes the SHA-256 over the stored bytes and compares it to the recorded value And if they match, a "Pass" result with timestamp is appended to the audit trail And if they do not match, a "Fail" result is recorded, the object is flagged for quarantine, and subsequent downloads are blocked until the issue is resolved
Conflict-Aware Versioning and Merge
"As a clinician, I want plan updates and patient edits to merge predictably across devices so that no one loses changes when we’re offline."
Description

Introduces deterministic versioning and conflict resolution across offline and multi-device edits. Records carry version stamps (e.g., hybrid logical clocks) and change types (mergeable vs non-mergeable). Mergeable fields (e.g., rep totals, adherence counters) use CRDT-style operations; non-mergeable fields default to last-writer-wins with server time ordering and optional clinician override. On conflict, the system auto-resolves when safe and surfaces a minimal, actionable review to the relevant user only when necessary. Full change history is retained for traceability. This ensures that patient-entered updates and clinician plan adjustments synchronize without data loss.

Acceptance Criteria
CRDT Merge of Rep Totals Across Offline Multi-Device Edits
Given a patient logs 12 reps offline on Device A and 8 reps offline on Device B for the same exercise When both devices reconnect and QuietSync completes Then the rep_total equals previous_total + 20, And no duplicate rep events are present in history, And the final version stamp is strictly greater than both local version stamps And the merged outcome is identical across 5 randomized arrival orders of the two update streams
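The rep-total merge behaves like a grow-only counter (G-counter) CRDT: each device tracks its own contribution, and merging takes the per-device maximum, so arrival order does not matter and replays are idempotent. A minimal sketch, assuming device IDs identify each replica:

```python
class GCounter:
    """Grow-only counter CRDT keyed by device ID."""
    def __init__(self, counts=None):
        self.counts = dict(counts or {})

    def increment(self, device_id, n=1):
        self.counts[device_id] = self.counts.get(device_id, 0) + n

    def merge(self, other):
        # Per-device max: commutative, associative, and idempotent.
        merged = dict(self.counts)
        for dev, n in other.counts.items():
            merged[dev] = max(merged.get(dev, 0), n)
        return GCounter(merged)

    def value(self):
        return sum(self.counts.values())
```

Because merge is a per-key max, syncing A-then-B, B-then-A, or the same stream twice all yield the same total, which is what the randomized-arrival-order criterion above checks.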
LWW Resolution for Non-Mergeable Exercise Instruction Text with Clinician Override
Given a clinician updates the exercise instruction text online at T1 and a patient edits the same field offline at T2 And server-time ordering places T1 after T2 When both changes sync Then the server keeps the clinician value by last-writer-wins, And the patient value is preserved in change history with author and version stamp And exactly one review card is shown to the clinician offering an "Apply patient edit" override And choosing override creates a new version reflecting the patient value and records the override event in the audit log
Mixed Merge: Mergeable Adherence Counter and Non-Mergeable Target Reps Edited Concurrently
Given a patient increments the adherence_counter by 1 offline while a clinician updates target_reps from 10 to 12 online When the record syncs and is merged Then adherence_counter equals previous + 1 (CRDT merge), And target_reps equals 12 (server-time LWW) And only the non-mergeable target_reps change generates a single review for the clinician And no review is shown to the patient, And the app displays "All Caught Up" for the patient after merge completes
Deterministic Versioning Under Clock Skew and Reordered Delivery
Given two devices with 5-minute clock skew make concurrent edits to the same record while offline And updates are delivered to the server in 10 different randomized orders When merges are applied using hybrid logical clock (HLC) metadata Then the final record state is identical across all orders, And per-record version stamps are monotonic, And re-applying the same updates is idempotent (the record hash is unchanged), And no merge produces two different winners for the same non-mergeable field
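A hybrid logical clock stamp is a (wall time, logical counter, node ID) triple compared lexicographically, which is what makes the ordering deterministic under clock skew. The sketch below follows the standard HLC send/receive update rules; the millisecond granularity and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class HLC:
    """Hybrid logical clock stamp; field order defines the comparison."""
    wall_ms: int
    logical: int
    node: str

def hlc_send(local: HLC, now_ms: int) -> HLC:
    """Advance the clock for a locally generated event."""
    wall = max(local.wall_ms, now_ms)
    logical = local.logical + 1 if wall == local.wall_ms else 0
    return HLC(wall, logical, local.node)

def hlc_recv(local: HLC, incoming: HLC, now_ms: int) -> HLC:
    """Advance the clock when applying a remote update."""
    wall = max(local.wall_ms, incoming.wall_ms, now_ms)
    if wall == local.wall_ms == incoming.wall_ms:
        logical = max(local.logical, incoming.logical) + 1
    elif wall == local.wall_ms:
        logical = local.logical + 1
    elif wall == incoming.wall_ms:
        logical = incoming.logical + 1
    else:
        logical = 0
    return HLC(wall, logical, local.node)
```

Even if a device's physical clock runs behind (the `now_ms` argument), a new stamp is always strictly greater than every stamp it has seen, giving the monotonic per-record versions the criterion requires.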
Full Change History Visibility and Safe Revert
Given a record with at least 5 prior versions including merges When a clinician opens history Then each version shows version stamp, timestamp, author, change summary, and change type (mergeable/non-mergeable) And selecting "Revert to vN" creates a new version vN+1 that matches the snapshot of vN without deleting intermediate versions And the audit log records the revert actor, timestamp, and reason And mergeable counters remain consistent (no double-application of past increments) after the revert
Minimal, Targeted Conflict Review and Notification Scope
Given a conflict that cannot be auto-resolved safely occurs When QuietSync completes Then only the responsible role (e.g., clinician for plan conflicts; patient for their own profile conflicts) sees a single actionable review card within 2 seconds And no other roles receive a notification for that conflict, And dismissing or resolving the card syncs its state across the user’s devices And the card shows only the field, current value, proposed value, and Accept/Reject actions
Idempotency and Deduplication
"As a clinic administrator, I want the system to avoid duplicate records during retries so that reporting and patient histories remain accurate."
Description

Ensures retries never create duplicate records by using client-generated UUIDs and idempotency keys for all mutations. The server treats repeated submissions with the same key as safe replays, returning the original result. The client also performs local deduping of transient events (e.g., form-error flags emitted rapidly) using content hashes and time windows. This eliminates duplicate sessions, messages, or telemetry caused by unstable networks, keeping clinician dashboards and patient histories clean and accurate.

Acceptance Criteria
Retried Mutation Returns Original Result Without Duplicates
Given a client-generated UUID sessionId and idempotency key K for POST /sessions And the initial request was committed server-side but the client did not receive the response When the client retries with the same body and idempotency key K Then the server returns HTTP 200 with the original response body including sessionId and version And the database contains exactly one session with sessionId And the response headers include Idempotency-Key: K and Idempotency-Replayed: true
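The server-side behavior above is an idempotency record keyed by the client's key: first use executes the mutation and caches the result, an identical replay returns the cached result, and a reused key with a different payload is rejected. A dict-backed sketch standing in for the real database (the error code comes from the criteria below; everything else is illustrative):

```python
import hashlib
import json

class IdempotencyStore:
    """Execute each mutation at most once per idempotency key."""
    def __init__(self):
        self._records = {}  # key -> (payload_checksum, result)

    def apply(self, key, payload, mutate):
        checksum = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if key in self._records:
            stored_sum, result = self._records[key]
            if stored_sum != checksum:
                # Same key, different body: refuse rather than guess.
                return {"status": 409,
                        "error": "IDEMPOTENCY_PAYLOAD_MISMATCH"}
            return {"status": 200, "replayed": True, "result": result}
        result = mutate(payload)  # runs exactly once per key
        self._records[key] = (checksum, result)
        return {"status": 200, "replayed": False, "result": result}
```

A client that times out and retries with the same key and body gets the original result back with a replay marker, and the mutation count does not grow.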
Concurrent Duplicate Submissions Are Coalesced
Given two identical mutation requests with the same idempotency key K arrive concurrently When the server processes them Then exactly one mutation is executed And both responses include the same resource identifier and version And both requests return HTTP 200 And system metrics record one execution and one replay for K
Client Dedupes Rapid Form-Error Telemetry
Given N >= 2 identical form-error events (same exerciseId, field, errorCode, and content hash) occur within a 3-second rolling window When the client publishes telemetry Then only one event for that dedup key is sent upstream within the window And local analytics log the dedup action with the dedup key and window duration And upstream receives at most one such event per 3-second window
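Client-side dedup of bursty telemetry is a content-hash key plus a time check. The sketch below measures the window from the last emitted event, which is one reasonable reading of the 3-second rolling window; the key derivation and class name are illustrative.

```python
import hashlib

class TelemetryDeduper:
    """Suppress identical events emitted within the rolling window."""
    def __init__(self, window_s=3.0):
        self.window_s = window_s
        self._last_sent = {}  # dedup key -> time of last emission

    def should_send(self, event: dict, now_s: float) -> bool:
        # Hash the identifying fields (e.g., exerciseId, errorCode).
        key = hashlib.sha256(
            repr(sorted(event.items())).encode()).hexdigest()
        last = self._last_sent.get(key)
        if last is not None and now_s - last < self.window_s:
            return False  # duplicate within window: drop locally
        self._last_sent[key] = now_s
        return True
```

A burst of identical form-error flags therefore produces one upstream event per window, while an event that differs in any identifying field hashes to a new key and passes through.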
Idempotency Key With Different Payload Yields Conflict
Given a prior successful mutation exists for idempotency key K with checksum C1 When a request arrives with the same K but a different checksum C2 Then the server returns HTTP 409 Conflict with error code IDEMPOTENCY_PAYLOAD_MISMATCH And the response echoes the original checksum C1 and resource identifier And no new record is created and no existing record is modified
Replay Window Enforcement For Idempotency Keys
Given an idempotency record for key K exists When an identical request with K arrives within the configured replay window of 24 hours Then the server returns the original response with Idempotency-Replayed: true And when an identical request with K arrives after 24 hours Then the server returns HTTP 422 Unprocessable Entity with error code IDEMPOTENCY_KEY_EXPIRED And no mutation is performed
Offline Queue Replays Apply Each Mutation At Most Once
Given the client queued M mutations with unique idempotency keys K1..KM while offline And some were partially transmitted before disconnect When connectivity is restored and the client replays the queue Then each mutation is applied at most once on the server And the final server state reflects exactly one application per idempotency key And responses for retried submissions include Idempotency-Replayed: true
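The client-side telemetry dedup criterion above can be sketched as a rolling-window filter keyed by a content hash of the event's identifying fields. This is an illustrative sketch, assuming `exerciseId`, `field`, and `errorCode` are the identifying fields and a 3-second window, as in the criterion; class and method names are hypothetical.

```python
import hashlib
import json

WINDOW_SECONDS = 3.0  # rolling dedup window from the acceptance criterion


class TelemetryDeduper:
    """Suppresses identical transient events within a rolling time window.
    The dedup key is a content hash of the event's identifying fields."""

    def __init__(self, window: float = WINDOW_SECONDS):
        self.window = window
        self._last_sent = {}  # dedup key -> timestamp of last upstream send

    @staticmethod
    def dedup_key(event: dict) -> str:
        ident = {k: event[k] for k in ("exerciseId", "field", "errorCode")}
        encoded = json.dumps(ident, sort_keys=True).encode()
        return hashlib.sha256(encoded).hexdigest()

    def should_send(self, event: dict, now: float) -> bool:
        key = self.dedup_key(event)
        last = self._last_sent.get(key)
        if last is not None and now - last < self.window:
            return False  # duplicate within the window: drop locally
        self._last_sent[key] = now
        return True
```

Because only events that are actually sent update the timestamp, at most one event per dedup key reaches the upstream in any window.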
Adaptive Offline Sync Scheduler
"As a patient, I want the app to sync quietly in the background while respecting my battery and data plan so that it doesn’t disrupt my day."
Description

Provides a background-first scheduler that prioritizes small, critical items (rep counts, flags) ahead of large media, with exponential backoff and jitter, network-type gating (unmetered vs metered), and battery-aware behavior. Detects connectivity changes to opportunistically resume transfers, coalesces queued mutations into batches, and persists schedule state across app restarts. Honors OS background limits (iOS/Android) and respects user settings for cellular data usage, ensuring QuietSync behaves politely while remaining timely and reliable.
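The backoff policy described here (and pinned down in the acceptance criteria below: 1 s base, doubling, 60 s cap, ±50% jitter) reduces to a one-line delay function. A minimal sketch; the function name and the injectable RNG are illustrative choices:

```python
import random

BASE_DELAY = 1.0   # seconds before the first retry
MAX_DELAY = 60.0   # cap applied before jitter
JITTER = 0.5       # +/- 50%


def backoff_delay(attempt: int, rng: random.Random = random) -> float:
    """Delay before retry number `attempt` (0-based): exponential growth
    from 1 s, doubling per attempt, capped at 60 s, with +/-50% jitter."""
    base = min(BASE_DELAY * (2 ** attempt), MAX_DELAY)
    return base * rng.uniform(1 - JITTER, 1 + JITTER)
```

Applying jitter after the cap keeps retries from synchronizing across many clients while never scheduling a delay above 90 s; persisting `attempt` alongside the queue item preserves the accrued backoff across app restarts.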

Acceptance Criteria
Critical Mutations Prioritized Over Media Uploads
Given a pending queue containing at least one critical mutation (<=50 KB payload) and at least one media upload (>=1 MB), and an active network connection When the scheduler selects the next work item Then it dispatches all critical mutations before starting any media upload And if a media upload chunk is in progress when new critical mutations arrive, it completes the current chunk (<=2 MB) then schedules the critical mutations to start within 3 seconds And no critical mutation waits more than 5 seconds to start while network is available And critical mutations are processed FIFO by enqueue timestamp
Exponential Backoff with Jitter on Transient Failures
Given a request attempt fails with HTTP 429, any 5xx, or network timeout When the scheduler retries Then the delay follows exponential backoff starting at 1 second, doubling each attempt up to a maximum delay of 60 seconds And each delay includes jitter of ±50% And backoff resets to 1 second after a successful attempt to the same endpoint And the next scheduled retry time and attempt count are persisted before app backgrounding
Network-Type Gating and User Cellular Preference
Given the device is on a metered network (cellular or metered Wi-Fi) and the user setting "Use cellular for large uploads" is Off When scheduling sync Then only critical mutations (<=50 KB) are transmitted; media uploads are deferred And on unmetered networks, all eligible work (critical and media) is allowed And if the user turns the setting On while on metered, deferred media uploads start within 5 seconds And network-type changes are detected and gating is re-applied within 2 seconds without losing or duplicating queued items
OS Constraints and Battery-Aware Scheduling
Given the app is backgrounded or the screen is off When scheduling work Then on iOS, tasks run via BGTaskScheduler and complete or cancel before the expiration handler; no single task exceeds its allowed background time And on Android, work is enqueued via WorkManager with NetworkType and BatteryNotLow constraints; no unsupported background services are started And when battery <=15% or power saver is ON and the device is not charging, media uploads are paused and critical mutations continue And when charging or battery >=20% and power saver is OFF, deferred work resumes within 10 seconds And no wake-lock or background execution warnings are logged during a 30-minute background test
Connectivity Loss and Byte-Accurate Resume
Given a media upload is interrupted after N bytes due to loss of connectivity When connectivity returns Then the scheduler resumes the upload from byte offset N using server-supported range requests (e.g., ETag/If-Range) And the completed object checksum on the server matches the local checksum And no duplicate or missing bytes are detected by server byte count or checksum And partial progress is persisted such that a full app restart still resumes from N
Schedule State Persistence Across App Restarts
Given there are queued items and scheduled retries with backoff When the app is terminated and relaunched by the OS or user Then the scheduler restores the queue, priorities, and next-run times from persistent storage within 3 seconds of app start And item identities are stable; no item is duplicated or dropped And previously accrued backoff delays are honored (no unintended reset)
Mutation Batching and Coalescing
Given multiple rep-count and form-flag mutations for the same patient/session are enqueued within a 3-second window or until 10 items accumulate, whichever comes first When the scheduler dispatches them Then it sends a single batch request containing all mutations in FIFO order And mutations on the same counter are coalesced into a single aggregated delta that matches the sum of individual mutations And the batch includes an idempotency key so that retries do not create duplicates And if the window elapses with fewer than 10 items, the batch is still sent within 3 seconds
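The batching criterion above can be sketched as a pure function that folds queued counter mutations into one request. This assumes a simple mutation shape (`session_id`, `counter`, `delta`), which is illustrative rather than MoveMate's actual wire format:

```python
import uuid
from collections import OrderedDict


def build_batch(mutations: list[dict]) -> dict:
    """Coalesces queued counter mutations into one batch request.
    Mutations on the same (session, counter) pair are merged into a single
    summed delta, preserving first-seen (FIFO) order; the batch carries an
    idempotency key so server-side retries are safe."""
    coalesced = OrderedDict()  # (session_id, counter) -> summed delta
    for m in mutations:
        key = (m["session_id"], m["counter"])
        coalesced[key] = coalesced.get(key, 0) + m["delta"]
    return {
        "idempotency_key": str(uuid.uuid4()),
        "mutations": [
            {"session_id": s, "counter": c, "delta": d}
            for (s, c), d in coalesced.items()
        ],
    }
```

Summing deltas (rather than sending last-write-wins values) is what makes the coalesced batch equal the sum of the individual mutations, as the criterion requires.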
Authentication Continuity in Background
"As a user, I want syncing to continue even if my session refreshes in the background so that I stay up to date without needing to sign in again."
Description

Maintains uninterrupted sync by securely handling token refresh and reauthentication during background execution. Access tokens are rotated proactively; 401/403 responses pause the queue, perform silent refresh when possible, and resume without user intervention. Tokens are stored in the platform’s secure enclave/keystore, and all background requests use the latest credentials. Graceful degradation is provided when accounts are revoked or passwords change, with clear, minimal prompts only when user action is required. This prevents sync stalls due to session expiry.
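The "coalesce concurrent refresh attempts" behavior can be sketched as single-flight refresh: callers that discover an expired token share one refresh call instead of racing. A simplified sketch using a lock and a token version counter; `refresh_fn` stands in for the real OAuth refresh request, and all names are hypothetical.

```python
import threading


class TokenRefresher:
    """Single-flight token refresh: concurrent callers holding the same
    stale token version share one refresh network call."""

    def __init__(self, refresh_fn):
        self._refresh_fn = refresh_fn
        self._lock = threading.Lock()
        self._token = None
        self._version = 0

    def get_token(self, current_version: int):
        with self._lock:
            # Another caller may have refreshed while we waited for the lock
            if self._version > current_version and self._token is not None:
                return self._token, self._version
            self._token = self._refresh_fn()  # exactly one network call
            self._version += 1
            return self._token, self._version
```

Requests started after a refresh carry the new version's token; requests that were in flight with the old token and fail with 401/403 simply call `get_token` again with their stale version and receive the already-refreshed credential.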

Acceptance Criteria
Silent Token Refresh During Background Sync
Given the app is in background with an active sync task and a valid refresh token in secure storage And the access token expires within 5 minutes or a 401/403 is received When the refresh flow is triggered Then a new access token is obtained without presenting any UI And the failed request is retried within 2 seconds using the new token and succeeds if the server is available And the sync task continues processing the queue without user interaction And the new token is persisted to the secure enclave/keystore
401/403 Queue Pause and Replay
Given a background request returns 401 or 403 When the response is processed Then the sync queue pauses before issuing any further requests And a silent token refresh is attempted up to 3 times with exponential backoff over 30 seconds And upon successful refresh, the original request is retried exactly once with the same idempotency key And the queue resumes and processes remaining items And no duplicate writes occur on the server for the retried operation
Secure Storage of Credentials
Given tokens are stored in the platform secure enclave/keystore When tokens are created, updated, or read Then storage uses hardware-backed, non-exportable keys (Secure Enclave/StrongBox where available) And tokens are readable only by the app process and after device unlock per platform policy And tokens are never written to logs, crash reports, or analytics And on uninstall or keychain reset, tokens are irrecoverably deleted
Latest Credentials on All Background Requests
Given a token refresh produces a new token T2 at time t_refresh When any background request is started after t_refresh Then its Authorization header uses T2 And no background request started after t_refresh contains the previous token And requests started before t_refresh may complete; if they fail with 401/403 they are retried after refresh using T2
Proactive Token Rotation Pre-Expiry
Given an access token with expiry t_exp When the device has network connectivity and time is within 5 minutes of t_exp Then the app refreshes the token once in the background before expiry And concurrent refresh attempts are coalesced so only one refresh occurs And no more than one proactive refresh is attempted within a 10-minute window And if offline within the 5-minute window, the refresh is attempted within 2 seconds of connectivity restoration
Graceful Degradation on Revocation or Password Change
Given a refresh attempt returns invalid_grant, revoked_client, or an equivalent account-revoked/password-changed error When processing the background sync queue Then the queue is stopped and marked "auth required" And no further background requests are sent until reauthentication succeeds And the next foreground session shows a single minimal prompt guiding the user to reauthenticate And the sync status shows a clear message (e.g., "Sign in required") instead of "All Caught Up"
User Prompts Only When Action Is Required
Given transient network errors or timeouts occur during refresh or request retry When the auth state is still valid or recoverable without user input Then no user prompt, notification, or login screen is shown And the system retries silently according to backoff policy And only when the server definitively indicates credentials are invalid does a prompt appear, at most once per app foreground session
Minimalist ‘All Caught Up’ UX with Diagnostics
"As a patient, I want a simple “All Caught Up” message and only actionable alerts when needed so that I don’t have to monitor progress bars."
Description

Delivers a single, trustworthy sync state indicator that reads “All Caught Up” when no items are pending, avoiding progress bars and micro-status noise. When attention is required, users see concise, actionable alerts with one-tap retry. A hidden Diagnostics panel offers last sync time, pending counts, recent errors, and network status for support scenarios, with no PHI in logs. The UX is accessible, localized, and consistent across platforms, reinforcing confidence that QuietSync works without constant monitoring.
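Because there is exactly one user-facing state, it can be derived as a pure function of queue facts. A sketch under the assumptions in this section and its criteria (the "Sign in required" and "Working in background" strings appear in the criteria; the function shape is illustrative):

```python
def sync_status(pending: int, auth_required: bool,
                syncing_seconds: float) -> dict:
    """Derives the single user-facing sync state from queue facts.
    Returns one primary string plus an optional non-intrusive hint;
    there is deliberately no progress percentage to expose."""
    if auth_required:
        return {"primary": "Sign in required", "hint": None}
    if pending == 0:
        return {"primary": "All Caught Up", "hint": None}
    hint = "Working in background" if syncing_seconds > 15 else None
    return {"primary": "Syncing...", "hint": hint}
```

Deriving the display from state (rather than emitting per-item status events) is what keeps the indicator free of micro-status noise and makes the "All Caught Up" transition trivial to trigger the moment the queue empties.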

Acceptance Criteria
Display 'All Caught Up' when no pending sync items
Given the device is online or has just reconnected and the QuietSync queue is empty When background sync completes or the app returns to foreground Then the primary sync indicator shows the localized "All Caught Up" string with a check icon And no progress bars, percentages, or item-level statuses are visible And the state persists unless new items enter the queue And a screen reader announces the "All Caught Up" state
Actionable alert with one-tap retry on recoverable error
Given a recoverable sync error occurs during background or foreground sync When the user views the app Then a concise alert is shown with a single primary action labeled "Retry" And tapping Retry restarts syncing from the last verified byte And the alert auto-dismisses on successful completion And no duplicate items are created after retry
Hidden Diagnostics panel reveals support info without PHI
Given the user opens Diagnostics via an intentional gesture or menu When the Diagnostics panel renders Then it shows last sync time in local timezone, pending upload and download counts, recent error codes and messages for the last 10 events, current network status, app version and build, and sync engine version And no PHI is present in any field or log output And a Copy Diagnostics action copies only these non-PHI fields And closing the panel returns to the prior screen state
Localization of sync states and alerts
Given the device language is a supported locale When the app displays "All Caught Up", "Syncing...", or an error alert Then all strings are fully localized and pluralized correctly with a fallback to English if a translation is missing And date and time follow the device locale format And right-to-left locales mirror layout appropriately without truncation or overlap
Accessibility and contrast compliance
Given system accessibility features such as screen reader, larger text, high contrast, or reduce motion are enabled When the user interacts with the sync indicator, alerts, or Diagnostics Then elements expose correct roles, names, and hints, and focus order is logical And all actions are operable via keyboard and switch control And text contrast is at least 4.5:1, icon contrast at least 3:1, touch targets at least 44x44 points, and content scales with dynamic type without clipping And non-essential animations are disabled when Reduce Motion is enabled
Cross-platform consistency (iOS, Android, Web)
Given the same account is used across supported platforms When sync completes or errors occur Then copy, iconography, and entry points to Diagnostics are consistent within platform conventions And last sync time and pending counts match within 5 seconds across platforms And status transitions occur in the same sequence across platforms
No micro-status noise during active sync
Given one or more items are pending or actively syncing When the app is in the foreground Then display at most one compact "Syncing..." state with an indeterminate spinner and no per-item details or percentages And automatically switch to "All Caught Up" within 1 second of the queue becoming empty And if foreground syncing exceeds 15 seconds, display a non-intrusive hint "Working in background" without a progress bar

SafeHold Offline

On-device safety rules escalate repeated high-risk form flags into a temporary pause with clear next steps—try an easier variant, shorten range, or rest—until conditions improve. Logs the event for clinician review on next sync. Protects patients when no live supervision is available.

Requirements

On-device Risk Detection Engine
"As a patient performing home exercises, I want the app to detect unsafe form in real time even when I’m offline so that I don’t injure myself without live supervision."
Description

Implements an on-device safety detection pipeline that evaluates each rep in real time to identify high-risk form deviations without internet access. Integrates MoveMate’s computer-vision outputs with exercise-specific safety thresholds and severity levels, producing a stream of risk flags consumable by escalation logic. Operates within mobile performance budgets (low latency, minimized CPU/battery), degrades gracefully under occlusion or poor lighting, and keeps raw video on-device for privacy. Supports the full exercise taxonomy and clinician-prescribed variants.
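The severity mapping implied by the acceptance criteria (1.0x the high-risk threshold → low, 1.5x → medium, 2.0x → high) can be sketched as a banded classifier. The cutoff for `critical` is not specified in this section, so the 3.0x value below is an assumption, as are the function and constant names:

```python
SEVERITY_BANDS = [  # (minimum ratio of deviation to threshold, severity)
    (3.0, "critical"),  # critical cutoff is an assumed value
    (2.0, "high"),
    (1.5, "medium"),
    (1.0, "low"),
]


def classify_severity(deviation: float, threshold: float):
    """Maps a form deviation to a severity band by its ratio to the
    exercise's high-risk threshold; returns None below the threshold
    (no risk flag is emitted)."""
    if threshold <= 0:
        raise ValueError("threshold must be positive")
    ratio = deviation / threshold
    for cutoff, severity in SEVERITY_BANDS:
        if ratio >= cutoff:
            return severity
    return None
```

Because the mapping is a deterministic function of the deviation magnitude and the loaded threshold, repeated runs over the same recorded input yield identical severities, which is what the determinism criterion checks.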

Acceptance Criteria
Realtime Rep Risk Flagging Offline
Given the device is in airplane mode and a supported exercise session with a clinician-prescribed variant is started When the user completes 20 reps over 3 minutes Then a risk evaluation is emitted for at least 95% of reps And the risk evaluation for each rep is available within 150 ms of rep end (95th percentile) And computer-vision processing maintains at least 24 FPS average during the session And zero network requests are made during the session
Risk Flag Schema and Severity Determinism
Given local safety thresholds for the exercise are loaded When deviations of 1.0x, 1.5x, and 2.0x the high-risk threshold are observed on reps 3, 4, and 5 Then the emitted flags contain fields: exercise_id, variant_id, rep_index, timestamp_ms, deviation_type, deviation_magnitude, severity in {low, medium, high, critical}, confidence in [0,1], reason_code, frame_range And the corresponding severities are low, medium, and high respectively And repeated runs over the same recorded input produce identical flag contents aside from timestamp_ms
Performance Budgets: Latency, CPU, Battery, Memory
Given a 10-minute continuous session on reference devices (iPhone 12+ and Pixel 6+) at 50% screen brightness When the on-device risk detection engine runs with video input at 30 FPS Then average CPU usage is <= 35% and peak <= 70% And 95th percentile per-frame inference latency is <= 30 ms And battery drain is <= 6% over the session And app RSS memory usage stays <= 300 MB And there are zero ANRs or crashes
Graceful Degradation Under Occlusion/Poor Lighting
Given pose confidence drops below 0.5 or frame exposure/blur quality scores are below thresholds for >= 1 second When the user continues movement Then the engine emits input_quality flags within 500 ms indicating low_visibility with confidence <= 0.5 And no high or critical risk flags are emitted while confidence < 0.6 And normal risk evaluation resumes within 1 second after confidence >= 0.6 for at least 1 second
Privacy: On-Device Media Containment
Given network connectivity is available and a 5-minute session is recorded When the engine processes video frames Then no raw frames or video are written outside the app sandbox And no network requests are made that upload image/video/pose keypoint payloads And logs contain only derived metrics and flags with no pixel data And temporary frame buffers are cleared within 5 seconds of session end
Exercise Taxonomy and Variant Overrides
Given the full exercise taxonomy and local safety configs are installed When sessions are run for each exercise and clinician-prescribed variants with overridden thresholds or ROM constraints Then 100% of exercises load a matching safety profile without fallback defaults And variant-level overrides are applied and reflected in emitted flags' threshold_ids And any missing or invalid config produces a non-blocking config_error flag and does not crash the session
Local Safety Event Logging and Sync Readiness
Given high or critical risk flags are emitted during a session When the session ends and the app is terminated and relaunched Then a safety_events log exists with entries persisted for each high/critical flag, including fields: exercise_id, variant_id, rep_index, timestamp_ms, severity, reason_code, confidence, frame_range, session_id, device_id And the log size is bounded to 10,000 events with oldest-first eviction And on next connectivity, the log is available to the sync module via a local API and entries are marked as synced upon handoff
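The bounded, oldest-first-evicting log in the last criterion maps naturally onto a fixed-capacity deque. A minimal sketch (the class shape and the `synced` marker are illustrative; real persistence and the local sync API are out of scope here):

```python
from collections import deque

MAX_EVENTS = 10_000  # bound from the acceptance criterion


class SafetyEventLog:
    """Bounded local log of high/critical safety events with
    oldest-first eviction and a marker for synced entries."""

    def __init__(self, capacity: int = MAX_EVENTS):
        # A bounded deque silently drops the oldest entry on overflow
        self._events = deque(maxlen=capacity)

    def append(self, event: dict) -> None:
        self._events.append({**event, "synced": False})

    def pending(self) -> list:
        return [e for e in self._events if not e["synced"]]

    def mark_synced(self, event_ids: set) -> None:
        for e in self._events:
            if e.get("event_id") in event_ids:
                e["synced"] = True
```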
Escalation & Hysteresis Rules
"As a clinician, I want repeated high-risk form issues to automatically trigger a pause with clear guidance so that patients stop before risking injury."
Description

Provides configurable escalation logic that aggregates repeated high-risk flags within a sliding time/rep window to trigger a SafeHold pause. Includes per-exercise and per-patient thresholds, severity weighting, hysteresis to prevent pause flapping, cooldown timers, and state persistence across app backgrounding. Integrates with the detection engine and writes structured events to the offline log.
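The core of the logic above, a weighted sliding window with hysteresis, can be sketched as follows. The weights and thresholds are the example values from the acceptance criteria below; the sketch deliberately omits the cooldown timer, clean-rep requirement, and persistence, and all names are illustrative:

```python
from collections import deque

WEIGHTS = {"low": 1, "med": 2, "high": 3}  # example weights from the criteria


class EscalationWindow:
    """Weighted sliding-window escalation with hysteresis: pause when the
    weighted sum reaches pause_threshold; allow resume only once the sum
    falls to resume_threshold or below (a lower bar, to stop flapping)."""

    def __init__(self, window_s=60.0, pause_threshold=8, resume_threshold=4):
        self.window_s = window_s
        self.pause_threshold = pause_threshold
        self.resume_threshold = resume_threshold
        self._flags = deque()  # (timestamp, weight), oldest first
        self.paused = False

    def _weighted_sum(self, now: float) -> int:
        while self._flags and now - self._flags[0][0] > self.window_s:
            self._flags.popleft()  # expire out-of-window flags
        return sum(w for _, w in self._flags)

    def on_flag(self, severity: str, now: float) -> bool:
        self._flags.append((now, WEIGHTS[severity]))
        if self._weighted_sum(now) >= self.pause_threshold:
            self.paused = True
        return self.paused

    def may_resume(self, now: float) -> bool:
        return self.paused and self._weighted_sum(now) <= self.resume_threshold
```

The gap between the pause threshold (8) and the resume threshold (4) is the hysteresis band: a state that barely crossed into Paused cannot immediately bounce back to Active.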

Acceptance Criteria
Trigger pause on threshold breach (weighted sliding window)
Given the effective configuration window_time=60s, window_reps=20, severity_weights={low:1, med:2, high:3}, escalation_threshold=8 And the patient is performing exercise E with per-patient and per-exercise overrides applied When high-risk flags with severities [low, high, med, high] occur within the active window Then the weighted sum within the window >= 8 triggers a state transition from Active to Paused And the UI displays next-step options: "Try easier variant", "Shorten range", "Rest" And an offline "pause_triggered" event is written with fields: patient_id, exercise_id, timestamp, window_type, window_values, severity_weights, escalation_threshold, observed_counts, weighted_sum, state_from=Active, state_to=Paused, app_version
Configuration precedence and effective threshold resolution
Given defaults escalation_threshold=10, exercise E override=12, patient P override=8, and precedence patient > exercise > default When P performs E Then the effective escalation_threshold is 8 And when P has no override, the effective escalation_threshold is 12 And changes to overrides take effect within 1 second and before the next window evaluation without restart
Hysteresis prevents pause flapping
Given pause_threshold=8 and resume_threshold=4 and resume_requirements: time_below_threshold>=15s AND clean_reps>=5 And the current state is Paused due to threshold breach When the weighted sum remains <= 4 for 15s and 5 consecutive reps have no medium or high severity flags Then the state transitions to Active And the system does not toggle Paused/Active more than once within any 10-second period
Cooldown after resume suppresses immediate re-trigger
Given cooldown_time=30s and cooldown_reps=10 And the state just transitioned from Paused to Active When new high-risk flags occur during the cooldown Then no new pause is triggered until both 30s have elapsed AND 10 additional reps are completed And after cooldown completion, escalation detection resumes using current configuration
State persistence across backgrounding and restore
Given current state=Paused with remaining_cooldown=12s and active window counters present When the app is backgrounded for 20s and later foregrounded offline Then state remains Paused and remaining_cooldown is reduced by elapsed real time and clamped at zero And the sliding window is advanced by 20s, expiring out-of-window flags accordingly And a "state_restored" event is logged with persisted counters, timestamps, and state_before/after
Structured offline event logging and durability
Given the device is offline and local storage has at least 1 MB free When a pause is triggered and later cleared (resumed) Then the offline log contains ordered events ["pause_triggered","pause_cleared"] with fields: sequence_id, patient_id, exercise_id, session_id, device_id, timestamps, reason, window_type, thresholds, metrics(weighted_sum, severity_counts), duration_sec(for cleared), app_version, uploadable=true And each event is fsynced within 100ms and survives an immediate app crash and device reboot And events maintain monotonic sequence_id ordering
Robust aggregation from detection engine inputs
Given the detection engine emits rep_indexed events and high-risk flags with severities {low, med, high} And frame drops and timestamp jitter up to 100ms may occur When events arrive out-of-order or with duplicates for the same rep Then the aggregator de-duplicates by rep_index, keeps the highest severity per rep, and accurately computes the window by time and/or rep count And if no detection events arrive for > 2s, the aggregator halts accumulation but does not auto-resume to Active or auto-pause; state remains unchanged
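The de-duplication rule in the last criterion (one severity per rep, keeping the highest) can be sketched as a small fold over the event stream; the function shape is illustrative:

```python
SEVERITY_RANK = {"low": 1, "med": 2, "high": 3}


def aggregate_rep_flags(events: list) -> dict:
    """Collapses possibly duplicated or out-of-order detection events to
    one severity per rep: de-duplicate by rep_index, keeping the highest
    severity observed for that rep."""
    by_rep = {}
    for e in events:
        rep, sev = e["rep_index"], e["severity"]
        current = by_rep.get(rep)
        if current is None or SEVERITY_RANK[sev] > SEVERITY_RANK[current]:
            by_rep[rep] = sev
    return by_rep
```

Keying on `rep_index` rather than timestamps is what makes the fold robust to the 100 ms timestamp jitter and out-of-order delivery the criterion allows.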
SafeHold Pause Guidance Modal
"As a patient, I want clear instructions on what to do when the app pauses me so that I can safely continue my program without guessing."
Description

Delivers a full-screen SafeHold pause UI with clear, actionable next steps tailored to the current exercise: try an easier variant, shorten range of motion, or rest. Provides concise rationale for the pause, a large countdown rest timer, haptic/audio alerts, and accessible controls that work hands-free. Content is localized, pre-cached for offline use, and aligned with clinician-prescribed alternatives.

Acceptance Criteria
Offline SafeHold Pause Triggered During Exercise
Given the user is performing a clinician-prescribed exercise offline and SafeHold detects N consecutive high-risk form flags within a 20-second window When the configured threshold is reached Then a full-screen SafeHold pause modal appears within 500 ms And the modal displays a concise rationale describing the detected issue And the modal presents three actionable options: "Try easier variant", "Shorten range", and "Rest" And exercise tracking and rep counting are paused until the user takes an action on the modal
Guidance Options Match Clinician Alternatives
Given the current exercise has clinician-prescribed alternatives and ROM guidance synced to the device When the SafeHold pause modal is shown Then "Try easier variant" maps to the clinician-specified alternative for this exercise; otherwise a default easier variant is shown And "Shorten range" displays the clinician-specified ROM bounds; otherwise default 75% range guidance is shown And choosing either option updates the active exercise configuration immediately and is logged locally with timestamp and selection type for next sync
Rest Timer Countdown and Alerts
Given the user selects "Rest" on the SafeHold modal When the rest starts Then a large countdown timer (minimum text height 64 dp) is visible and begins from the configured duration (default 60s) And a haptic alert and audio tone play at rest start and rest completion (audio is suppressed when the device is in silent; haptics still play) And the user can add +30s or skip via a single action (button or voice) while the timer is running And when the timer completes, the modal presents "Resume" and announces completion
Hands-Free Voice Controls Offline
Given the device is offline and the SafeHold modal is visible When the user speaks one of the supported commands: "Rest", "Resume", "Easier", "Shorten", or "More info" Then the command is recognized on-device within 1 second and confirmed via an audible or haptic acknowledgment And the corresponding action is executed without requiring touch input And if voice recognition fails after two attempts, the app surfaces a prompt to use on-screen controls
Localized Pre-Cached Content
Given the device language is supported and the app is offline When the SafeHold modal is presented Then all strings, audio prompts, and help content display in the device language using only pre-cached assets And if the device language is not supported, content falls back to English without missing or placeholder strings And no network requests are initiated during modal display
Accessibility and Screen Reader Support
Given a screen reader (VoiceOver or TalkBack) is enabled When the SafeHold modal opens Then initial focus lands on the pause rationale, followed by actionable options and the rest timer in a logical order And all controls have descriptive accessibility labels and hints, with tap targets at least 44x44 pt and color contrast ratios >= 4.5:1 And the countdown timer exposes remaining time to assistive technologies and announces the final 3 seconds
Resume Conditions & Reassessment
"As a patient, I want to know when it’s safe to resume and have the app confirm my form has improved so that I regain confidence and stay adherent."
Description

Defines objective, measurable conditions required to exit SafeHold, such as achieving a set number of consecutive safe reps or explicit user confirmation after adjusting technique or variant. Continuously reassesses form during the resume attempt, escalates severity or suggests contacting the clinic after repeated failures, and records outcomes and modifications taken for clinician review.
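The rep-level part of these exit conditions can be sketched as a small state machine over the rep stream. Thresholds mirror the criteria below (5 consecutive safe reps to exit, 3 failures to abort the attempt); the sketch omits the time windows (3 minutes, 90 seconds) and the adjustment-confirmation path, and its names are illustrative:

```python
class ResumeAttempt:
    """Tracks a SafeHold resume attempt: N consecutive safe reps exit the
    hold, any high-risk rep resets the streak and counts as a failure,
    and repeated failures mark the attempt as failed."""

    def __init__(self, safe_reps_required=5, max_failures=3):
        self.safe_reps_required = safe_reps_required
        self.max_failures = max_failures
        self.streak = 0
        self.failures = 0
        self.outcome = None  # "resumed" | "failed" | None (in progress)

    def on_rep(self, high_risk: bool):
        if self.outcome:
            return self.outcome  # attempt already decided
        if high_risk:
            self.streak = 0  # any high-risk flag restarts the streak
            self.failures += 1
            if self.failures >= self.max_failures:
                self.outcome = "failed"
        else:
            self.streak += 1
            if self.streak >= self.safe_reps_required:
                self.outcome = "resumed"
        return self.outcome
```

A `"failed"` outcome is what feeds the escalation criteria below (cooldown, severity tier increase, and eventually the clinic-contact prompt).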

Acceptance Criteria
Exit after Consecutive Safe Reps
Given SafeHold is active for an exercise and the offline default threshold is 5 consecutive safe reps within 3 minutes When the user attempts to resume and the system detects reps in real time And each rep is classified with no high-risk form flags And the consecutive safe-rep count reaches 5 before 3 minutes elapse Then SafeHold exits and the exercise resumes And the safe-rep streak counter resets after exit But if a high-risk flag occurs during the attempt, reset the consecutive safe-rep count to 0 and continue reassessment And record a time-stamped exit event for clinician review
Exit after Technique/Variant Adjustment + Validation
Given SafeHold is active and the user selects an adjustment (Try easier variant, Shorten range, or Rest) And the app records the selected adjustment and captures one-tap user confirmation When the user confirms and attempts to resume And the system detects 2 consecutive safe reps within 60 seconds post-confirmation Then SafeHold exits and resumes the exercise And the chosen adjustment, confirmation timestamp, and validation metrics are recorded But if 60 seconds elapse without 2 consecutive safe reps, SafeHold remains active and guidance is re-shown
Continuous Reassessment During Resume Attempt
Given a resume attempt is in progress When any rep triggers a high-risk flag Then increment a failure counter and reset the safe-rep streak to 0 And display immediate feedback of the failure And if the failure counter reaches 3 within 90 seconds or within 10 total rep attempts (whichever comes first), mark the resume attempt as failed
Escalation to Extended Pause on Repeated Failures
Given a resume attempt is marked as failed per thresholds When escalation is triggered Then enforce a 2-minute cooldown where resume controls are disabled And increase severity tier by 1 (maximum tier 3) And display next-step guidance with a visible countdown timer And record the escalation with failure counts, timestamps, and severity tier
Clinic Contact Suggestion After Multiple Escalations
Given either severity tier 3 is reached or there have been 2 escalations within a 24-hour period When the user next attempts to resume Then present a Contact your clinic prompt with Call and Message actions And block resume until the user acknowledges the prompt and a 5-minute cooldown completes And log the suggestion event, user action (called, messaged, dismissed), and delivery timestamp
Offline Logging of Outcomes and Modifications
Given any SafeHold exit, failed resume, escalation, or clinic-contact suggestion occurs while offline When the event is recorded Then store an encrypted log entry containing: patient pseudonymous ID, exercise ID, timestamps, thresholds used, selected adjustment (if any), outcome, safe/unsafe rep counts, failure counter, severity tier, and app version And on next connectivity, sync the log within 10 minutes and mark it as delivered And retain up to 200 unsynced events locally using FIFO if capacity is exceeded
Offline Event Logging & Secure Sync
"As a clinician, I want detailed SafeHold logs synced automatically so that I can review issues and adjust the plan without needing a live session."
Description

Captures each SafeHold event and context offline, including timestamps, exercise IDs, severity, triggers, anonymized key pose metrics, and optional blurred thumbnails per consent settings. Stores data encrypted at rest, queues for reliable sync with retry and backoff, ensures idempotent delivery, and surfaces events in the clinician dashboard upon connectivity with concise summaries and an audit trail.

Acceptance Criteria
Offline SafeHold Event Capture Completeness
Given the device has no network connectivity and a SafeHold pause is triggered during an exercise When the SafeHold event is recorded Then the app persists a single event within 500 ms containing non-null fields: event_id (UUID), timestamp (UTC ISO 8601 with milliseconds), exercise_id, severity, trigger_code, anonymized_pose_metrics, consent_thumbnail_flag, and local_queue_status='pending' And the timestamp is captured within ±100 ms of the trigger time using a monotonic reference And anonymized_pose_metrics contains no raw images or personally identifying text And rapid successive events (≤1 s apart) are recorded as distinct entries with unique event_id values and ordered by timestamp
Consent-Based Media Handling for Thumbnails
Given patient consent for thumbnails is disabled When a SafeHold event is recorded Then no thumbnail bytes are saved, thumbnail_uri is null, consent_thumbnail_flag=false, and reason_code='no_consent' Given patient consent for thumbnails is enabled and camera permission is granted When a SafeHold event is recorded Then a single blurred thumbnail is saved with max resolution 320x320, Gaussian blur radius ≥ 8 px, EXIF metadata stripped, thumbnail_uri populated, and consent_thumbnail_flag=true Given consent is enabled but camera permission is denied or capture fails When a SafeHold event is recorded Then the event is logged without a thumbnail and failure_reason='capture_unavailable' is recorded without preventing event capture
On-Device Encryption at Rest
Given at least one pending SafeHold event exists on the device When the on-device event store is inspected outside the app process Then event payloads and any thumbnails are encrypted at rest using keys from the OS secure keystore and direct file reads yield unintelligible ciphertext And on app kill and relaunch, pending events remain intact and readable only via the app after key access is re-established And no event data is written to plaintext logs or temporary caches
Reliable Queueing, Retry, and Backoff
Given the device is offline or the server returns a 5xx response or a network timeout occurs When the client attempts to sync pending events Then the events remain marked 'pending' and the client retries with exponential backoff (growth factor ≥ 2, maximum delay 5 minutes, ±20% jitter) until a successful 2xx response is received And retry scheduling and queue state persist across app restarts and OS reboots And no pending event is dropped, overwritten, or reordered while awaiting sync
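The retry schedule in this criterion (growth factor ≥ 2, 5-minute cap, ±20% jitter) can be sketched as a delay function. The 1-second base delay is an assumption for illustration; the spec fixes only the growth factor, cap, and jitter band.

```python
import random

def backoff_delay(attempt: int, base: float = 1.0,
                  factor: float = 2.0, cap: float = 300.0) -> float:
    """Seconds to wait before retry `attempt` (0-based): exponential
    growth capped at 5 minutes (300 s), with ±20% random jitter."""
    delay = min(base * (factor ** attempt), cap)
    jitter = random.uniform(-0.2, 0.2) * delay
    return delay + jitter

# Nominal (pre-jitter) schedule: 1, 2, 4, 8, ... capped at 300 s.
for attempt in range(4):
    print(round(backoff_delay(attempt), 2))
```

The jitter keeps many offline devices from retrying in lockstep when connectivity returns; the persisted queue state the criterion requires is outside this sketch.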
Idempotent Delivery and De-duplication
Given an event with idempotency_key K is submitted to the server When the same event is retried one or more times due to uncertain delivery Then at most one server-side event is created and subsequent submissions return a success response indicating prior acceptance without creating duplicates And the client marks the event 'synced' upon the first 2xx response and does not create new local entries during retries And the clinician dashboard shows exactly one event entry; the audit trail records total attempt count and dedup_status='replay'
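The at-most-once behavior above can be illustrated with a minimal server-side sketch keyed on the idempotency key. Class and field names are assumptions; the point is that a retried submission acknowledges prior acceptance (`dedup_status='replay'`) without creating a second event.

```python
class EventStore:
    """Server-side sketch: at most one event created per idempotency key."""

    def __init__(self):
        self._by_key = {}   # idempotency_key -> stored event
        self.attempts = {}  # idempotency_key -> delivery attempt count

    def submit(self, key: str, event: dict) -> dict:
        self.attempts[key] = self.attempts.get(key, 0) + 1
        if key in self._by_key:
            # Replay: acknowledge prior acceptance, create nothing new.
            return {"status": "ok", "dedup_status": "replay"}
        self._by_key[key] = event
        return {"status": "ok", "dedup_status": "created"}

store = EventStore()
first = store.submit("K1", {"severity": 2})
retry = store.submit("K1", {"severity": 2})   # uncertain delivery, retried
print(first["dedup_status"], retry["dedup_status"], len(store._by_key))
# created replay 1
```

The attempt counter corresponds to the audit trail's "total attempt count" requirement; the client marks the event synced on the first 2xx regardless of which attempt succeeded.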
Connectivity Recovery Auto-Sync
Given there are N>0 pending events on the device When network connectivity becomes available Then syncing starts automatically within 30 seconds and uploads events in chronological order by event timestamp until N=0 And if syncing is interrupted, it resumes from the last confirmed event without duplicating already-acknowledged events And local queue counters reflect accurate state transitions: pending decreases and synced increases per successful upload
Clinician Dashboard Summaries and Audit Trail
Given at least one SafeHold event has been successfully synced When a clinician opens the patient's dashboard Then a concise summary entry appears within 15 seconds of server receipt showing exercise_id or name, event timestamp (patient local time), severity, trigger summary, anonymized metrics summary, and a thumbnail indicator (present or absent per consent) And selecting the summary opens an audit trail showing event_id, created_at (device), enqueued_at, each delivery_attempt timestamp, received_at (server), dedup_status, and a payload checksum And all event listings are sorted by received_at descending with timezone-aware timestamps
Patient Override & Emergency Options
"As a patient, I want a guarded way to continue if I believe the pause is a false alarm so that my session isn’t blocked while still keeping my clinician informed."
Description

Provides a controlled override flow allowing a patient to continue after acknowledging risks, with rate limits and temporary lockout after excessive overrides. Offers quick access to clinic contact information and safety resources, and always records override reasons and confirmations. All flows function offline and meet accessibility standards.

Acceptance Criteria
Override Acknowledgment Flow Offline
Given the device is offline and SafeHold has paused an exercise due to repeated high-risk flags When the patient taps "Override" on the pause screen Then display a risk acknowledgment sheet with: (a) a plain-language risk statement, (b) a required "I understand" checkbox unchecked by default, and (c) "Continue Anyway" disabled until the checkbox is checked Given the acknowledgment sheet is shown When the patient checks the checkbox and taps "Continue Anyway" Then the exercise resumes within 3 seconds, the override is recorded with timestamp, exerciseId, riskFlagType, and acknowledgment=true, and next-step suggestions (easier variant, shorten range, rest) remain accessible Given the acknowledgment sheet is shown When the patient closes the sheet without confirming Then the override is canceled and the exercise remains paused
Override Rate Limiting and Temporary Lockout
Rule 1: A patient may perform at most 2 overrides in any rolling 10-minute window per exercise Rule 2: On the 3rd override attempt within that window, disable the override action and display a lockout banner with a countdown for 15 minutes Rule 3: Lockout state persists across app restarts and offline/online transitions and automatically clears when the timer expires Rule 4: Each override attempt and lockout event is logged with timestamps and counts for clinician review
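Rules 1–3 can be sketched as a rolling-window limiter. This is a simplified in-memory illustration (the spec requires the lockout to persist across restarts); the class name and timestamps-as-floats are assumptions.

```python
class OverrideLimiter:
    """Sketch of Rules 1-3: at most 2 overrides per rolling 10-minute
    window per exercise; a 3rd attempt starts a 15-minute lockout."""

    WINDOW, LIMIT, LOCKOUT = 600.0, 2, 900.0   # seconds, count, seconds

    def __init__(self):
        self._times = {}     # exercise_id -> override timestamps
        self._locked = {}    # exercise_id -> lockout expiry timestamp

    def attempt(self, exercise_id: str, now: float) -> str:
        if self._locked.get(exercise_id, 0.0) > now:
            return "locked_out"
        recent = [t for t in self._times.get(exercise_id, [])
                  if now - t < self.WINDOW]
        if len(recent) >= self.LIMIT:
            # Third attempt inside the window: start the 15-minute lockout.
            self._locked[exercise_id] = now + self.LOCKOUT
            return "locked_out"
        recent.append(now)
        self._times[exercise_id] = recent
        return "allowed"

lim = OverrideLimiter()
print([lim.attempt("squat", t) for t in (0, 60, 120, 1000)])
# ['allowed', 'allowed', 'locked_out', 'locked_out']
```

A production version would persist both the timestamps and the lockout expiry so the state survives app restarts and offline/online transitions, and would emit the log events Rule 4 requires.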
Emergency Contact Quick Access
Given the SafeHold pause screen is visible When the patient opens the "Emergency & Contact" panel Then "Call Clinic" and "Text Clinic" actions are available within 2 taps from the pause state, populated with an offline-cached clinic phone number and prefilled SMS template Given device telephony capabilities vary When the device lacks SMS support Then hide "Text Clinic" and keep "Call Clinic" available Given the patient taps "Call Clinic" Then the system dialer opens with the clinic number; Given the patient taps "Text Clinic" Then the default messaging app opens with the prefilled draft; In both cases, an audit record is queued and the pause state remains intact upon returning to the app
Safety Resources Offline
Rule 1: The pause screen provides a "Safety Resources" section with at least three items: Try an easier variant; Shorten range of motion; Rest and retry later Rule 2: All resource content (titles, descriptions, thumbnails) is available offline and each item opens within 2 seconds on a mid-range device Rule 3: Each item is accessible via screen readers with meaningful labels and alt text Rule 4: Opening a resource does not dismiss the pause state and is logged for clinician review
Override Reason Capture and Sync Logging
Given an override confirmation is submitted When the patient selects a reason from a list or enters free text (minimum 5 characters) Then the "Submit" button enables and the reason is required to proceed Given the override is completed Then an event is stored offline including: patientId, sessionId, exerciseId, riskFlagType, acknowledgment=true, reason, timestamp (ISO-8601), deviceId, and a unique eventId (UUID) Given connectivity becomes available When background sync runs Then the event is delivered to the server with retry (exponential backoff up to 24 hours), is duplicate-safe via eventId, and appears in the clinician review queue; failures do not block app use and remain queued
Accessibility Compliance for Override and Emergency Flows
Rule 1: All override/emergency screens meet WCAG 2.2 AA: color contrast ≥ 4.5:1, touch targets ≥ 44x44 pt, focus order matches visual order, and no information is conveyed by color alone Rule 2: Screen reader labels exist for all controls; the risk statement reads as one coherent block; buttons have descriptive names (e.g., "Continue Anyway — acknowledge risk") Rule 3: Supports dynamic text up to 200% without loss of content or functionality; critical actions remain visible and operable Rule 4: All interactions are operable via VoiceOver/TalkBack and Switch Control; haptic/audio feedback accompanies entering/exiting lockout state

QR Handoff

One-time, offline transfer of session summaries to a clinician or kiosk via rotating QR codes and local BLE, no internet required. Perfect for home visits: share rep totals, flagged clips, and notes in seconds at the doorstep. Data is scoped, signed, and expires after use to protect privacy.

Requirements

Rotating Signed QR Token
"As a patient during a home visit, I want to display a secure, short-lived QR code so that my clinician can pull my latest session summary quickly without exposing my private data."
Description

Generate rotating, time-limited QR codes that encode a signed, single-use capability token referencing the local session summary export. Tokens include scope (specific session IDs), expiration timestamp, and a one-time nonce to prevent replay. The QR rotates every few seconds to reduce shoulder-surfing risk while tolerating minor clock drift. No PHI is embedded in the QR payload; only minimal metadata and a signature. The token bootstraps a secure offline handoff by conveying service identifiers and key material needed to initiate the local transfer.

Acceptance Criteria
Signed Token Payload Structure
Given a generated QR token payload When decoded Then it contains only: scope_session_ids (opaque IDs), exp_timestamp (UTC seconds), nonce (128-bit random), service_id, receiver_bootstrap (ephemeral_public_key or equivalent key material), alg, signature And no PHI fields are present (e.g., name, DOB, notes, media) And the canonical serialized payload length is <= 512 bytes And the signature covers the canonical payload fields exactly
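The canonical-payload-plus-signature structure above can be sketched as follows. This uses HMAC-SHA256 with a shared key purely as a stand-in (the spec's offline verification implies asymmetric signing, e.g. Ed25519 as named in a later requirement), and it omits the `receiver_bootstrap` key material; the `service_id` value is a placeholder.

```python
import hashlib
import hmac
import json
import secrets
import time

SECRET = b"demo-signing-key"   # stand-in; production would sign asymmetrically

def canonical(body: dict) -> bytes:
    # Sorted keys and fixed separators give one canonical serialization,
    # so the signature covers the payload fields exactly.
    return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()

def make_token(session_ids: list, ttl: int = 30) -> dict:
    payload = {
        "scope_session_ids": session_ids,        # opaque IDs, no PHI
        "exp_timestamp": int(time.time()) + ttl,
        "nonce": secrets.token_hex(16),          # 128-bit one-time nonce
        "service_id": "movemate-handoff",        # placeholder value
        "alg": "HS256",                          # demo stand-in
    }
    payload["signature"] = hmac.new(SECRET, canonical(payload),
                                    hashlib.sha256).hexdigest()
    return payload

def verify(token: dict) -> bool:
    body = {k: v for k, v in token.items() if k != "signature"}
    expected = hmac.new(SECRET, canonical(body), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])

tok = make_token(["S1", "S2"])
assert verify(tok)
tok["scope_session_ids"].append("S3")   # any covered-field change breaks it
assert not verify(tok)
```

Because every covered field feeds the canonical serialization, widening the scope, extending the expiry, or reusing a nonce under a different expiry all invalidate the signature.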
QR Rotation Cadence and Lifetime
Given the QR display is active When observed over a 60-second interval Then the displayed QR changes at least every 3 seconds ±0.5 seconds And each token remains valid for 30 seconds from its creation time And no token value repeats within the observation window
Replay Protection via One-Time Nonce
Given a token is successfully consumed by a receiver When the same token (same nonce) is presented again within its validity period Then the receiver rejects it as a replay with a specific error code And the sender does not re-advertise data for a consumed nonce And nonces are retained in a local denylist for at least 15 minutes or until expiration + 5 minutes, whichever is longer
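The nonce denylist rule above (retain for at least 15 minutes or expiry + 5 minutes, whichever is longer) can be sketched like this; the class name and float timestamps are illustrative assumptions.

```python
class NonceDenylist:
    """Sketch: reject replayed nonces; retain each consumed nonce for
    max(15 minutes, token expiration + 5 minutes)."""

    def __init__(self):
        self._seen = {}   # nonce -> retain-until timestamp

    def consume(self, nonce: str, exp_timestamp: float, now: float) -> bool:
        # Drop entries whose retention window has fully passed.
        self._seen = {n: t for n, t in self._seen.items() if t > now}
        if nonce in self._seen:
            return False                       # replay: reject
        self._seen[nonce] = max(now + 15 * 60, exp_timestamp + 5 * 60)
        return True

dl = NonceDenylist()
print(dl.consume("n1", exp_timestamp=130.0, now=100.0))   # True, first use
print(dl.consume("n1", exp_timestamp=130.0, now=110.0))   # False, replay
```

Pruning only on consume keeps the sketch simple; a real implementation would also bound memory and surface the specific replay error code the criterion calls for.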
Offline Signature Verification
Given a scanned token When verified using the provisioned public key entirely offline Then a valid token verifies in ≤150 ms on a mid-tier device And any alteration to any covered field causes verification failure And tokens signed with unknown or revoked keys are rejected
BLE Handoff Bootstrap
Given a valid scanned token When the receiver initiates discovery using the token's service_id Then the MoveMate advertiser is discovered within 3 seconds at 1 meter line-of-sight And a secure session is established using the token's bootstrap key material And only the scoped session summary export is requested and transferred And no internet connectivity is required or attempted during the flow
Clock Drift Tolerance and Expiration
Given device clocks may differ by up to ±120 seconds When evaluating token freshness Then the token is accepted if now <= exp_timestamp + 120 seconds And the token is rejected if now > exp_timestamp + 120 seconds And the rejection reason indicates expiration
Scoped Access Enforcement
Given a valid token whose scope contains session IDs [S1, S2] When the receiver requests data Then only S1 and S2 session summaries are accessible And any attempt to access other sessions is denied with a specific error And transfer logs show only the scoped IDs
Local BLE Handoff Transfer
"As a clinician using a kiosk or phone, I want the app to automatically start a secure nearby transfer after scanning the patient’s QR so that I receive the summary in seconds without Wi‑Fi or cables."
Description

Implement a cross-platform BLE GATT service that transfers the packaged session summary offline after the receiver scans the QR. The sender advertises an ephemeral service derived from the token; the receiver authenticates using the token and establishes an encrypted session key. Support MTU negotiation, chunking, resume on disconnect, backoff and retry, and progress reporting. Operates within iOS and Android background and permission constraints, with battery- and timeout-aware behavior. No internet connectivity is required at any point.

Acceptance Criteria
Ephemeral BLE Service Advertisement and Discovery
Given the sender has a packaged session summary and a freshly generated QR token When the sender initiates QR Handoff Then the sender advertises a BLE GATT service with a UUID deterministically derived from the token and unique per handoff And the advertisement exposes only the service UUID and a generic name (no PHI) And the receiver discovers the service within 5 seconds at 1 meter distance (RSSI ≥ -80 dBm) And advertising stops on completion or after 120 seconds of inactivity, whichever comes first
QR Token Authentication and Encrypted Session Establishment
Given the receiver has parsed the QR token offline When the receiver connects and presents the token on the Auth characteristic Then the sender validates token signature and expiry (≤ 10 minutes since issuance) and enforces single-use And both sides perform ECDH (Curve25519) to derive a session key and encrypt all payload with AES-256-GCM before any data chunks are sent And invalid or expired tokens are rejected within 1 second with errors AUTH_INVALID or AUTH_EXPIRED And the entire flow succeeds in airplane mode; no network requests are made
MTU Negotiation, Chunking, Resume, and Integrity
Given a BLE connection is established When MTU negotiation occurs Then transfer uses the negotiated MTU for chunk sizing (chunk_size = negotiated_MTU - 3) And payload is sent as monotonically increasing offset-tagged chunks with per-chunk ACKs And on disconnect mid-transfer, upon reconnect within 30 seconds using the same token, transfer resumes from the last acknowledged offset without re-sending acknowledged bytes And final HMAC-SHA256 over the received payload matches the sender’s value And with RSSI ≥ -70 dBm, end-to-end throughput on a 1 MB payload is ≥ 25 KB/s
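The chunking and resume rules above can be sketched as a generator. The 3-byte subtraction reflects the ATT write header the criterion's `chunk_size = negotiated_MTU - 3` formula accounts for; the session key and per-chunk ACK machinery are assumptions outside this sketch.

```python
import hashlib
import hmac

def chunk_payload(payload: bytes, negotiated_mtu: int,
                  resume_offset: int = 0):
    """Yield (offset, chunk) pairs sized to the negotiated MTU minus the
    3-byte ATT header, starting from the last acknowledged offset."""
    chunk_size = negotiated_mtu - 3
    for offset in range(resume_offset, len(payload), chunk_size):
        yield offset, payload[offset:offset + chunk_size]

payload = bytes(range(256)) * 4                  # 1 KB demo payload
chunks = list(chunk_payload(payload, negotiated_mtu=185))
assert b"".join(c for _, c in chunks) == payload

# Resume after a disconnect: restart from the last acknowledged offset,
# never re-sending acknowledged bytes.
resumed = list(chunk_payload(payload, 185, resume_offset=chunks[2][0]))

# Final integrity check mirrors the spec's HMAC-SHA256 over the payload.
session_key = b"session-key"                     # assumed, for illustration
expected_tag = hmac.new(session_key, payload, hashlib.sha256).hexdigest()
```

Because chunks carry monotonically increasing offsets, the receiver's last acknowledged offset is sufficient to resume; no per-chunk bookkeeping beyond that is needed on reconnect.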
Backoff, Retry, and Error Handling
Given a write/notify failure or ATT congestion occurs When retry logic is triggered Then retries use exponential backoff starting at 200 ms, doubling up to 3 s, with a maximum of 5 attempts per chunk And after exceeding retries, the transfer aborts with error XFER_RETRY_EXCEEDED and both sides clean up state And successful writes reset the backoff window And all retry counts and final status are logged with timestamps
Progress Reporting to Sender and Receiver
Given a transfer is in progress When bytes are acknowledged by the receiver Then both apps display progress percentage and bytes sent/total, updating at least every 1 second or on ≥ 5% change And estimated time remaining is shown once at least 10% of data has been transferred And on completion, both sides show "Transfer complete" with total bytes and duration And on failure, both sides show a human-readable cause and stable error code
Background Mode and Permissions Compliance (iOS and Android)
Given devices run iOS 16+ or Android 12+ and BLE permissions are granted When either app is backgrounded or the screen is locked during handoff Then the transfer continues: iOS uses CoreBluetooth background modes with state restoration; Android uses a Foreground Service with a persistent notification And if permissions are missing, the app prompts once with the OS dialog and does not start advertising or scanning until granted And if the OS kills the app, relaunch within 60 seconds with the same token restores state and resumes; otherwise it fails with SESSION_NOT_FOUND And no crashes or ANRs occur during a 10-minute soak test with 3 consecutive transfers
Timeouts and Battery-Aware Behavior
Given a transfer session is active When no control or data messages are exchanged for 30 seconds Then the session times out with XFER_TIMEOUT and both devices stop advertising/scanning And total transfer duration is capped at 3 minutes for payloads ≤ 10 MB; exceeding this aborts with XFER_TIME_EXCEEDED And when battery saver is ON or battery < 20%, advertising/scanning duty cycle is reduced by ≥ 50% while still allowing discovery within 10 seconds at 1 meter And after timeout or abort, all temporary keys and tokens are wiped from memory
Scoped Single‑Use Authorization & Expiry
"As a privacy‑conscious user, I want the QR handoff to work only once and expire quickly so that my data cannot be reused or intercepted."
Description

Enforce scope- and time-bound access for each handoff. Tokens are valid for a single transfer of the specified session(s), expire rapidly if unused, and are invalidated immediately after success. Implement replay protection, attempt limits, and local audit entries. Provide configurable expiry windows and clear user feedback when a code is expired or already used.

Acceptance Criteria
Single-Use Token Redemption and Immediate Invalidation
Given a valid handoff token T scoped to sessions [S1, S2] on the patient device When a clinician device completes a successful BLE transfer initiated via scanning the QR Then token T is marked used within 1 second on the patient device And any subsequent scan or BLE redemption attempt using T returns "Already used" and no scoped data is transferred And the patient UI shows "Transfer complete" with a timestamp And the clinician receives exactly one copy of the scoped payload
Scope Enforcement: Only Specified Sessions Transfer
Given handoff token T is scoped to session IDs [S1, S2] and excludes S3 When the clinician device redeems T Then the transferred payload contains only S1 and S2 data (rep totals, flagged clips, notes) And no data for S3 or any other sessions is included And the payload includes a signed scope manifest listing [S1, S2] that validates against T And any request parameter that attempts to include out-of-scope data is ignored
Expiry Window Enforcement and Clear Expired Messaging
Given the expiry window is configured to 90 seconds and token T is generated at time t0 When a clinician scans after t0 + 90 seconds Then redemption is rejected with "Code expired" on both devices and no data is transferred And T cannot be redeemed thereafter And when scanning at or before t0 + 90 seconds, redemption is allowed if all other conditions are met And the patient UI displays a countdown to expiry with accuracy of ±1 second
Replay Protection Across Devices and QR Rotations
Given token T is not yet used and the on-screen QR rotates every 10 seconds while encoding T When Device A successfully redeems T Then any later scan by Device B or re-scan by Device A fails with "Already used" and no BLE transfer starts And the BLE handshake uses a one-time nonce challenge bound to T so that captured QR images or intercepted BLE frames cannot be replayed to redeem T
Failed Attempt Limit and Temporary Lockout
Given the failed-attempt limit is configured to 3 within 5 minutes for token T When 3 redemption attempts for T fail within 5 minutes (e.g., invalid signature, scope mismatch, incompatible app version) Then T is locked until expiry and subsequent attempts return "Too many attempts" with no transfer And the patient UI offers a "Generate new code" action And an audit entry is recorded with reason attempt_limit_exceeded
Local Audit Trail for Issue, Redeem, Expire, and Fail Events
Given token T is generated, redeemed once, and later scanned again after expiry Then the patient device audit log contains entries for: issued, redeem_success, expired, and reuse_attempt_after_expiry And each entry records timestamp, truncated token ID, scope session count, clinician device alias (if available), outcome, and reason And no raw exercise data or video content is stored in audit entries And audit entries are viewable in-app under Settings > Privacy & Security > Local Audit
Session Summary Packaging & Size Management
"As a patient with many exercises and flagged clips, I want the handoff to include the essential highlights and still finish quickly so that my clinician gets actionable data without delays."
Description

Assemble an export bundle containing rep totals, flagged clip thumbnails or short low‑resolution segments, and clinician/patient notes. Apply compression and adaptive content scaling to meet target size and transfer time budgets. Use a versioned, documented schema with checksums for integrity and deterministic ordering for deduplication. Support queuing multiple sessions and selecting which to include before generating the token.

Acceptance Criteria
Bundle Composition and Required Fields
Given a completed session with exercises, rep totals, notes, and flagged clips When an export bundle is generated Then a single archive is produced containing a root file "manifest.json" And manifest.json includes: schemaVersion (semver), generatorVersion, bundleId (UUID), createdAt (UTC ISO-8601), sessions[] And each session includes: sessionId, startedAt, endedAt, repTotals per exercise, notes, mediaRefs[] And each mediaRef includes: id, type (thumbnail|segment|still), mimeType, width, height, durationMs (segments only), byteSize, checksumSha256, path And every mediaRef path resolves to an existing file in the archive And the sum of mediaRefs.byteSize equals the actual combined size of the corresponding files
Adaptive Size and Transfer Budget Compliance
Given packaging settings with maxBundleSizeBytes=5242880 and maxTransferSeconds=30 and assumedThroughputKbps=1000 When generating a bundle for selected sessions Then archive size <= maxBundleSizeBytes And estimatedTransferSeconds = ceil((archiveSizeBytes*8)/assumedThroughputKbps/1000) <= maxTransferSeconds And if the budget cannot be met after applying all downscaling and fallback rules, packaging aborts with error code "BUDGET_UNMET" and no token is generated
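The transfer-time formula above converts bytes to bits and divides by the assumed throughput in bits per second (kbps × 1000). A direct transcription, with one worked example:

```python
import math

def estimated_transfer_seconds(archive_size_bytes: int,
                               assumed_throughput_kbps: int) -> int:
    # bytes -> bits, then divide by throughput in bits/second, rounding up.
    return math.ceil(archive_size_bytes * 8 /
                     (assumed_throughput_kbps * 1000))

# A 3 MB bundle at the assumed 1000 kbps:
print(estimated_transfer_seconds(3 * 1024 * 1024, 1000))   # -> 26

def within_budget(size_bytes: int, max_size: int = 5_242_880,
                  max_seconds: int = 30, kbps: int = 1000) -> bool:
    return (size_bytes <= max_size and
            estimated_transfer_seconds(size_bytes, kbps) <= max_seconds)
```

Note that with these example settings the two limits interact: a bundle at the full 5 MiB size cap estimates to 42 seconds at 1000 kbps, so the time budget, not the size cap, is the binding constraint and the downscaling cascade must bring the archive under roughly 3.6 MB.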
Deterministic Ordering and Byte-Stable Output
Given identical input data, packer version, and settings When packaging is executed twice Then the two output archives have identical SHA-256 digests And manifest arrays are sorted by session.startedAt asc, then exerciseId asc, then mediaRef.id asc And file timestamps, permissions, and archive metadata are normalized to deterministic values
Checksum Integrity and Corruption Handling
Given a generated bundle When a single byte of any embedded file is altered Then checksum verification fails and the importer rejects the bundle with error code "CHECKSUM_MISMATCH" And the importer reports the offending asset id and its expected vs actual checksum And when no bytes are altered, all asset and manifest checksums verify successfully
Multi-Session Queueing and Selection
Given a queue containing sessions [A,B,C] When the user selects B and C and initiates packaging Then the manifest.sessions contains only B and C in deterministic order And if adding an additional selected session would breach the size or time budget, the packer identifies the session causing the breach and prompts for deselection or applies media fallback rules And deselecting a session updates the estimated bundle size before final generation
Media Downscaling and Fallback Cascade
Given flagged clips for selected sessions When packaging applies media constraints Then video segments are encoded at <= 240p, <= 12 fps, <= 3 s per clip, and target bitrate <= 200 kbps And if the bundle exceeds budget, per-clip duration is reduced down to a minimum of 1 s before switching format And if still over budget, segments are replaced with thumbnails (max 320x320, JPEG quality 70) per flagged clip And if still over budget, include a single still frame per clip (max 256x256, JPEG quality 60) And repTotals and notes are never dropped And the manifest mediaRef.type accurately reflects the final form used (segment|thumbnail|still)
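The degradation order above (segments, then thumbnails, then stills, never dropping reps or notes) can be sketched as a cascade. This simplification skips the intermediate per-clip duration reduction and uses assumed per-clip byte estimates, not spec values.

```python
def apply_media_fallback(clip_count: int, base_size: int,
                         budget: int) -> str:
    """Pick the richest media form that fits the budget: segment ->
    thumbnail -> still; otherwise abort with BUDGET_UNMET.
    Per-clip sizes below are illustrative estimates only."""
    SEGMENT, THUMBNAIL, STILL = 75_000, 20_000, 8_000  # assumed bytes/clip
    for form, per_clip in (("segment", SEGMENT),
                           ("thumbnail", THUMBNAIL),
                           ("still", STILL)):
        if base_size + clip_count * per_clip <= budget:
            return form
    # Rep totals and notes (in base_size) are never dropped; if even
    # stills do not fit, packaging aborts and no token is generated.
    return "BUDGET_UNMET"

print(apply_media_fallback(clip_count=10, base_size=400_000,
                           budget=700_000))   # -> thumbnail
```

With ten clips, segments would overshoot the 700 KB budget (1.15 MB total) while thumbnails fit (600 KB), so the cascade stops at the thumbnail tier.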
Cross-Session Asset Deduplication
Given two or more selected sessions that reference identical media assets (content-hash equal) When packaging the bundle Then only one physical copy of the asset is stored in the archive And all referring manifest entries point to the same path and checksum And the archive size equals the sum of unique asset sizes plus manifest and overhead And deduplication decisions are deterministic across runs
Receiver Verification & Import Workflow
"As a clinician, I want a clear import flow that verifies authenticity and shows exactly what will be added so that I can trust and use the data immediately."
Description

Provide a guided receiver flow that scans the QR, validates the signature offline, displays sender and session metadata, requests consent, initiates BLE, verifies checksums, and imports results into the clinician dashboard. Handle duplicates gracefully, show clear error states with retry options, and store imported data in an appropriate, scoped patient record without requiring internet access at the time of import.

Acceptance Criteria
Offline QR Scan and Signature Validation
Given the receiver device has no internet connectivity When the clinician scans a QR code generated by MoveMate Sender Then the app parses the payload and validates the digital signature offline against the embedded MoveMate public key And the code’s expiration timestamp is checked and unexpired codes proceed; expired or reused codes are rejected with a clear error and no data is stored And the UI advances to the metadata review step on successful validation
Session Metadata Review and Consent Prompt
Given a QR payload has been validated When the app displays the transfer details Then it shows sender identifier, patient identifier (scoped), session timestamp, exercise count, flagged clip count, and data size estimate And the clinician must explicitly choose Accept or Decline And Accept enables the transfer; Decline cancels and discards any temporary payload
BLE Transfer Initiation and Secure Channel
Given the clinician has accepted the transfer When the app initiates BLE discovery using the ephemeral session identifier from the QR Then it connects only to a sender advertising the expected session identifier within 10 seconds or times out with a retry prompt And the data channel is encrypted using a session key derived from the signed payload And a progress indicator shows bytes received and estimated time remaining
Checksum Verification and Import Integrity
Given BLE transfer is in progress or complete When each data chunk and the final manifest are received Then per‑chunk checksums and a manifest hash are verified locally And any checksum mismatch triggers up to 3 automatic retransmission attempts for the affected chunks And the import is marked successful only if all checks pass; otherwise a recoverable error is shown with options Retry or Cancel
Duplicate Session Handling and Idempotent Import
Given a session with a unique session UUID or content hash is being imported When a matching session already exists in the clinician dashboard Then the app does not create duplicate entries or double‑count reps And the UI displays "Already imported" with the original import timestamp and a link to view it And the workflow returns to the dashboard without error
Error States and Retry Options
Given any recoverable error occurs (invalid QR, expired QR, BLE timeout, checksum failure) When the error is triggered Then the app shows a specific error title and description, an error code, and contextual guidance And a single‑tap Retry attempts the last failed step without re‑scanning, and Cancel returns to the start state And no partial or corrupted data is saved to patient records until a successful import completes
Offline Storage to Scoped Patient Record
Given a transfer has passed integrity checks When results are imported Then the data is stored locally in the clinician's dashboard under the correct scoped patient record without requiring internet And if the patient record does not exist offline, a provisional scoped record is created and linked And the session appears in the patient's timeline within 5 seconds and is queued for later sync
Offline UX, Accessibility, and Fallbacks
"As a patient at the doorstep, I want a simple, readable code and clear offline fallback steps so that the handoff works even if Bluetooth is blocked or connectivity is poor."
Description

Design a handoff screen with a large, high‑contrast rotating QR, automatic brightness boost, screen‑awake mode, and simple instructions. Provide accessible labels, haptics, and localization. Implement an animated multi‑frame QR fallback for minimal payload transfer when BLE is unavailable, with visible progress and cancellation. Include permission preflight and troubleshooting tips entirely offline.

Acceptance Criteria
High-Contrast Rotating QR Display Offline
Given the user opens the Handoff screen without internet connectivity When the screen renders Then a QR code is displayed at >= 320 px on its shortest side or >= 40% of screen width (whichever is larger) And the measured contrast ratio between QR modules and background is >= 7:1 And the QR content rotates with a new token at least every 5 seconds And each token includes a creation timestamp and is rejected by the receiver if older than 60 seconds And an instruction block shows 2–4 concise steps totaling <= 240 characters in the current locale And no network requests are initiated while the screen is active
Automatic Brightness Boost and Screen-Awake Mode
Given the Handoff screen becomes active When device brightness is below 80% Then the app increases brightness to at least 90% within 500 ms (if permitted) And the device is kept awake (no auto-lock) while the Handoff screen is visible And on exit the prior brightness level is restored within 500 ms And if programmatic brightness change is blocked, a non-modal prompt instructs the user to increase brightness manually and remains accessible offline
Accessible Labels, Haptics, and Localization
Given a system screen reader is enabled (VoiceOver/TalkBack) When navigating the Handoff screen Then all actionable elements expose accessible names, roles, and hints in the current locale And focus order follows visual order and returns to the invoking element on dismiss And Dynamic Type up to 200% (or Android font scale 1.5) does not truncate critical content or cause overlap And haptic feedback triggers on transfer start, success, and error using platform-standard patterns and respects system haptics settings And all visible strings are localized for en, es, and fr; missing locales fall back to en-US with correct language tags
BLE Primary Transfer with Offline Fallback Selection
Given BLE is available and enabled When the user taps Start Transfer Then the app begins BLE advertising within 1 second using the QR Handoff service UUID And the session summary (rep totals, flags, notes) transfers over BLE without any internet calls And if BLE permission is denied or a connection is not established within 10 seconds, the app offers the animated QR fallback with a single tap And the user can manually choose the fallback at any time before or during BLE attempts
Animated Multi-Frame QR Transfer with Progress and Cancel
Given BLE is unavailable or declined and the user starts the fallback When QR frames are displayed to the receiver Then the sender shows a progress indicator with percent complete and frame X of N updated at least once per frame And an estimate of time remaining updates at least every 2 seconds And a Cancel button of at least 44×44 pt is visible and cancels within 500 ms, clearing temporary data And upon completion the receiver validates checksum and the sender shows a success state without user intervention And if any frames are missed, the sender automatically loops frames until completion or cancel And the fallback payload contains only session summary fields (no raw video) and is <= 64 KB after compression
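The animated multi-frame fallback above can be sketched as payload splitting: compress the summary, cap it at 64 KB, and emit self-describing frames (index, total, checksum) so a receiver can reassemble in any order while the sender loops missed frames. The frame size and field names are assumptions; real QR frames would also carry the signed token.

```python
import hashlib
import json
import math
import zlib

def to_qr_frames(summary: dict, frame_bytes: int = 800) -> list:
    """Split a compressed session summary into N self-describing QR
    frames; the receiver slots frames by index and verifies the checksum."""
    blob = zlib.compress(json.dumps(summary).encode())
    assert len(blob) <= 64 * 1024, "fallback payload must be <= 64 KB"
    total = math.ceil(len(blob) / frame_bytes)
    checksum = hashlib.sha256(blob).hexdigest()
    return [{"frame": i + 1, "of": total, "checksum": checksum,
             "data": blob[i * frame_bytes:(i + 1) * frame_bytes].hex()}
            for i in range(total)]

frames = to_qr_frames({"reps": {"squat": 30}, "flags": [], "notes": "ok"})
blob = bytes.fromhex("".join(f["data"] for f in frames))
assert hashlib.sha256(blob).hexdigest() == frames[0]["checksum"]
```

Because every frame repeats the total count and checksum, the receiver knows when it has a complete set and can validate it without any back-channel, which is what lets the sender simply loop frames until completion or cancel.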
Permission Preflight and Offline Troubleshooting
Given the user initiates handoff and required permissions are not yet granted When entering the flow offline Then the app displays a preflight explaining the need for Bluetooth and Camera with localized text And tapping Continue triggers system permission prompts in sequence (Bluetooth, then Camera) And if any permission is denied, an offline troubleshooting sheet appears with steps to enable in Settings and a Try Again action And the flow proceeds via BLE if both are granted, or via animated QR if only Camera is granted And all help content is available offline and uses <= 200 KB of on-device storage
Scoped, Signed, Single-Use Tokens with Expiry
Given a transfer token is generated for handoff When embedded in BLE packets or QR frames Then the token includes recipient scope (clinician_id or kiosk_id), nonce, issued_at, expires_at (<= 5 minutes), and an Ed25519 signature over a canonical payload And the receiver validates the signature offline and rejects expired, replayed, or scope-mismatched tokens And on success or cancel the sender deletes ephemeral tokens within 1 second and retains no tokens in storage for longer than 10 minutes
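A minimal sketch of the token lifecycle above, assuming a canonical JSON payload and using HMAC-SHA256 as a stdlib stand-in for the Ed25519 signature (field names and the wire format are illustrative, not a fixed spec):

```python
import hashlib, hmac, json, time

def canonical(payload: dict) -> bytes:
    # Deterministic serialization: sorted keys, no whitespace
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()

def issue_token(key: bytes, scope: str, nonce: str, ttl_s: int = 300) -> dict:
    now = int(time.time())
    payload = {"scope": scope, "nonce": nonce, "issued_at": now,
               "expires_at": now + min(ttl_s, 300)}  # expiry capped at 5 minutes
    sig = hmac.new(key, canonical(payload), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def validate(key: bytes, token: dict, expected_scope: str,
             seen_nonces: set, now=None) -> bool:
    now = int(time.time()) if now is None else now
    p = token["payload"]
    good_sig = hmac.compare_digest(
        token["sig"], hmac.new(key, canonical(p), hashlib.sha256).hexdigest())
    fresh = p["expires_at"] > now            # reject expired
    unseen = p["nonce"] not in seen_nonces   # reject replays (single-use)
    scoped = p["scope"] == expected_scope    # reject scope mismatch
    if good_sig and fresh and unseen and scoped:
        seen_nonces.add(p["nonce"])
        return True
    return False
```

All checks run locally, so validation works fully offline; swapping in real Ed25519 changes only how the signature is produced and verified.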
Post‑Transfer Acknowledgment & Audit Telemetry
"As a clinic administrator, I want a verifiable record of what was transferred and when so that compliance checks and troubleshooting are straightforward even if we were offline at the time."
Description

After a successful handoff, show confirmation on both devices with a short receipt code. Record a local audit event containing timestamps, session IDs, and integrity hashes for later cloud sync. Mark shared sessions on the sender to prevent unintended re‑sharing, and allow clinician devices to configure short, privacy‑preserving local retention with automatic purge.

Acceptance Criteria
Dual-Device Transfer Confirmation with Receipt Code
Given sender and clinician devices are connected offline via QR/BLE and the payload has been fully received When transfer integrity is verified Then both devices display "Transfer complete" and the same 6-character alphanumeric receipt code within 1 second And the receipt code remains visible for at least 20 seconds or until dismissed And the receipt code is stored in the local audit record And no success UI is shown if integrity verification fails
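One way to guarantee both devices show the same 6-character code without an extra round trip (an assumption; the criterion does not mandate how the code is produced) is to derive it from inputs both sides already share after integrity verification:

```python
import hashlib

# Unambiguous alphabet (no 0/O or 1/I) -- an assumption, not a spec requirement
ALPHABET = "23456789ABCDEFGHJKLMNPQRSTUVWXYZ"

def receipt_code(payload: bytes, transfer_id: str) -> str:
    """Derive a 6-character code both devices compute independently
    from the verified payload and the shared transfer identifier."""
    digest = hashlib.sha256(transfer_id.encode() + payload).digest()
    n = int.from_bytes(digest[:8], "big")
    code = ""
    for _ in range(6):
        n, r = divmod(n, len(ALPHABET))
        code += ALPHABET[r]
    return code
```

Because the code is a pure function of the transferred bytes, a mismatch between the two screens is itself a visible integrity failure.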
Local Audit Event Recording with Timestamps and Hashes
Given a handoff completes successfully When the acknowledgment is shown Then the device writes an immutable local audit record containing: transfer_id (UUIDv4), device_role (sender|receiver), session_ids, start_timestamp (ISO 8601 UTC), end_timestamp (ISO 8601 UTC), payload_bytes, integrity_hash (SHA-256 hex), receipt_code, app_version And the audit record is durably persisted within 2 seconds of acknowledgment And audit records exclude raw media and full patient identifiers And on write failure, the app retries up to 3 times with exponential backoff; if all retries fail, the transfer is marked audit_pending and the user is notified without blocking the workflow
Prevent Re‑sharing of Previously Transferred Sessions
Given sessions S1..Sn were included in a successful handoff When the sender selects sessions for a new handoff Then previously shared sessions are excluded by default and displayed with a "Shared" badge And attempting to include a previously shared session requires an explicit per-session "Allow re-share" confirmation And the audit record captures re_share_overrides for any re-shared session
Clinician Device Retention Policy and Automatic Purge
Given the clinician device has a local retention policy set (default 24 hours; configurable range 1 hour–7 days) When a handoff is acknowledged Then stored handoff payloads and derived caches are encrypted at rest And upon retention expiry, all session payloads and clips are purged automatically within 5 minutes And only minimal audit metadata (timestamps, session_ids, hashes, receipt_code) remains accessible And attempting to access purged content yields an "Expired" state with no recoverable data And a manual "Purge now" action immediately removes all retained payloads
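The retention clamp and purge behavior could be sketched as follows, with the in-memory store and field names as illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

def clamp_retention(requested: timedelta) -> timedelta:
    # Configurable range per the criterion: 1 hour to 7 days
    return max(timedelta(hours=1), min(requested, timedelta(days=7)))

def purge_expired(sessions: list, retention: timedelta, now=None) -> None:
    """Drop payloads past retention, keeping only minimal audit metadata
    (timestamps, session_ids, hashes, receipt_code stay untouched)."""
    now = now or datetime.now(timezone.utc)
    for s in sessions:
        if now - s["acknowledged_at"] >= retention:
            s["payload"] = None      # purged: no recoverable content
            s["clips"] = []
            s["state"] = "expired"   # reads now surface an "Expired" state
```

A manual "Purge now" action would simply call the same routine with a zero retention window.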
Signature and Integrity Verification Gates Acknowledgment
Given the receiver validates the payload signature and/or integrity hash before acknowledgment When verification fails Then the receiver displays "Integrity check failed" with no receipt code And no session data is imported into patient records And a failed audit record with outcome=failed and failure_reason=integrity_error is written When verification passes Then acknowledgment and the receipt code are shown and outcome=success is recorded
Deferred Cloud Sync of Audit Telemetry
Given the device is offline at the time of handoff When network connectivity is restored Then local audit records are queued and synced to the cloud within 60 seconds And only audit metadata (no media, no free-text notes) is transmitted And failed sync attempts are retried with exponential backoff for up to 24 hours; unsynced records remain locally with status=unsynced And a visible sync status is available in Settings > Handoff History

Vault Seal

Tamper-evident, encrypted-at-rest storage for offline captures with per-session hashing and delayed ledger merge. Any edits or deletions are flagged on sync for transparent audit trails. Builds payer and clinic trust that offline data is authentic and intact.

Requirements

Offline Encrypted Capture Storage
"As a clinic administrator, I want all offline captures to be encrypted at rest on patient devices so that loss or theft of a device does not expose PHI and our clinic remains compliant."
Description

Encrypt all offline capture artifacts (raw video frames, pose/keypoint data, rep counts, and session metadata) at rest using per‑session data encryption keys (DEKs) wrapped by a hardware‑backed key (iOS Secure Enclave / Android Keystore). Use AES‑256‑GCM with authenticated encryption, enforce file protection classes requiring device unlock, and perform atomic, crash‑safe writes. Keys are rotated per session, zeroized on logout/device removal, and guarded by jailbreak/root detection with optional secure‑wipe on compromise. Integrates with MoveMate’s capture pipeline and local datastore, remaining power/bandwidth efficient and compatible with background operation.

Acceptance Criteria
AES-256-GCM Encryption at Rest
Given offline capture artifacts (video frames, keypoints, rep counts, metadata) are produced When they are persisted to local storage Then each artifact is encrypted using AES-256-GCM with a 256-bit key and a unique 96-bit nonce per artifact And the Additional Authenticated Data includes session_id, artifact_type, and sequence_number And no plaintext artifacts or metadata are written to disk at any time, including temp/cache files And any modification to ciphertext or tag results in decryption failure and an integrity error is logged
Per-Session DEK Generation and Hardware Wrapping
Given a new capture session starts When the session key is created Then a unique 256-bit random DEK is generated for that session And the DEK is stored only as a blob wrapped by a hardware-backed, non-exportable key (Secure Enclave/Android Keystore) And the DEK is never exported or logged in plaintext and is zeroized from process memory within 100 ms after use And unwrapping the DEK requires device unlock per platform policy And on devices lacking hardware-backed keys, capture is blocked and an actionable error is shown
Device-Unlock File Protection Enforcement
Given encrypted artifacts exist on the device When the device is locked Then read access to artifacts and wrapped DEKs is denied until the device is unlocked And background tasks attempting reads while locked are deferred or fail without exposing plaintext And attempts to access artifacts via device backups or developer tools do not yield usable plaintext
Atomic, Crash-Safe Writes
Given the app is writing an encrypted artifact When the app is force-closed or the device loses power mid-write Then on next launch either the previous artifact remains intact or the new artifact is fully written and GCM-authenticated; no partial files exist And durable write (fsync) and atomic rename semantics are used And across 100 induced crash/power-loss tests, no recovered artifact fails GCM authentication
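The write-to-temp, fsync, atomic-rename pattern the criterion describes looks roughly like this on POSIX (encryption itself is out of scope here; `atomic_write` receives the already-encrypted bytes):

```python
import os, tempfile

def atomic_write(path: str, ciphertext: bytes) -> None:
    """Write to a temp file, fsync, then rename atomically: a crash leaves
    either the old file intact or the new one fully written, never a partial."""
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d, prefix=".tmp-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(ciphertext)
            f.flush()
            os.fsync(f.fileno())     # data durable before the rename
        os.replace(tmp, path)        # atomic replacement of the target
        # fsync the directory so the rename itself is durable (POSIX)
        dfd = os.open(d, os.O_RDONLY)
        try:
            os.fsync(dfd)
        finally:
            os.close(dfd)
    except BaseException:
        if os.path.exists(tmp):
            os.unlink(tmp)           # never leave stray partial files
        raise
```

Because the temp file lives in the same directory as the target, `os.replace` stays within one filesystem, which is what makes the rename atomic.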
Per-Session Key Rotation and Zeroization on Logout/Removal
Given multiple capture sessions have occurred When a new session starts Then a fresh DEK is generated and used exclusively for that session And when the user logs out or removes the device from the account Then all wrapped DEKs and key handles are deleted from secure storage and in-memory keys are zeroized And subsequent decryption attempts for prior offline artifacts fail with key-not-found and no plaintext is exposed And a forensic scan finds no recoverable key material remnants
Compromise Detection and Optional Secure Wipe
Given jailbreak/root/debugger tamper is detected by runtime checks When the compromise signal is raised Then the app halts capture immediately and initiates secure wipe of offline ciphertext, wrapped DEKs, and indexes within 5 seconds And a user-visible warning is displayed without revealing PHI or key details And telemetry records the event without sensitive data And after wipe, listing or decrypting artifacts returns no results
Background-Compatible and Resource-Efficient Operation
Given a 10-minute capture that includes background/foreground transitions on a reference device When encryption and writes run under OS-sanctioned background execution (e.g., iOS background tasks / Android WorkManager) with no network connectivity Then average CPU usage attributable to encryption stays under 15% and battery drain under 5% over the interval And no ANRs, background execution limit violations, or write failures occur And no unintended network calls are made during offline encryption
Per‑Session Hashing & Chain‑of‑Custody
"As a payer auditor, I want each session to have a verifiable cryptographic hash and signature so that I can confirm the data has not been altered before claim adjudication."
Description

Generate a canonical, deterministic package of each session (payload + normalized metadata) and compute a cryptographic fingerprint (e.g., SHA‑256 or BLAKE3) plus a verifiable signature (HMAC or ECDSA). Maintain an intra‑session Merkle tree to support partial verification and include monotonic timestamps, device identifiers, and app version in the manifest. Persist the hash/signature alongside the encrypted payload and expose the manifest for later verification, export, and payer review, establishing chain‑of‑custody from capture through sync.

Acceptance Criteria
Canonical Package Generation On Session Save
Given a completed session with payload and metadata When the user saves the session locally Then a canonical, deterministic package is generated using the current canonicalization spec version And the metadata is normalized (UTC timestamps, sorted keys, trimmed fields) with monotonic event timestamps And device_id, app_version, and session_id are present in the manifest And re-generating the package from the same inputs yields a byte-identical package and identical hash
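Byte-identical re-generation hinges on deterministic serialization; a minimal sketch, assuming JSON with sorted keys and compact separators as the canonical form (the real spec version and normalization rules are placeholders):

```python
import hashlib, json

def canonical_package(payload: dict, metadata: dict, spec_version="1") -> bytes:
    """Deterministic serialization: sorted keys, compact separators, and
    normalized metadata, so identical inputs always hash identically."""
    normalized = {k: (v.strip() if isinstance(v, str) else v)
                  for k, v in metadata.items()}     # trim string fields
    doc = {"spec_version": spec_version,
           "metadata": normalized, "payload": payload}
    return json.dumps(doc, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=True).encode()

def package_hash(package: bytes) -> str:
    return hashlib.sha256(package).hexdigest()
```

Key-order and whitespace differences between app versions then cannot change the hash, which is what the cross-version tolerance criteria later rely on.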
Hash and Signature Creation & Persistence
Given a canonical package is available When hashing is performed Then a hash is computed with the configured algorithm (SHA-256 or BLAKE3) and recorded in the manifest with algorithm identifier When signing is performed Then a signature (HMAC or ECDSA as configured) over the package hash is created using the device's protected key and stored alongside the encrypted payload And retrieving the session shows the same hash and signature bytes as originally written And any change to the payload or manifest causes signature/hash verification to fail
Intra-Session Merkle Tree Partial Verification
Given a session composed of N chunks/events When a Merkle tree is built over the chunk digests Then the Merkle root is recorded in the manifest and equals a recomputation from the chunks And an inclusion proof for any randomly selected chunk verifies against the stored root And altering any single chunk causes its inclusion proof to fail verification
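An intra-session Merkle tree with inclusion proofs can be sketched in a few lines; the proof encoding (sibling digest plus a side flag) is an illustrative choice, not a mandated format:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(chunks, index):
    """Build a Merkle tree over chunk digests; return (root, proof) where
    proof is a list of (sibling_digest, sibling_is_right) pairs."""
    level = [h(c) for c in chunks]
    proof, i = [], index
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        sib = i + 1 if i % 2 == 0 else i - 1
        proof.append((level[sib], sib > i))
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return level[0], proof

def verify_inclusion(chunk, proof, root) -> bool:
    digest = h(chunk)
    for sibling, is_right in proof:
        digest = h(digest + sibling) if is_right else h(sibling + digest)
    return digest == root
```

A proof is O(log N) digests, so a verifier can check one chunk without rehashing the whole session.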
Monotonic Timestamps and Device/App Metadata
Given session events are timestamped during capture When serializing to the manifest Then event timestamps are non-decreasing (monotonic) in UTC And the manifest includes device_id and app_version fields populated and validated against allowed formats And if device wall-clock moves backward during capture, the serialized timestamps remain non-decreasing
Manifest Export for External Verification
Given a stored session exists When an export is requested via API or UI Then the system returns a JSON manifest with schema version, hash algorithm, signature scheme, Merkle root, device_id, app_version, and content pointers And the exported manifest and encrypted payload can be used by a verifier to successfully validate hash, signature, and Merkle proofs without server access And export is rejected with a 4xx error if required fields are missing
Offline-to-Sync Chain-of-Custody Audit
Given a session was captured offline When the device reconnects and sync is initiated Then the client verifies the stored hash, signature, and Merkle root against the local package before upload And the server verifies the received package against the manifest and records a chain-of-custody audit entry with timestamp and result And if any verification fails, the sync is marked tampered with a reason code, and the record is not merged into the ledger
Sync Integrity Verification & Quarantine
"As a clinician, I want the system to automatically verify and quarantine altered sessions on sync so that my dashboards and treatment decisions are based on trustworthy data."
Description

On sync, validate session manifests by recomputing hashes/Merkle roots and verifying signatures. Flag any mismatches, clock anomalies, or manifest divergences as tamper‑suspect, automatically quarantine the affected sessions, exclude them from analytics/dashboards, and generate audit events. Notify assigned clinicians/admins with remediation actions (re‑capture request, justification note, or override with approver sign‑off). Maintain tolerant canonicalization to avoid false positives across app versions.

Acceptance Criteria
Integrity Pass on Sync
Given a locally captured session with a signed manifest compliant with the current schema and a stored clinic public key And the manifest canonicalizes to the stable canonical form When the device syncs Then the system recomputes chunk hashes and the session Merkle root And verifies the signature against the clinic public key And validates timestamps within ±5 minutes of server time And sets the session integrity status to "verified" And includes the session in analytics and dashboards on the next aggregation cycle (<5 minutes) And records an "integrity_verified" audit event with session_id, merkle_root, signature_fingerprint, and verification_time
Quarantine on Hash/Signature Mismatch
Given a session manifest where any recomputed hash or the Merkle root does not match the stored value Or the signature fails verification against the clinic public key When the device syncs Then the system sets session integrity status to "tamper_suspect" And quarantines the session And excludes the session from analytics, dashboards, and exports immediately And records an "integrity_failure" audit event with reason_code ("hash_mismatch" or "signature_invalid"), expected_vs_actual digests, and detection_time
Quarantine on Clock Anomaly
Given a session whose capture timestamps deviate from server time by more than a configurable threshold (default ±5 minutes) Or contain non-monotonic or impossible sequences (e.g., end_time < start_time) When the device syncs Then the system sets session integrity status to "tamper_suspect" And quarantines the session with reason_code "clock_anomaly" And records an audit event with device_time, server_time, drift_seconds, and threshold_seconds
Quarantine on Manifest Divergence
Given the server holds a prior baseline manifest for the same session_id And the incoming manifest differs outside the allowed fields list (client_version, cosmetic metadata order, locale-only changes) When the device syncs Then the system quarantines the session with reason_code "manifest_divergence" And records an audit event including a structured field-level diff, previous_merkle_root, new_merkle_root, and divergence_time
Analytics/Dashboard Exclusion for Quarantined Sessions
Given a session is in quarantined state When clinic analytics and dashboards are generated or refreshed Then the session contributes zero to adherence, rep totals, and time-on-task metrics And the session does not appear in clinician or payer exports And the session is visible only in the quarantine queue with status "tamper_suspect" And upon resolution to "verified" via approved override, the session is re-included in metrics within 5 minutes and a "quarantine_resolved" audit event is recorded
Notification and Remediation Workflow
Given a session transitions to quarantined state When the quarantine is recorded Then the assigned clinician and clinic admin are notified in-app immediately and by email within 2 minutes And the notification includes session_id, patient_id, reason_code, impact summary, and available actions And available actions include: Request re-capture to the patient, Add justification note, Submit override request And an override requires approver sign-off by a user with admin role and successful 2FA And each action creates an audit event with actor_id, action_type, timestamp, and outcome
Cross-Version Canonicalization Tolerance
Given two manifests for equivalent sessions produced by different app versions (e.g., key order differences, whitespace, line endings, timezone offsets representing the same instants, float values equal within 0.0001 for non-critical fields) When canonicalization is applied and hashes are recomputed Then the Merkle root and signature verification pass And the session is not quarantined And a "cross_version_canonicalization_ok" audit event is recorded for telemetry
Delayed Ledger Merge & Conflict Resolution
"As a mobile user working offline, I want my session data and edits to merge reliably later so that nothing is lost and the audit history remains consistent across devices."
Description

Maintain an append‑only local ledger of capture and edit events that merges with the server when connectivity resumes. Use idempotency keys and vector clocks/Lamport timestamps to deterministically order events and resolve conflicts while preserving full lineage. Support multi‑device scenarios, backoff/retry, and partial merges. Ensure that audit integrity is preserved during merge, with both sides producing the same post‑merge state and a clear provenance history.

Acceptance Criteria
Append-Only Local Ledger with Hash Chain
Given the device is offline and the user records captures and edits When events are written to the local ledger Then each event is appended without mutating prior records And each event includes an idempotency_key unique to the operation And each event includes a vector_clock and lamport_timestamp And each event stores content_hash and prev_hash forming a valid hash chain per session And any attempt to alter a historical event is rejected and logged as tamper_detected
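The hash-chained, append-only ledger could look like this sketch; `content_hash` covers every field except itself, so tampering with either the payload or `prev_hash` of any historical event breaks verification:

```python
import hashlib, json

GENESIS = "0" * 64

def content_hash(event: dict) -> str:
    body = {k: v for k, v in event.items() if k != "content_hash"}
    return hashlib.sha256(
        json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    ).hexdigest()

def append_event(ledger: list, event: dict) -> None:
    """Append only: prior records are never mutated."""
    event["prev_hash"] = ledger[-1]["content_hash"] if ledger else GENESIS
    event["content_hash"] = content_hash(event)
    ledger.append(event)

def verify_chain(ledger: list) -> bool:
    prev = GENESIS
    for e in ledger:
        if e["prev_hash"] != prev or e["content_hash"] != content_hash(e):
            return False                   # tamper_detected
        prev = e["content_hash"]
    return True
```

Each event would also carry its idempotency_key, vector_clock, and lamport_timestamp as ordinary fields inside the hashed body.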
Deterministic Online Merge Yields Convergent State
Given the client holds N unsynced events and the server holds M related events When connectivity resumes and a merge is initiated Then client and server compute the same total order using vector clocks/Lamport timestamps with device_id as deterministic tie-breaker And applying the ordered events yields identical post-merge state hashes on client and server And both sides persist the same merge_commit_id and terminal session hash And for N+M <= 10000, merge completes within 30 seconds
Conflict Resolution Across Multiple Devices
Given two devices edit the same record field without knowledge of each other When their event streams are merged Then conflicts are resolved deterministically by lamport_timestamp then device_id tie-breaker And the losing change is retained in lineage with conflict=true and winner_event_id reference And no duplicate field updates are applied And the final field value equals the winning event value on both client and server
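Deterministic ordering by (lamport_timestamp, device_id) makes the merge outcome independent of arrival order; a sketch with illustrative event fields:

```python
def merge_order(events):
    """Deterministic total order: Lamport timestamp, then device_id tie-break."""
    return sorted(events, key=lambda e: (e["lamport_timestamp"], e["device_id"]))

def resolve_field(events, field):
    """Last writer in the deterministic order wins; losing edits are kept
    in lineage with conflict=True and a reference to the winning event."""
    ordered = [e for e in merge_order(events) if e["field"] == field]
    winner = ordered[-1]
    lineage = [dict(e, conflict=(e is not winner),
                    winner_event_id=winner["event_id"]) for e in ordered]
    return winner["value"], lineage
```

Because both client and server sort by the same key, they converge on the same winner without negotiation.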
Partial Merge Resume with Backoff and Retry
Given a network interruption occurs mid-merge after the server acknowledges K events When the client retries Then the client resumes from event K+1 without resending acknowledged events And retries use exponential backoff starting at 2s, doubling up to 60s max with 0.5x-1.5x jitter And after 3 consecutive failures, an error is surfaced while background retries continue And eventual success produces a single merge record without duplicate applications
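The retry schedule above (2 s base, doubling to a 60 s cap, 0.5x-1.5x jitter) can be computed directly:

```python
import random

def backoff_delays(attempts, base=2.0, cap=60.0, rng=None):
    """Exponential backoff starting at `base`, doubling to `cap`,
    with multiplicative jitter drawn uniformly from [0.5, 1.5]."""
    rng = rng or random.Random()
    delays = []
    for n in range(attempts):
        nominal = min(base * (2 ** n), cap)   # 2, 4, 8, ..., capped at 60
        delays.append(nominal * rng.uniform(0.5, 1.5))
    return delays
```

Jitter spreads retries from many devices reconnecting at once, so the server is not hit by synchronized retry waves.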
Idempotent Replays and Duplicate Suppression
Given duplicate submissions occur for the same idempotency_key due to retries or multi-path When the server receives these events Then only the first event is stored and applied And subsequent duplicates return 200 with duplicate_of referencing the original event_id And the client treats the duplicate response as success and does not create new lineage nodes And system metrics record the deduplicated count for monitoring
Audit Trail and Provenance Integrity Post-Merge
Given a session includes offline edits and deletions When the merge completes Then the audit log shows full ordered lineage with actor, device_id, wall_time, lamport_timestamp, and reason And deletions are represented as tombstone events with provenance, not hard deletes And the session hash chain verifies root-to-tip without gaps And client and server compute identical audit_digest (SHA-256) for the merged session And any mismatch triggers rollback of the merge and marks the session audit_mismatch
Edit/Delete Diff & Immutable Audit Log
"As a compliance officer, I want an immutable audit trail with clear diffs of any edits or deletions so that I can demonstrate data integrity during internal and external audits."
Description

Record every edit and deletion as immutable, append‑only audit entries capturing who, when, where (device), why (reason code), and precisely what changed (field‑level before/after diff). Use WORM/retention‑locked storage on the server, persist tombstones for deletions, and surface human‑readable diff summaries within patient session views. Provide filters by user, date range, patient, and session to streamline clinic and payer audits.

Acceptance Criteria
Edit Event Logged with Full Metadata and Field-Level Diff
Given a clinician updates a patient session field and taps Save (online or offline) When the update is committed Then an append-only audit entry is created containing user ID and display name, ISO 8601 UTC timestamp, device ID and device type, patient ID, session ID, reason code (from configured list) or "Other" with a free-text reason ≥ 10 characters, and a field-level before/after diff listing field name, previous value, and new value And the audit entry includes a cryptographic hash of its payload and a unique immutable ID And if offline, the entry is queued locally for sync without loss
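A field-level before/after diff, restricted to changed fields as the criterion requires, is straightforward to sketch:

```python
def field_diff(before: dict, after: dict) -> list:
    """One entry per changed field only; unchanged fields are omitted.
    Added fields show before=None, removed fields show after=None."""
    entries = []
    for field in sorted(set(before) | set(after)):
        prev, new = before.get(field), after.get(field)
        if prev != new:
            entries.append({"field": field, "before": prev, "after": new})
    return entries
```

The resulting list slots directly into the audit entry alongside the who/when/where/why metadata, and is also what a human-readable "before -> after" summary would render from.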
Deletion Creates Tombstone and Preserves History
Given a user deletes a patient session record or field that supports deletion When the delete is confirmed with a reason code Then the target data is marked as deleted via a tombstone without physical removal from the audit store And a tombstone audit entry is appended capturing who, when, device, reason code, identifiers of the deleted entity, and a before snapshot And subsequent reads exclude the deleted record by default but return it when "include deleted" is enabled
WORM/Retention Lock Prevents Mutation of Audit Entries
Given any actor or process attempts to modify or delete an existing audit entry within the configured retention period When the operation is executed via API, database, or admin tools Then the system rejects the operation with HTTP 403 (or equivalent) and logs a security event with actor, time, and method And the original audit entry remains byte-identical (checksum/hash unchanged) And only appending a new corrective audit entry that references the original ID is permitted
Human-Readable Diff Shown in Patient Session View
Given a patient session with one or more audit entries When a reviewer opens the session’s Audit tab Then each entry displays a human-readable summary showing changed fields with before → after values, editor, timestamp (local display with UTC on hover), device, and reason code And entries are ordered newest first and time-zone consistent And multi-field edits are grouped within the same entry; unchanged fields are not shown
Audit Log Filtering by User, Date Range, Patient, and Session
Given audit entries exist across multiple users, patients, sessions, and dates When a reviewer applies any combination of user, date range (inclusive start/end, UTC-normalized), patient, and session filters Then only entries matching all selected filters are returned And no entries outside the filters appear in results And applying or clearing filters completes within 200 ms for up to 10,000 entries, with result counts and pagination remaining consistent
Reason Code Selection Required for Edits and Deletes
Given a user performs an edit or delete When the user submits the change Then a reason code must be selected from a configurable list; if "Other" is chosen, a free-text justification (≥ 10 characters, ≤ 500) is required And submission is blocked with inline validation if the requirement is not met And the chosen code and text are persisted in the audit entry and displayed in summaries
Role‑Based Audit Access & Export
"As a billing specialist, I want to export signed audit reports for selected patients or date ranges so that I can submit claims with evidence of data authenticity."
Description

Implement role‑based access controls for audit data (e.g., Clinician, Clinic Admin, Compliance Officer, Payer Viewer) with least‑privilege defaults and PII‑redaction options. Enable export of audit bundles (JSON manifest + CSV summary + signed PDF) that include hashes, signatures, timestamps, and responsible users, digitally signed by the server certificate. Log all access/export events and watermark exports for traceability.

Acceptance Criteria
RBAC: View and Export Permissions by Role
Given I am authenticated as a Clinician assigned to Patient A, When I open the Audit Log, Then I can view only entries related to Patient A and their sessions, And I cannot view audit entries for patients not assigned to me, And the Export action is not visible and API export endpoints return 403. Given I am authenticated as a Clinic Admin, When I open the Audit Log, Then I can view all audit entries for my clinic, And I can export audit bundles for selected date ranges or patients, And API and UI exports both succeed with HTTP 200 and valid downloads. Given I am authenticated as a Compliance Officer, When I open the Audit Log, Then I can view all clinic audit entries and access logs, And I can export audit bundles for any scope, including full-clinic, with optional redaction settings. Given I am authenticated as a Payer Viewer with access to Case X, When I access the Audit area, Then I cannot view raw audit logs, And I can only download audit bundles explicitly shared for Case X with redaction enforced, And attempts to access other cases or raw logs return 403.
Least-Privilege Defaults and Enforcement
Given a newly created user for each role, When no explicit permissions beyond their role are granted, Then only the minimum actions defined by that role are permitted, And all other audit access and export attempts return 403 and are logged. Given any user attempts to call an audit export API without export permission, When using valid authentication, Then the response is 403 with error code AUDIT_EXPORT_FORBIDDEN, And an access-denied event is logged with actor_user_id and actor_role. Given role permissions are updated by a Clinic Admin, When the change is saved, Then the new permissions take effect on the next request without requiring restart, And a permission-change event is logged with old_permissions, new_permissions, actor_user_id, and timestamp_utc.
PII Redaction Controls on Export
Given I am exporting an audit bundle, When I choose Redaction ON, Then the bundle masks PII fields (patient_name, email, phone, DOB, address, MRN) with consistent pseudonymous tokens, And the manifest records redaction=true and policy_version, And no PII appears in CSV, PDF, or manifest. Given I am a Payer Viewer, When I download any audit bundle, Then redaction is always enforced and cannot be disabled, And attempts to request unredacted exports return 403 and are logged. Given I am a Compliance Officer, When I choose Redaction OFF, Then I must enter a justification of at least 15 characters, And the justification is stored in the manifest and access log.
Export Bundle Composition and Digital Signature
Given an export is generated, When the download completes, Then the bundle is a single ZIP containing: manifest.json (JSON manifest), summary.csv (CSV summary), and report.pdf (signed PDF), And the manifest lists per-record SHA-256 hashes, exporter user_id, role, scope, and UTC ISO-8601 timestamps. Given the bundle signature is verified, When using the current server public certificate, Then signature validation succeeds for the ZIP content, And any modification to any file in the ZIP causes signature verification to fail. Given the signed PDF is opened, When inspecting its signature and metadata, Then it shows signer CN matching the server certificate and includes export_id and bundle_hash.
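Assembling the ZIP with per-file SHA-256 hashes recorded in `manifest.json` might look like this sketch (the server-certificate signature over the bundle is out of scope here; file and field names follow the criterion):

```python
import hashlib, io, json, zipfile

def build_bundle(files: dict, exporter: dict) -> bytes:
    """Assemble a ZIP of export files plus a manifest of per-file hashes."""
    manifest = {"exporter": exporter,
                "files": {name: hashlib.sha256(data).hexdigest()
                          for name, data in files.items()}}
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("manifest.json", json.dumps(manifest, sort_keys=True))
        for name, data in files.items():
            z.writestr(name, data)
    return buf.getvalue()

def verify_bundle(bundle: bytes) -> bool:
    """Recompute every file hash and compare against the manifest."""
    with zipfile.ZipFile(io.BytesIO(bundle)) as z:
        manifest = json.loads(z.read("manifest.json"))
        return all(hashlib.sha256(z.read(name)).hexdigest() == digest
                   for name, digest in manifest["files"].items())
```

In the full design the server would additionally sign the finished ZIP, so altering any member file breaks both the hash check and the signature.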
Access and Export Event Logging
Given any user views audit data or generates/downloads an export, When the action occurs, Then an audit event is stored with fields: event_id, actor_user_id, actor_role, action (view/export/download/denied), resource_scope, timestamp_utc, ip, device_id, and outcome (success/denied), And the event is immutable and queryable by authorized roles. Given the device is offline, When an access or export action occurs, Then the event is queued locally and synced on connectivity, And upon sync the original event timestamp_utc and device_id are preserved, And server_received_at_utc is added. Given a Compliance Officer filters the Access Log by user or date range, When the query is executed, Then only matching events are returned and fields match stored values exactly.
Export Watermark and Traceability
Given a PDF export is generated, When opened, Then a visible diagonal watermark appears on each page with export_id, requester_user_id, requester_role, generated_at_utc, and the text "MoveMate Vault Seal", And no patient PII is present in the watermark. Given a CSV export is generated, When opened, Then the first two lines contain a header watermark with the same fields, And CSV data rows remain unchanged. Given a watermarked export file is altered, When signature verification is performed, Then validation fails, preventing the altered export from passing integrity checks.
Offline Tamper Flags Included in Export
Given the audit bundle includes sessions captured offline, When the bundle is generated, Then the manifest and CSV include per-session client_hash, server_hash, tamper_flag (none/edited/deleted), device_id, and sync_timestamp_utc, And any mismatch between client_hash and server_hash sets tamper_flag accordingly. Given an offline edit or deletion occurred before sync, When exporting after sync, Then the affected audit rows are marked with tamper_flag ≠ none and include responsible user_id, And the signed PDF contains a Tamper Summary listing all flagged items.

Split Randomizer

One-tap, clinic-safe randomization that balances A and B arms by diagnosis, phase (post‑op vs chronic), severity, language, and visit cadence. Uses blocked and stratified randomization to prevent allocation drift between arms and auto-attaches assignments to each patient’s SnapCode so onboarding stays under 60 seconds. Cuts setup time, removes selection bias, and gives every clinician confidence that results are fair and comparable.
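Blocked, stratified randomization can be sketched as follows: each unique stratum combination gets its own sequence of permuted blocks, so every block of four assignments within a stratum splits 2:2 between arms (function and field names are illustrative assumptions):

```python
import random

def make_assigner(block_size=4, seed=None):
    """Blocked randomization within each stratum: every consecutive block of
    `block_size` assignments in a stratum has equal counts of A and B."""
    rng = random.Random(seed)
    blocks = {}   # stratum key -> remaining assignments in the current block

    def assign(diagnosis, phase, severity, language, cadence):
        key = (diagnosis, phase, severity, language, cadence)
        if not blocks.get(key):
            block = ["A", "B"] * (block_size // 2)
            rng.shuffle(block)            # fresh permuted block per stratum
            blocks[key] = block
        return blocks[key].pop()

    return assign
```

Stratifying by the full key keeps arms comparable on diagnosis, phase, severity, language, and cadence, while the small block size bounds how far the A/B counts can drift at any point in enrollment.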

Requirements

Stratification Profile Configuration
"As a clinic admin, I want to define and manage the stratification criteria for A/B assignments so that randomization remains balanced and relevant to our patient population and workflows."
Description

Provide a configurable schema for randomization strata including diagnosis categories, phase (post‑op vs chronic), severity scale, language, and visit cadence. Support clinic-level defaults and templates, field mappings to existing patient data, validation rules, and safe fallbacks when attributes are missing. Enable versioned profiles with change history, preview of expected allocation counts, and backward compatibility for existing studies. Integrate with EMR/app data sources via API to auto-populate strata where available, with manual override permissions based on role. Ensure localization for language labels and consistent coding for analysis.
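The safe-fallback behavior for missing attributes might look like this in outline. The default codes (UNK, und, NA) come from this requirement's validation rules; the function and field names are illustrative:

```python
FALLBACKS = {"diagnosis": "UNK", "language": "und", "severity": "NA"}

def apply_fallbacks(attrs: dict, fallbacks: dict = FALLBACKS):
    """Fill missing/empty strata with configured default codes; return the
    resolved attributes plus the fields that fell back, so the UI can flag
    them and the audit log can record them."""
    resolved, flagged = dict(attrs), []
    for field, default in fallbacks.items():
        if not resolved.get(field):
            resolved[field] = default
            flagged.append(field)
    return resolved, flagged

resolved, flagged = apply_fallbacks({"diagnosis": "ACL", "language": None})
assert resolved["language"] == "und" and resolved["severity"] == "NA"
assert flagged == ["language", "severity"]
```

The key design point is that fallbacks are non-blocking: randomization proceeds, but the flagged fields travel with the assignment for audit and analysis.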

Acceptance Criteria
Clinic Default and Template-Based Stratification Profile
Given a clinic admin with configuration permissions When they create or edit a stratification profile Then they can add and reorder strata for diagnosis categories, phase (post‑op|chronic), severity scale (integer range configurable), language, and visit cadence (discrete categories) And they can save the profile as a named template with a semantic version (e.g., 1.2.0) And they can designate one template as the clinic default And when starting a new study, the clinic default is auto-applied, with the template name and version visibly confirmed before activation And attempting to save a profile without at least one stratum and two arms is blocked with a specific validation error
EMR/App Field Mapping and Auto-Population
Given field mappings are configured between profile strata and EMR/app data fields When a patient is loaded or refreshed via API Then mapped values auto-populate the corresponding strata fields without manual input And a mapping coverage indicator shows the percent of patients with complete auto-population per stratum And each auto-population event records source system, field, value, and timestamp in the audit log And if an API call fails or returns empty, the UI clearly indicates unmapped fields and leaves them editable (subject to role)
Validation Rules and Safe Fallbacks on Missing Attributes
Given a profile with validation rules for each stratum When a user attempts to save patient strata values Then the system enforces rules: severity must be within the configured range; language must be a valid code from the configured set; diagnosis and cadence must match allowed categories; phase must be one of {post‑op, chronic} And invalid entries block save with field-specific error messages And when randomization is triggered with missing/invalid attributes Then the system applies safe fallbacks using configured default category codes (e.g., diagnosis=UNK, language=und, severity=NA) And the fallback application is non-blocking, visibly flagged, and recorded in the audit log
Versioned Profiles with Change History and Backward Compatibility
Given an active study pinned to profile version v1.x When an admin publishes a new profile version v2.0 Then existing active studies remain pinned to v1.x with no change in randomization behavior And the change history captures who, when, what fields/values changed, and the diff between versions And users can preview diffs and clone v1.x to start v2.0 without editing the original And exports include the profile version used per patient for analysis reproducibility
Allocation Preview for Blocked and Stratified Randomization
Given a selected profile, a chosen block size per stratum, and expected enrollments per stratum When a user runs the allocation preview Then the system displays expected A/B counts per stratum and overall totals And indicates that within any fully completed block the arm imbalance is 0 And for incomplete final blocks, the maximum temporary imbalance per stratum is ≤ block_size/2 and is explicitly shown And the preview can be exported to CSV and reflects current profile version and block parameters
Role-Based Manual Overrides with Audit Trail
Given role permissions where Admin and Clinician may override and Staff may not When a permitted user manually overrides an auto-populated stratum before randomization Then the system requires a reason, captures old value, new value, data source, user, and timestamp And overrides after randomization are blocked unless elevated Admin override is used, which is separately logged and does not retroactively alter existing assignments And attempts by unauthorized roles are denied with an access error
Localization of Language Labels with Stable Analysis Codes
Given the clinic locale and a configured language code set (e.g., ISO 639‑1 or clinic-defined mapping) When users view or edit language strata Then UI labels appear in the clinic locale while stored values remain the stable analysis codes And if a translation is missing, the system falls back to the default (English) label without changing the stored code And data exports include the stable code and a canonical label column to ensure consistent downstream analysis
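One way to compute the allocation-preview figures described in these criteria, namely the expected per-arm totals for a 1:1 ratio and the worst-case temporary imbalance in an incomplete final block (always at most block_size/2), is sketched below with illustrative names:

```python
def allocation_preview(expected_by_stratum: dict, block_size: int) -> dict:
    """Expected A/B totals per stratum for a 1:1 ratio, plus the worst-case
    temporary imbalance while the final block is incomplete."""
    preview = {}
    for stratum, n in expected_by_stratum.items():
        remainder = n % block_size
        preview[stratum] = {
            "A": n // 2,
            "B": n - n // 2,  # any odd patient sits in one arm until the next draw
            # Worst case: the incomplete block has drawn min(r, block_size - r)
            # more of one arm than the other, which is always <= block_size / 2.
            "max_temp_imbalance": min(remainder, block_size - remainder),
        }
    return preview

p = allocation_preview({"ACL/post-op": 37}, block_size=6)
assert p["ACL/post-op"]["max_temp_imbalance"] == 1
```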
Blocked Randomization Engine
"As a clinician, I want reliable one-tap randomization that fairly assigns patients within their stratum so that I can trust the results are unbiased and consistent across the clinic."
Description

Implement a stratified, permuted-block randomization service that balances A and B arms within each stratum and prevents allocation drift. Support variable block sizes, cryptographically secure seeding, allocation concealment until commit, idempotent assignment on retries, and high-concurrency safety. Store assignment decisions with stratum snapshot and engine version for reproducibility. Allow k-arm extensibility beyond A/B and configurable allocation ratios. Provide guardrails for minimum stratum size, handling of missing attributes, and deterministic replay for audits. Include unit/integration tests, performance SLAs (<150 ms P95), and observability (metrics/traces).
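A minimal sketch of one permuted block honoring a configurable allocation ratio, shuffled with a cryptographically secure RNG, is shown below. This is an illustration only; the production engine described above would additionally persist seed metadata for deterministic replay, enforce concurrency safety, and conceal upcoming positions:

```python
import secrets

def make_block(arms=("A", "B"), ratio=(1, 1), block_size=4):
    """Build one permuted block whose per-arm counts honor the ratio,
    shuffled with a cryptographically secure RNG."""
    unit = sum(ratio)
    if block_size % unit:
        raise ValueError("block size must be divisible by the ratio sum")
    block = [arm
             for arm, r in zip(arms, ratio)
             for _ in range(r * block_size // unit)]
    secrets.SystemRandom().shuffle(block)  # OS entropy, not a seeded PRNG
    return block

block = make_block(block_size=6)
assert sorted(block) == ["A", "A", "A", "B", "B", "B"]
```

Assignments are then consumed position by position within each stratum's current block, which is what bounds imbalance at every block boundary.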

Acceptance Criteria
Balanced A/B within Stratum using Permuted Blocks
Given a stratum defined by diagnosis, phase, severity, language, and visit cadence with 1:1 allocation and block sizes [4,6,8] When 1,000 patients are sequentially randomized into this stratum Then at each completed block boundary cumulative A and B counts are equal And after all assignments |A-B| = 0 within the stratum And no block violates its per-arm counts for the configured ratio
Variable Block Sizes, Drift Prevention, and Guardrails under Concurrency
Given variable block sizes [4,6,8] and 200 concurrent enrollment requests targeting the same stratum When the engine processes these requests Then no two requests consume the same block position And no block is overfilled And within each in-progress block, per-arm assignment count never exceeds its per-block quota And if required stratum attributes are missing, the request is routed to the configured fallback stratum or rejected with 422 per policy, and no assignment is leaked And if projected stratum size is below the configured minimum, the engine auto-buckets to the next coarser stratum per policy and records the fallback in the assignment metadata
Allocation Concealment Until Transaction Commit
Given a pre-commit request without a durable patient assignment When calling the randomization API Then the response does not reveal the next arm, block ID, or sequence position And only after a successful commit with a unique patient key is acknowledged, the assigned arm is returned And logs and traces redact any future arm or sequence information
Idempotent Assignment on Retries and Network Failures
Given a patientId and stratum snapshot S When the same randomization request is retried N times due to timeouts or conflicts Then all successful responses return the same arm and assignmentId And exactly one persisted record exists for that patientId within S And duplicate retries return 200 with the original payload within 150 ms P95
k-Arm Extensibility with Custom Allocation Ratios
Given a 3-arm configuration [ArmA, ArmB, ArmC] with ratio 2:1:1 and block sizes [8,12] When 2,400 patients are randomized within one stratum Then every completed block contains [4,2,2] or [6,3,3] per arm respectively And cumulative proportions across all assignments are within ±1% of 50%, 25%, 25% And configurations whose block sizes are not divisible by the ratio sum are rejected with HTTP 400 and validation details
Persistent Audit Trail with Stratum Snapshot and Deterministic Replay
Given an assignment is made When inspecting persistence Then the record includes patientId, assignedArm, timestamp, stratum attributes, stratum snapshot hash, allocation ratio, block size, block id, engine version, RNG metadata hash, requestId, and operator/clinic identifiers And when running the engine in replay mode with the stored snapshot, engine version, and RNG metadata, the historical sequence is reproduced exactly without mutating production state And the entropy source is a cryptographically secure RNG and seeds are not reused across blocks
Performance P95 <150 ms with Metrics, Traces, and Tests
Given steady-state load of 200 RPS with 50% targeting the same stratum When measuring end-to-end latency over a 10-minute window Then P95 latency for POST /randomize is < 150 ms and P99 < 300 ms with error rate (5xx) < 0.1% And metrics are emitted: randomizations_total{arm,stratum}, randomization_latency_ms, randomization_errors_total{code}, block_imbalance_gauge{stratum}, rng_entropy_bytes_total And traces include: assignmentId, stratum snapshot hash, engine version, decision duration, and lock-wait timing with PII redacted And CI reports ≥90% statement coverage for the engine module and all integration tests against persistence and concurrency harness pass
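The idempotent-retry requirement above can be illustrated with an in-memory store; a real service would enforce the same invariant with a unique database constraint on (patientId, stratum snapshot). Names here are illustrative:

```python
class AssignmentStore:
    """Idempotent assignment: retries for the same patient within the same
    stratum snapshot always return the original record."""
    def __init__(self):
        self._records = {}

    def assign(self, patient_id: str, snapshot_hash: str, draw_arm):
        key = (patient_id, snapshot_hash)
        if key not in self._records:  # first request consumes a block position
            self._records[key] = {"arm": draw_arm(),
                                  "assignment_id": len(self._records) + 1}
        return self._records[key]     # retries return the persisted record

store = AssignmentStore()
first = store.assign("p1", "s-abc", lambda: "A")
retry = store.assign("p1", "s-abc", lambda: "B")  # retried request, new draw ignored
assert retry == first
```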
One‑Tap Randomize UI
"As a therapist onboarding a patient, I want to randomize with a single tap and see the assignment immediately so that I can finish setup within a minute and move on to care."
Description

Deliver a single-action UI that displays the patient’s stratum summary, confirms eligibility, and performs randomization with one tap. Show clear loading/committed states, the assigned arm, and next steps. Provide guardrails (e.g., duplicate prevention, offline/poor connectivity handling with queued requests), accessible design (WCAG AA), and concise microcopy for clinic safety. Support <60-second onboarding by minimizing required inputs and auto-filling attributes from patient records. Include error states with retry and contact support paths.
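The offline guardrail, at most one queued randomization per patient with a unique operation ID the server can deduplicate on flush, could be sketched as follows (illustrative names, not the actual client API):

```python
import time
import uuid

class RandomizeQueue:
    """Offline guardrail: one queued randomization per patient, with a
    unique op ID so the server can deduplicate when the queue flushes."""
    def __init__(self):
        self._pending = {}

    def enqueue(self, patient_id: str) -> dict:
        if patient_id in self._pending:  # duplicate tap while already queued
            return self._pending[patient_id]
        op = {"op_id": str(uuid.uuid4()), "patient_id": patient_id,
              "queued_at": time.time()}
        self._pending[patient_id] = op
        return op

    def flush(self, send) -> list:
        """Send queued ops; the first committed assignment wins server-side."""
        results = [send(op) for op in self._pending.values()]
        self._pending.clear()
        return results

q = RandomizeQueue()
op1 = q.enqueue("p1")
op2 = q.enqueue("p1")  # second tap reuses the same queued operation
assert op1["op_id"] == op2["op_id"]
```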

Acceptance Criteria
Stratum Summary and Eligibility on Load
Given a patient record with diagnosis, phase (post‑op or chronic), severity, language, and visit cadence exists, When the One‑Tap Randomize screen opens, Then those attributes are auto‑filled into a stratum summary card and the eligibility status is displayed as Eligible or Ineligible with at least one human‑readable reason if Ineligible. Given one or more required stratum attributes are missing, When the screen opens, Then the UI clearly indicates the missing fields and offers autofill suggestions from the patient record without blocking the screen.
One‑Tap Randomization Execution
Given an eligible patient with all stratum attributes resolved, When the clinician taps the Randomize button once, Then exactly one server request is sent containing the patient ID and stratum, And the assignment is generated using blocked and stratified randomization across diagnosis, phase, severity, language, and visit cadence, And the server returns Arm A or Arm B within 2 seconds on a good connection. Given a successful response, When the assignment is received, Then the assignment is persisted to the patient record, is immutable without admin override, and the operation is idempotent based on patient ID (retries or duplicate taps do not create new assignments).
Loading, Committed, and Next Steps UI
Given a randomization request is pending, When the clinician initiates randomization, Then the UI shows a loading state with spinner and text “Randomizing…” and disables the Randomize button until completion. Given the assignment is committed, When the server confirms, Then the UI displays “Assigned: Arm A|B,” auto‑attaches the assignment to the patient’s SnapCode, and shows concise next‑step instructions, And a screen‑reader announcement is triggered for the committed state.
Offline and Poor Connectivity Queueing
Given the device is offline or the request times out, When the clinician taps Randomize, Then the app enqueues a single randomization operation with a unique ID, shows a Queued state with last sync time, disables further randomization, and provides a Retry Now control. Given connectivity is restored, When the queue flushes, Then the first successful server response sets the assignment, conflicts are resolved by honoring the first committed assignment on the server, the UI reflects the final assignment, and the queued state clears.
Accessibility and Clinic‑Safe Microcopy
Given the One‑Tap Randomize screen, When evaluated against WCAG 2.2 AA, Then all interactive elements have accessible names/roles/states, focus order matches visual order, live regions announce loading and committed states, and keyboard navigation has no traps. Given the UI visual design, When measured, Then text contrast is ≥4.5:1 and touch targets are ≥44×44 points, And microcopy is plain‑language, unambiguous, and avoids unexplained abbreviations or jargon.
Onboarding Time and Minimal Inputs
Given a typical patient with complete records, When starting at the One‑Tap Randomize screen, Then the clinician can complete randomization in under 60 seconds without manual input (p50 ≤ 30s, p90 ≤ 60s over 20 test runs). Given one or more stratum attributes are missing, When prompted, Then the UI requires at most one manual input to establish eligibility and still enables completion in under 60 seconds (p50 ≤ 45s, p90 ≤ 60s).
Error States, Retry, and Support Path
Given a server or validation error occurs (4xx/5xx), When randomization fails, Then the UI displays an error banner with human‑readable summary and error code, provides a Retry action, and shows a Contact Support link prefilled with patient ID, stratum, timestamp, and request ID. Given repeated failures (≥3 retries or ≥3 minutes elapsed), When the user remains on the screen, Then the UI offers Save and Exit while preserving state and queue, and records the incident in audit logs.
SnapCode Auto‑Attach & Sync
"As a clinician, I want each patient’s assignment to be automatically linked to their SnapCode so that onboarding remains fast and error-free across our devices and apps."
Description

Automatically bind the assignment to the patient’s SnapCode upon commit, ensuring assignment persistence across devices and sessions. Handle pre-existing SnapCodes, conflicts, and re-attachment idempotently. Encrypt identifiers at rest and in transit, and sync to patient profile, exercise plan templates, and reporting. Define rules for attribute changes post-assignment (freeze by default; admin override with audit note). Support offline capture with background sync and reconciliation, and emit events to trigger onboarding flows and nudges.
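The idempotent re-attach and conflict behavior described here might look like this in outline; class and field names are illustrative, and a real service would back this with persistent, encrypted storage:

```python
class SnapCodeBindings:
    """Idempotent SnapCode binding: re-binding the same assignment succeeds
    without duplicates; a different assignment raises a conflict."""
    def __init__(self):
        self._bound = {}

    def bind(self, snap_code: str, assignment_id: str) -> dict:
        existing = self._bound.get(snap_code)
        if existing is None:
            record = {"assignment_id": assignment_id, "version": 1}
            self._bound[snap_code] = record
            return record
        if existing["assignment_id"] == assignment_id:
            return existing  # retry: same record, version unchanged
        raise ValueError(f"conflict: {snap_code} already bound to "
                         f"{existing['assignment_id']}")

b = SnapCodeBindings()
first = b.bind("SC-1", "A1")
assert b.bind("SC-1", "A1") is first  # idempotent retry, no duplicate
```

A conflicting bind would surface the existing binding metadata and route to the admin-override pathway rather than silently overwriting.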

Acceptance Criteria
Auto-Attach on Commit Across Devices & Sessions
Given a patient with a generated SnapCode and a pending A/B assignment When the clinician taps Commit Then the system binds the assignment to the SnapCode and persists the mapping And the binding is immediately visible on the patient profile And the same binding appears when the clinician signs in on a different device under the same clinic account And the binding remains after app restart and session expiry And an audit log entry is created with timestamp, clinician ID, and a hashed SnapCode identifier
Idempotent Re-Attach and Conflict Handling
Given a previous successful binding exists for SnapCode S and assignment A When the commit operation for SnapCode S and assignment A is retried due to network issues Then the system returns success without creating a duplicate record And the binding version and checksum remain unchanged Given SnapCode S is already bound to assignment A1 When a new commit attempts to bind SnapCode S to a different assignment A2 Then the system rejects the operation with a conflict response And the response exposes the existing binding metadata And an admin override pathway is offered
Encryption of Identifiers at Rest and In Transit
Given any storage of SnapCode, patient identifier, or assignment mapping Then data at rest is encrypted using AES‑256 or stronger And encryption keys are managed via KMS/HSM with rotation at least every 90 days Given API requests or event transmissions containing these identifiers When data is transmitted Then TLS 1.2+ is enforced and plaintext transmission is blocked And failed certificate validation blocks the request and raises an alert
Multi-Target Sync to Profile, Templates, and Reporting
Given a successful SnapCode-to-assignment binding When the sync process runs Then the patient profile displays the bound assignment And the current exercise plan template references the correct randomized arm And the reporting store receives an immutable AssignmentBound record within 60 seconds And repeated syncs do not create duplicates
Post-Assignment Attribute Freeze with Admin Override and Audit
Given an assignment has been bound to a SnapCode Then diagnosis, phase, severity, language, and visit cadence used for stratification are frozen When a clinician attempts to change any frozen attribute Then the change is blocked with a clear message indicating the freeze policy When an admin performs an override providing a reason Then the change is applied And an audit note records admin ID, timestamp, original values, new values, and reason
Offline Commit with Background Sync and Deterministic Reconciliation
Given the device is offline at the time of commit When the clinician taps Commit Then the binding request is queued locally in encrypted form And the UI displays a Pending Sync state When connectivity is restored Then the queued binding is transmitted automatically in the background And the server acknowledges with the final binding state If a conflict exists on sync Then reconciliation chooses the earliest commit timestamp deterministically And the clinician receives a notification of the outcome
Event Emission to Trigger Onboarding and Nudges
Given a successful assignment binding When the commit is acknowledged by the server Then an AssignmentBound event is emitted with patientId (hashed), snapCode hash, assignment arm, timestamp, and source device ID And downstream services trigger onboarding within 1 minute of event emission And delivery is retried with exponential backoff for at least 24 hours upon failure And events failing after retries are placed on a dead‑letter queue for manual review
Audit Trail & Compliance Export
"As a research coordinator, I want a complete audit trail and exportable records of randomizations so that we can document methodology and satisfy compliance reviews."
Description

Maintain an immutable, queryable log of all randomization events including user, timestamp, patient anonymized ID, stratum snapshot, block metadata, engine seed/version, and outcome. Provide role-based access controls, PHI minimization/redaction in exports, and configurable retention. Enable one-click CSV/PDF exports with clinic branding and a reproducibility appendix (method, parameters, checksum). Surface tamper-evident signatures and time sync validation to meet clinic governance requirements.
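The tamper-evident chaining could work roughly as follows: each entry hashes its canonical content together with the previous entry's hash, so altering any historical entry breaks verification from that point on. This sketch omits the per-entry signature and time-sync checks the requirement also calls for:

```python
import hashlib
import json

def entry_hash(content: dict, prev_hash: str) -> str:
    canon = json.dumps(content, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((canon + prev_hash).encode("utf-8")).hexdigest()

def append(chain: list, content: dict) -> None:
    prev = chain[-1]["entry_hash"] if chain else "0" * 64  # genesis sentinel
    chain.append({"content": content, "prev_hash": prev,
                  "entry_hash": entry_hash(content, prev)})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for e in chain:
        if e["prev_hash"] != prev or e["entry_hash"] != entry_hash(e["content"], prev):
            return False
        prev = e["entry_hash"]
    return True

chain = []
append(chain, {"event": "randomized", "arm": "A"})
append(chain, {"event": "export"})
assert verify(chain)
chain[0]["content"]["arm"] = "B"  # tamper with an earlier entry
assert not verify(chain)
```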

Acceptance Criteria
Immutable Randomization Event Logging
Given a clinician completes a randomization When the assignment is committed Then an audit event is appended with fields: EventID, TimestampUTC (ISO 8601 Z), UserID, PatientAnonID, StratumSnapshot (diagnosis, phase, severity, language, visitCadence), BlockMetadata, EngineSeed, EngineVersion, Outcome, RequestID Given an audit event exists When any user attempts to modify or delete it via UI or API Then the system rejects the operation with 403/405 and records a security audit entry; no mutation occurs Given network retries occur When the same RequestID is replayed Then only one event exists and subsequent duplicates return 409 without creating new entries Given a new event is recorded When validation runs Then all required fields are non-null and pass schema validation
Tamper-Evident Chain and Time Sync Validation
Given the audit log has a latest entry When a new entry is appended Then the entry includes PrevHash and EntryHash (SHA-256 over canonical content + PrevHash) and an EntrySignature verifiable by the platform public key Given any entry in the chain is altered When chain verification runs Then verification fails and a chain-break alert is raised; exports are disabled until integrity is restored Given NTP time offset is monitored When offset exceeds the configured threshold (default 5s) Then TimeSyncStatus=OutOfSync is recorded, new audit writes are blocked, and an admin banner is displayed Given an event is recorded When reviewing metadata Then TimestampUTC is sourced from synchronized clock and MonotonicSequence increments by 1 with no gaps
Role-Based Access and Least-Privilege
Given a user with role Clinician in a tenant When accessing the audit log Then they can view entries scoped to their tenant but cannot export or view signatures/private key material (403) Given a user with role Auditor in a tenant When accessing the audit log Then they can view and export entries scoped to their tenant and verify signatures Given a user with role Admin in a tenant When accessing settings Then they can configure retention and RBAC and view/export entries scoped to their tenant Given a user with role Support When requesting audit content Then access is denied (403); only system health metadata is visible without PHI Given an unauthorized request to view or export When processed Then a 403 is returned and the attempt is logged without exposing PHI
PHI Minimization and Redaction in Exports
Given an authorized export request When CSV and PDF are generated Then only whitelisted fields are included: EventID, TimestampUTC, PatientAnonID, UserRole, StratumSnapshot, BlockMetadata, EngineSeed, EngineVersion, Outcome, EntryHash, PrevHash, EntrySignature, TimeSyncStatus, MonotonicSequence, ExportMetadata; no names, DOB, MRN, addresses, phone, or email appear Given audit data contains free-text or optional notes When exporting Then such fields are excluded from export by design Given a user attempts to include PHI via query parameters When export runs Then PHI fields remain excluded and the export spec is enforced
One-Click CSV/PDF Export with Branding and Reproducibility Appendix
Given an auditor selects a date range and clicks Export Audit Trail When the export job runs Then a ZIP containing CSV and PDF is produced within 10 seconds for up to 100k events and is named <clinicSlug>_audit_<UTC-timestamp>.zip Given the PDF export When opened Then the cover includes clinic logo/name, selected filters, generated-at TimestampUTC, and page numbers Given the reproducibility appendix section in the PDF When reviewed Then it lists method (blocked/stratified), parameter values (block sizes, strata definitions), engine version, engine seed, PRNG algorithm, config checksums (SHA-256), and an overall export checksum with verification steps Given the CSV export When validated Then headers and order match the export spec v1.0 and the file is RFC 4180 compliant, UTF-8 without BOM
Configurable Retention and Purging with Evidence Preservation
Given a tenant retention policy (default 3 years) When an Admin updates the retention period Then the change requires confirmation, is logged with before/after values and effective date, and takes effect on the next purge cycle Given records exceed the retention period and no legal hold exists When the scheduled purge runs Then expired events are removed and a purge summary event (counts, min/max timestamps, segment root hash) is appended; remaining chain integrity verifies via retained checkpoints Given a legal hold is active for a date range When the purge runs Then entries in the held range are not deleted and the skip is recorded Given a subsequent export after purge When generated Then it excludes purged records and includes the latest purge summary in the appendix
Queryable Filtering and Performance
Given an auditor opens the audit log When filtering by date range, user, PatientAnonID, diagnosis/phase/severity/language/visitCadence, block ID, or outcome Then results reflect filters exactly and counts match export results Given a result set up to 10k events When executing a filter query Then the response returns within 2 seconds; up to 100k events return within 5 seconds Given pagination parameters (pageSize ≤ 200 and pageToken) When paging through results Then ordering is stable by TimestampUTC then MonotonicSequence and no entries are missing or duplicated across pages Given filters are applied When exporting Then both CSV and PDF contain only the filtered entries and list the active filters on the cover page
Balance Monitoring & Drift Alerts
"As a clinic lead, I want to monitor allocation balance and receive alerts about drift so that I can ensure our A/B comparisons remain fair and interpretable over time."
Description

Offer a dashboard that visualizes allocation balance across A/B arms by stratum, clinic, and time window, with thresholds for acceptable imbalance. Compute and display real-time metrics (allocation ratios, enrollment counts, predicted balance given queued patients) and send alerts (in-app/email) when drift exceeds thresholds or when strata are under-enrolled. Provide suggestions (e.g., adjust block size/ratio) with approval workflow and record changes for analysis integrity.
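The core imbalance metric and threshold check behind the drift alerts can be sketched simply; the default 10-percentage-point threshold matches the acceptance criteria, while the function names are illustrative:

```python
def imbalance_pp(count_a: int, count_b: int) -> float:
    """Absolute arm difference in percentage points of total enrollment."""
    total = count_a + count_b
    if total == 0:
        return 0.0
    return abs(count_a - count_b) / total * 100

def breaches(counts_by_stratum: dict, threshold_pp: float = 10.0) -> list:
    """Strata whose imbalance exceeds the configured threshold."""
    return [stratum for stratum, (a, b) in counts_by_stratum.items()
            if imbalance_pp(a, b) > threshold_pp]

assert imbalance_pp(60, 40) == 20.0
assert breaches({"ACL": (60, 40), "TKA": (51, 49)}) == ["ACL"]
```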

Acceptance Criteria
Dashboard Balance Overview by Stratum and Clinic
Given an authenticated Clinic Admin or Researcher When they open the Balance Monitoring dashboard and select a time window (Last 7 days, Last 30 days, or Custom) Then for each clinic and stratum the dashboard displays A and B enrollment counts, the A:B ratio, and an imbalance status against the configured threshold And totals across all displayed rows equal the sum of per-row counts And a data freshness timestamp is visible and is no older than 30 seconds And applying any filter (clinic, diagnosis, phase, severity, language, visit cadence, time window) updates results within 2 seconds And exporting the current view to CSV produces a file containing the visible rows, columns, and applied filters
Real-Time Allocation Metrics Update
Given a new patient is assigned to arm A or B or a queued patient progresses to enrolled When the assignment event is processed Then the affected stratum’s counts and A:B ratio update on the dashboard within 5 seconds And the predicted balance metric includes queued (pending onboarding) patients within 5 seconds of queue changes And if the backend is more than 60 seconds behind, a visible "Degraded" indicator is shown until caught up
Drift Threshold Breach Alerting
Given an imbalance threshold is configured per stratum and time window (default: max 10 percentage points absolute difference between arms) When a stratum’s imbalance exceeds its threshold in the selected time window Then an in-app alert banner appears within 10 seconds to users with alert permissions And an email notification is sent to designated recipients within 60 seconds containing clinic, stratum, counts, ratio, threshold, and a deep link to the filtered dashboard And repeated alerts for the same stratum and window are deduplicated for 30 minutes And acknowledging the alert silences repeats for 2 hours while still logging subsequent breaches
Under-Enrolled Strata Notification
Given a minimum enrollment threshold is configured per stratum and time window (e.g., < N enrollments in last 30 days) When a stratum’s enrollment falls below the configured minimum Then an in-app notice is displayed within 10 seconds and an email is sent within 60 seconds to designated recipients And the notice includes current enrollment, target minimum, time window, and a link to filter the dashboard to the affected stratum And once enrollment rises above the threshold, the notice auto-resolves and resolution is logged
Suggestion Generation and Approval Workflow
Given a drift breach or under-enrollment persists for a configurable duration (default 24 hours) or predicted balance projects a breach within the next block When the system generates a suggestion Then suggestions propose specific changes (e.g., adjust block size within allowed range, modify upcoming block A:B ratio within allowed bounds) with predicted impact and confidence And users with Admin role can Approve, Edit, or Reject a suggestion, providing a required justification And approved changes take effect at the next block boundary and do not retroactively affect already randomized patients And all actions (proposed, approved, edited, rejected) capture user, timestamp, before/after values, rationale, and related alert IDs
Audit Trail and Integrity Reporting
Given any change to thresholds, block sizes, or A:B ratios is made When viewing the audit log Then each entry shows who made the change, what changed, when (ISO 8601 with timezone), why (justification), and the scope (clinic/stratum) And the audit log can be filtered by date range, user, clinic, stratum, and change type and returns results within 2 seconds for up to 10,000 records And the audit log can be exported to CSV and JSON with exact field parity And an integrity report can reconstruct the effective configuration for any past date-time and matches dashboard calculations for that period

Power Planner

A pre-trial calculator that uses your clinic’s historical adherence variance and recovery timelines to recommend sample size, enrollment duration, and stop criteria. Visual power curves estimate when you’ll have enough signal for a trustworthy call, so pilots don’t stall or end underpowered. Spend fewer weeks guessing and more time standardizing what works.
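Under the hood, a pre-trial calculator of this kind typically rests on a standard two-proportion sample-size formula. The sketch below uses a two-sided normal approximation with unpooled variance; this is an assumed methodology for illustration, not the Power Planner's specified model:

```python
import math
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Patients per arm to detect a change in adherence rate from p1 to p2
    (two-sided test, normal approximation, unpooled variance)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. hoping to lift home-exercise adherence from 55% to 70%
n = n_per_arm(0.55, 0.70)
```

Sweeping the effect size or power produces the visual power curves the blurb describes: smaller expected effects drive the required sample, and hence enrollment duration, up sharply.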

Requirements

Historical Data Ingestion & Normalization
"As a clinic researcher, I want to pull and standardize our historical adherence and recovery data so that the power estimates reflect our real-world variance."
Description

Ingest clinic-level historical adherence and recovery timeline data from MoveMate and CSV uploads, map disparate fields into a unified schema, and automatically handle missing values, outliers, and unit inconsistencies. Compute per-exercise and per-cohort adherence distributions, recovery time distributions, and attrition rates required by the calculator. Enforce PHI minimization and role-based access while maintaining audit logs of imports and transformations. Support configurable lookback windows, data freshness indicators, and nightly incremental refresh so power inputs reflect current practice patterns. Expose a validated, queryable dataset to downstream Power Planner components with clear data quality checks and error reporting.
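Two of the normalization rules, winsorizing extreme values at the 99.5th percentile and treating missing reps as 0 unless the session was skipped, could be sketched as follows (nearest-rank percentile; function names are illustrative):

```python
import math

def winsorize(values, upper_pct=99.5):
    """Cap values above the nearest-rank upper percentile; returns
    (capped_value, is_winsorized) pairs."""
    ordered = sorted(values)
    rank = max(0, math.ceil(upper_pct / 100 * len(ordered)) - 1)
    cap = ordered[rank]
    return [(min(v, cap), v > cap) for v in values]

def impute_adherence(entry: dict) -> dict:
    """Missing reps count as 0 unless the session carries a 'skipped' flag."""
    if entry.get("reps") is None and not entry.get("skipped"):
        return {**entry, "reps": 0, "is_imputed": True}
    return entry

reps = list(range(1, 1000)) + [50_000]   # one implausible session
capped = winsorize(reps)
assert max(v for v, _ in capped) == 995  # capped at the 99.5th percentile
assert impute_adherence({"reps": None})["is_imputed"] is True
```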

Acceptance Criteria
CSV Upload Ingestion & Mapping Validation
- Given a well-formed CSV with required headers, when a user uploads it, then rows are ingested at ≥5000 rows/min, mapped to the unified schema without data loss, and a success summary (rows ingested, rows rejected, mapping version) is displayed.
- Given a CSV with extra/unmapped columns, when uploaded, then unmapped columns are ignored and reported with counts; ingestion proceeds for mapped fields.
- Given a CSV missing any required column, when uploaded, then the upload is rejected with an error listing missing columns and a downloadable corrected header template.
- Given rows failing type/format validation, when uploaded, then those rows are quarantined (not ingested) and listed in an error file with row numbers and reasons.
MoveMate API Ingestion with Lookback & Incremental Refresh
- Given a configured clinic source with a 180-day lookback, when the nightly job runs, then only records created/updated within 180 days are fetched via paginated API and upserted, and the run completion timestamp is recorded.
- Given no source-side changes since the last run, when the job executes, then zero new records are written and the freshness indicator updates with "No changes" for that source.
- Given an admin-triggered 365-day backfill, when executed, then the pipeline processes in idempotent batches (upsert keys: exercise_id, patient_cohort_id, date) without duplicating existing records.
- Given API rate limits or transient failures, when encountered, then retries with exponential backoff occur up to 3 attempts; persistent failures mark the run "Failed" with actionable error codes and partial progress saved.
Normalization Rules: Missing, Outliers, and Unit Conversions
- Given numeric fields with unit metadata (e.g., seconds, minutes), when ingested, then values are converted to canonical units defined by the schema and the source unit is preserved in a unit field.
- Given missing adherence entries within a session, when processed, then imputation applies the rule "missing = 0 reps completed" unless a 'skipped' flag is present; imputed fields are flagged is_imputed=true.
- Given outliers beyond domain bounds or >4 SD from the cohort mean (e.g., reps >1000 per session), when detected, then values are winsorized to the 99.5th percentile and flagged is_winsorized=true; a run summary reports counts and affected fields.
- Given conflicting timestamps or negative durations, when detected, then the records are rejected to quarantine with reason codes and do not pollute aggregates.
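The imputation and winsorization rules above can be sketched as a per-record transform. This is a minimal illustration, not the pipeline itself: the flag names `is_imputed`/`is_winsorized` follow the criteria, while the function name and the hard-coded 99.5th-percentile value are placeholders.

```python
def normalize_session(reps, skipped=False, cohort_p995=200.0):
    """Illustrative normalization: missing reps become 0 unless the
    session carries a 'skipped' flag; values above the cohort's 99.5th
    percentile are winsorized down to it. Flags record what changed."""
    record = {"is_imputed": False, "is_winsorized": False}
    if reps is None:
        # Skipped sessions stay null; true gaps are imputed as 0 reps.
        record["reps"] = None if skipped else 0
        record["is_imputed"] = not skipped
        return record
    if reps > cohort_p995:
        record["reps"] = cohort_p995
        record["is_winsorized"] = True
    else:
        record["reps"] = reps
    return record
```

In a real pipeline the percentile would come from the cohort's own distribution, and each change would also emit the lineage record (original_value, new_value, rule_id) required by the audit criteria.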
Derived Metrics Computation (Adherence, Recovery, Attrition)
- Given ingested exercise sessions, when the nightly metric job runs, then per-exercise adherence distributions (mean, median, SD, p10, p25, p50, p75, p90) are computed per cohort and stored in a metrics table with a version stamp.
- Given patient recovery milestones with dates, when processed, then recovery time (days from program start to milestone) distributions per cohort are computed and record counts reconcile to eligible patients within ±1%.
- Given cohort enrollment and session data, when processed, then attrition by week is calculated as survival percentages with censoring for active patients and exposed via a queryable view.
- Given cohorts with N<20 patients, when metrics are computed, then those cohorts are tagged low_power=true and excluded from default recommendations while remaining queryable.
RBAC & PHI Minimization Enforcement
- Given a Data Analyst role, when querying the dataset, then direct identifiers (name, email, phone, DOB, full dates) are unavailable; dates are truncated to month-level, and IDs are pseudonymized.
- Given a Clinician role, when accessing data, then only records for their clinic are visible; cross-clinic access attempts return 403 and are logged.
- Given an Admin performing imports, when viewing operational metadata, then audit and pipeline details are visible but PHI remains minimized by policy; bulk exports require explicit approval and are logged.
- Given any role querying restricted fields, when attempted, then access is blocked by the data layer policy and the attempt is recorded in the audit log with user and timestamp.
Audit Trail for Imports & Transformations
- Given any ingestion or transformation run, when completed, then an immutable audit record is stored with run_id, actor, source, start/end times, rows read/ingested/rejected, code hash, mapping version, and dataset checksum.
- Given a record altered by normalization (imputation, winsorization, unit conversion), when saved, then per-record lineage captures original_value, new_value, rule_id, and timestamp.
- Given an Admin requests an audit export for a date range, when the export runs, then CSV/JSON is delivered within 30 seconds for up to 1,000,000 records.
- Given audit records older than the 2-year retention period, when the retention job runs, then records are archived to cold storage with integrity verification and an index preserved for discovery.
Data Quality, Error Surfacing, and Freshness Indicators
- Given a nightly publish run, when data quality checks execute, then thresholds are enforced: schema conformance 100%, null rate for key fields <1%, duplicate key rate <0.1%, unit consistency 100%; failures block publish.
- Given a blocked publish, when surfaced in the UI, then the dataset status displays "Stale" with the last successful publish timestamp and a list of failing checks with counts and affected tables.
- Given a successful run, when published, then freshness displays "Updated <24h ago" with UTC timestamp and source breakdown (MoveMate vs CSV), and the view power_planner.inputs_v1 is updated atomically.
- Given typical downstream queries (<=1M rows scanned), when executed against inputs_v1, then p95 query latency is <500 ms; SLO breaches are alerted to on-call.
Power Curve Simulation Engine
"As a lead therapist designing a pilot, I want accurate power curves generated from our data so that I can size the study with confidence."
Description

Provide a performant engine to compute power curves from observed variance and user-specified effect sizes using analytical formulas and Monte Carlo simulation. Support common outcome types used in MoveMate pilots, including continuous outcomes (e.g., change in pain score), binary endpoints (e.g., adherence threshold achieved), and time-to-event (recovery) with log-rank approximations. Allow unequal allocation ratios, repeated-measures designs, and attrition modeling. Return power as a function of sample size and enrollment duration under operational constraints, with deterministic seeding, result caching, and parameter validation. Deliver accurate, unit-tested outputs with clear error messages when assumptions are outside supported bounds.

Acceptance Criteria
Continuous Outcomes Power Curves (Analytical and Monte Carlo)
Given a continuous outcome with specified mean difference or standardized effect, pooled SD, alpha, allocation ratio, and a grid of total sample sizes
And Monte Carlo settings of ≥10,000 simulations and a fixed random seed
When the engine computes power curves using analytical formulas and simulation
Then analytical power at each N matches the closed-form two-sample t-test (equal/unequal n) within ±0.01
And simulated power at each N with per-arm n ≥ 20 is within ±0.02 of analytical
And computed power is non-decreasing in N for fixed parameters
And outputs include for each grid point: N_total, n_treatment, n_control, allocation_ratio, power_analytical, power_simulated
And unequal allocation ratios from 0.25 to 4.0 are accepted and rounded so |n_treatment + n_control − N_total| ≤ 1
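As a reference for the analytical check above, a normal-approximation power function and the allocation-rounding rule might look like the following. This is a sketch, not the engine: for small per-arm n the exact noncentral-t computation would replace the normal approximation, and all names are illustrative.

```python
import math
from statistics import NormalDist

def two_sample_power(delta, sd, n1, n2, alpha=0.05):
    """Normal-approximation power for a two-sided two-sample test of a
    mean difference `delta` with common SD `sd`; tracks the exact
    t-test closely once per-arm n is around 20 or more."""
    z = NormalDist()
    se = sd * math.sqrt(1 / n1 + 1 / n2)
    return z.cdf(delta / se - z.inv_cdf(1 - alpha / 2))

def split_allocation(n_total, ratio):
    """Round an allocation ratio r = n_treatment/n_control into integer
    arm sizes with |n_treatment + n_control - N_total| <= 1."""
    n_treatment = round(n_total * ratio / (1 + ratio))
    return n_treatment, n_total - n_treatment
```

For example, `two_sample_power(0.5, 1.0, 64, 64)` lands near the conventional 0.80 benchmark for a standardized effect of 0.5.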
Binary Endpoint with Attrition and Allocation
Given a binary endpoint with baseline proportion p_control in (0,1), target effect specified as risk difference, risk ratio, or odds ratio, alpha, allocation ratio, and attrition rates per arm over time
And a grid of N_total values and enrollment durations
When the engine computes power via normal approximation for two proportions and via Monte Carlo simulation with the specified attrition model
Then analytical and simulated power differ by ≤ 0.02 at each grid point
And effective sample sizes reflect attrition per arm and are reported
And inputs outside supported bounds (p ∉ (0,1), attrition ∉ [0,1), allocation ratio ≤ 0) are rejected with descriptive errors
And outputs include N_total, n_per_arm_after_attrition, power_analytical, power_simulated
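A minimal version of the two-proportion normal approximation with attrition-deflated arm sizes could look like this. It is a sketch under simplifying assumptions (unpooled variance, a single attrition rate applied to both arms, dropouts contributing no endpoint data); the engine's attrition model is richer.

```python
import math
from statistics import NormalDist

def two_prop_power(p_control, p_treat, n_per_arm, attrition=0.0, alpha=0.05):
    """Unpooled normal-approximation power for a two-sided two-proportion
    test; each arm's n is deflated by the expected attrition rate."""
    if not (0 < p_control < 1 and 0 < p_treat < 1):
        raise ValueError("proportions must lie in (0,1)")
    if not 0 <= attrition < 1:
        raise ValueError("attrition must lie in [0,1)")
    z = NormalDist()
    n_eff = n_per_arm * (1 - attrition)  # effective sample size per arm
    se = math.sqrt(p_control * (1 - p_control) / n_eff
                   + p_treat * (1 - p_treat) / n_eff)
    return z.cdf(abs(p_treat - p_control) / se - z.inv_cdf(1 - alpha / 2))
```

The explicit `ValueError`s mirror the bounds checks the criteria require for out-of-range inputs.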
Time-to-Event (Recovery) via Log-Rank Approximation
Given time-to-event assumptions with exponential survival per arm, hazard ratio > 0, accrual duration, follow-up duration, censoring patterns, alpha, and allocation ratio
And a grid of N_total and/or enrollment durations
When the engine computes power using the log-rank (Freedman) approximation and validates via Monte Carlo simulation with ≥20,000 trials
Then analytical log-rank power and simulated power differ by ≤ 0.03 across grid points
And expected event counts, total N, and power are returned for each point
And power increases monotonically with the number of events for fixed HR and alpha
And unsupported assumptions (e.g., non-exponential hazards) are rejected with a clear error
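The Freedman approximation referenced above is driven by the expected number of events rather than total N; a sketch assuming 1:1 allocation (the engine would also handle unequal allocation and translate N plus accrual/follow-up into expected events):

```python
import math
from statistics import NormalDist

def logrank_power_freedman(hazard_ratio, n_events, alpha=0.05):
    """Freedman approximation: two-sided log-rank power as a function of
    the expected number of events, assuming equal allocation."""
    if hazard_ratio <= 0:
        raise ValueError("hazard ratio must be > 0")
    z = NormalDist()
    effect = abs(hazard_ratio - 1) / (hazard_ratio + 1)
    return z.cdf(effect * math.sqrt(n_events) - z.inv_cdf(1 - alpha / 2))
```

Power here is monotone in the event count for fixed HR and alpha, which is exactly the property the acceptance criterion asserts.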
Repeated-Measures Design (Paired/Pre-Post) Support
Given a repeated-measures continuous outcome with pre-post correlation 0 ≤ ρ < 1, within-subject SD, mean change or standardized effect, alpha, and allocation ratio (including single-arm pre/post)
When the engine computes power using the paired t-test (or two-sample change-score test) analytically and via Monte Carlo
Then analytical power matches the closed-form paired design within ±0.01
And simulated power differs from analytical by ≤ 0.02 for N ≥ 20 pairs per arm
And higher ρ yields higher power, all else equal
And a missing-at-random rate m in [0,0.5) reduces effective N accordingly and is reflected in outputs
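The "higher ρ yields higher power" property falls directly out of the change-score SD. A normal-approximation sketch of the single-arm pre/post case (names illustrative; the exact paired t-test would replace the normal approximation for small n):

```python
import math
from statistics import NormalDist

def paired_power(mean_change, sd_within, rho, n_pairs, alpha=0.05):
    """Normal-approximation power for a paired pre-post comparison.
    The change-score SD is sd_within * sqrt(2*(1 - rho)), so a higher
    pre-post correlation rho shrinks the SE and raises power."""
    z = NormalDist()
    se = sd_within * math.sqrt(2 * (1 - rho)) / math.sqrt(n_pairs)
    return z.cdf(abs(mean_change) / se - z.inv_cdf(1 - alpha / 2))
```

A missing-at-random rate m would enter the same way attrition does elsewhere: replace `n_pairs` with `n_pairs * (1 - m)`.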
Deterministic Seeding and Result Caching
Given identical model type, parameters, simulation settings, and a fixed random seed
When the engine is executed twice
Then the numerical outputs (including simulated power) are bit-identical
And changing only the seed changes simulated power estimates while analytical outputs remain identical
And a cache key composed of model type, parameter hash, simulation settings, seed, and engine version returns cached results on re-run
And for a 100-point grid with 10,000 simulations per point, the second (cached) run completes in < 100 ms and the first run in < 10 s on the documented baseline environment
And the cache is invalidated automatically when the engine version changes or parameters differ
Parameter Validation and Clear Errors
Given invalid inputs
When the engine validates parameters
Then it rejects and returns structured errors with code, parameter, value, and allowed range, without stack traces, including:
- ERR_ALPHA_RANGE for alpha ∉ (0,1)
- ERR_VARIANCE_NONPOS for variance ≤ 0
- ERR_PROPORTION_RANGE for proportions ∉ [0,1]
- ERR_ALLOC_RATIO_INVALID for allocation ratio ≤ 0 or NaN
- ERR_EFFECT_REQUIRED when effect size is missing or zero where not allowed
- ERR_HR_RANGE for hazard ratio ≤ 0
And each message includes remediation guidance and a link to documentation
Operational Constraints: Power vs Enrollment Duration
Given clinic operational constraints including max weekly enrollments, caps on concurrent patients, and expected attrition over time
And a target power threshold (e.g., 0.80) and alpha
When the engine maps feasible sample trajectories over calendar time
Then it returns power as a function of enrollment duration, with the earliest duration at which power ≥ target flagged
And the implied N_total at each duration equals feasible enrollment under constraints (±1 tolerance) and aligns with the sample-size power curve
And reducing max weekly enrollments weakly increases the time to reach the same power, holding other parameters fixed
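The duration mapping can be illustrated with a toy search that converts weekly enrollment capacity into per-arm n and returns the earliest week crossing the power target. This heavily simplifies the real constraint model (equal arms, no attrition, normal-approximation power; all names are placeholders):

```python
import math
from statistics import NormalDist

def earliest_duration_weeks(weekly_cap, delta, sd,
                            target_power=0.8, alpha=0.05, max_weeks=520):
    """Earliest enrollment duration (in weeks) at which the feasible
    per-arm n (half of cumulative enrollments) reaches the power target,
    using a normal-approximation two-sample power formula."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    for week in range(1, max_weeks + 1):
        n_per_arm = week * weekly_cap // 2
        if n_per_arm < 2:
            continue
        power = z.cdf(delta / (sd * math.sqrt(2 / n_per_arm)) - z_crit)
        if power >= target_power:
            return week
    return None  # infeasible within max_weeks
```

Because power is monotone in n, halving the weekly cap can never shorten the time to target, which is the "weakly increases" property in the last criterion.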
Sample Size & Enrollment Duration Recommender
"As a trial coordinator, I want a clear recommendation for sample size and enrollment duration so that I can plan staffing and timelines."
Description

Given target power, alpha, and minimum detectable effect, compute recommended sample size per arm, expected enrollment duration based on site throughput, and projected calendar timelines. Incorporate clinic-specific adherence variance, attrition, and seasonality to adjust recommendations. Present alternative plans under different operational constraints (e.g., limited weekly enrollments, staggered site starts) and flag underpowered configurations with remediation suggestions. Integrate outputs with MoveMate scheduling primitives to support downstream planning and handoff to pilot execution modules.

Acceptance Criteria
Compute Sample Size Per Arm with Clinic-Specific Variance and Attrition
- Given target_power p ∈ (0,1), alpha a ∈ (0,1), minimum_detectable_effect δ > 0, baseline_outcome_variance σ²_clinic, adherence_variance σ²_adherence, and attrition_rate r ∈ [0,1), when the recommender is run, then it returns integer n_per_arm ≥ 2 and integer enrollment_per_arm = ceil(n_per_arm/(1−r)).
- Given the same inputs, when an internal power check is executed via analytic or seeded simulation (≥10,000 iterations), then achieved_power at n_per_arm is within ±0.005 of target_power p.
- Given inputs include clinic identifier, when variance is computed, then the effective variance used equals σ²_clinic adjusted by σ²_adherence per documented method and is displayed in the output assumptions block.
- Given attrition_rate r, when results are shown, then both pre-attrition n_per_arm and attrition-inflated enrollment_per_arm are displayed with r as a percent.
- Given identical inputs and random seed, when the recommender is run twice, then n_per_arm and enrollment_per_arm are identical (deterministic under seed).
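The attrition-inflation rule above is a one-liner on top of the standard two-arm sample-size formula; a hedged sketch (normal approximation, equal arms, illustrative names):

```python
import math
from statistics import NormalDist

def recommend_enrollment(delta, sd, target_power=0.80, alpha=0.05,
                         attrition=0.0):
    """Per-arm n from the standard two-arm normal-approximation formula
    n = 2 * ((z_{a/2} + z_beta) * sd / delta)^2, then attrition-inflated
    enrollment_per_arm = ceil(n_per_arm / (1 - r))."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(target_power)
    n_per_arm = math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)
    enrollment_per_arm = math.ceil(n_per_arm / (1 - attrition))
    return n_per_arm, enrollment_per_arm
```

With a standardized effect of 0.5 and 20% attrition, the familiar ~63 per arm inflates to 79 enrollees per arm.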
Estimate Enrollment Duration and Calendar Timeline with Site Throughput and Seasonality
- Given site_weekly_throughput capacities per site, staggered_site_start_dates, enrollment_caps, blackout_periods (holidays/seasonality), and required enrollment_per_arm, when timeline is computed, then the system outputs enrollment_duration_weeks, first_patient_in_date, last_patient_in_date, and projected_last_patient_out_date (given follow-up duration input).
- Given blackout_periods, when the calendar is generated, then no enrollments are allocated on blackout dates and throughput is reallocated to subsequent periods without exceeding per-site caps.
- Given staggered_site_start_dates, when enrollment allocation is computed, then no enrollments are assigned to a site before its start date.
- Given throughput variability (min/max), when a range is provided, then the system outputs optimistic, expected, and conservative timelines with labeled assumptions.
- Given a change to any throughput or blackout input, when recompute is triggered, then updated timelines render within 2 seconds for up to 20 sites and 1,000 total participants.
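The blackout-reallocation behavior can be shown with a single-site toy: blackout weeks contribute zero enrollments, so the lost capacity simply pushes the finish date later. Everything here (function name, week-indexed blackouts) is illustrative.

```python
def enrollment_weeks(target, weekly_capacity, blackout_weeks=frozenset()):
    """Calendar weeks needed to reach an enrollment target when blackout
    weeks contribute zero enrollments; capacity lost to a blackout is
    pushed to later weeks (illustrative, single-site, fixed capacity)."""
    enrolled, week = 0, 0
    while enrolled < target:
        week += 1
        if week not in blackout_weeks:
            enrolled += weekly_capacity
    return week
```

For example, 20 enrollments at 5/week take 4 weeks, or 5 weeks if week 2 is blacked out. The multi-site version would additionally respect per-site start dates and caps.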
Generate Alternative Operational Plans Under Constraints
- Given constraints (max_weekly_enrollments, max_concurrent_sites, budgeted_total_sample_cap, latest_end_date), when alternatives are generated, then at least three labeled plans are produced: Fastest, Balanced, Resource-Light.
- Given each plan, when results are displayed, then each plan includes: sample_size_per_arm (pre/post-attrition), total_sample, enrollment_duration_weeks, site_count_utilized, expected_power, and key assumption diffs.
- Given any constraint is infeasible, when generation runs, then the plan is marked Infeasible with the violated constraints explicitly listed.
- Given user selects a plan, when selection is made, then the plan is pinned and persists across sessions with a unique plan_id.
- Given identical inputs, when alternatives are regenerated, then the same set of plans and metrics are returned (deterministic under seed).
Underpowered Configuration Detection and Remediation Suggestions
- Given any computed plan with achieved_power < target_power, when results are displayed, then the plan is flagged Underpowered with a red status and a deficit value (target − achieved) to three decimals.
- Given an Underpowered plan, when remediation is generated, then at least three suggestions are produced with estimated impacts: (a) increase sample size per arm, (b) extend enrollment duration/add sites within caps, (c) increase MDE or relax alpha; each shows the new achieved_power and deltas.
- Given clinic-specified hard caps (sample_cap, site_cap, end_date), when a remediation violates a cap, then it is marked as Not Feasible with the blocking cap named.
- Given user applies a remediation suggestion, when recompute runs, then the updated plan’s power meets or exceeds target_power or is explicitly marked still underpowered.
- Given suggestions are generated, when exported, then they are included in the plan export (CSV/PDF) with assumptions.
Power Curve Visualization and Export
- Given inputs (alpha, target_power, MDE range, sample_size range), when the user opens Power Curves, then the system renders power vs. sample_size per arm and power vs. MDE curves with axes labeled, target_power line, and recommended point marker.
- Given the recommended n_per_arm and MDE, when displayed on the curves, then their coordinates match tabular outputs within rounding tolerance (±1 participant, ±0.001 power).
- Given the chart is exported, when the user clicks Export, then a vector SVG and a bitmap PNG are downloadable within 2 seconds at ≥1600px width, with embedded metadata (inputs, timestamp, clinic_id).
- Given accessibility settings, when the chart renders, then it passes color-contrast AA and includes text equivalents for markers and lines.
- Given the user adjusts MDE range or sample range, when curves update, then recalculation completes within 1 second for default ranges (≤200 points).
Integration with MoveMate Scheduling Primitives for Handoff
- Given an approved plan is pinned, when the user clicks Send to Scheduling, then the system posts a payload containing trial_id, plan_id, per_site_weekly_quotas, start_dates, and enrollment_targets to the MoveMate scheduling primitives API and receives HTTP 2xx.
- Given a successful API response, when the handoff completes, then a scheduling_plan entity is created with a link visible in the UI to Pilot Execution and a confirmation toast appears within 1 second.
- Given a transient API failure, when retry policy applies, then the system retries up to 3 times with exponential backoff and surfaces a retriable error message if all attempts fail.
- Given the plan is updated after handoff, when Resync is triggered, then a delta payload is sent and the scheduling plan reflects updates without duplicating entities.
- Given audit logging is enabled, when handoff occurs, then a log entry captures request_id, plan_id, user_id, timestamp, and API status.
Input Validation, Defaults, and Error Handling
- Given user-entered alpha, power, and MDE, when values are invalid (alpha/power not in (0,1); MDE ≤ 0), then the Compute action is disabled and inline error messages specify the required ranges.
- Given missing or zero site throughput, when compute is requested, then the system blocks with a specific error indicating missing throughput for the affected sites.
- Given clinic defaults exist (variance, attrition, seasonality), when a new session starts, then defaults auto-populate and are visibly labeled with source and version.
- Given inputs change, when the user navigates away, then unsaved changes prompt a confirmation to prevent accidental loss.
- Given all inputs pass validation, when Compute is clicked, then results generate without fatal errors and a success state is shown within 2 seconds for standard scenarios (≤20 sites).
Stop Criteria Definition & Monitoring Hand-off
"As a clinical lead, I want to define and preview stopping rules so that we avoid running an underpowered or unnecessary trial."
Description

Enable users to define pre-trial stopping rules for efficacy, futility, and safety using group sequential boundaries or Bayesian thresholds and preview their operating characteristics under expected variance. Surface minimum exposure requirements and data review cadences, and generate a structured stop-criteria specification for hand-off to MoveMate’s pilot execution/monitoring module. Estimate likely stop windows on the timeline and highlight risks of false decisions due to small samples, ensuring pilots avoid running longer than necessary or ending underpowered.

Acceptance Criteria
Group Sequential Boundary Setup & Validation
- Given a two-arm pilot, target alpha (≤0.05), target power (≥0.80), max interim looks (2–6), chosen spending function (O’Brien–Fleming, Pocock, or HSD), expected effect size, and SD from clinic history; when the user clicks Calculate; then per-look efficacy and harm Z/p boundaries are computed and displayed in a table and chart.
- Then the cumulative alpha spent across looks ≤ target alpha within a tolerance of 1e-6.
- Then computed final-look power ≥ target power − 0.01 using the provided effect size and variance assumptions.
- When any required input is missing/invalid (out of bounds, non-numeric), then inline errors appear and Calculate/Sample Preview actions are disabled.
- When the user saves the configuration, then a versioned rule set with timestamp and immutable ID is persisted and retrievable in the last-3-versions list.
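The "cumulative alpha spent ≤ target alpha" property is what an alpha-spending function guarantees by construction. A sketch of the O'Brien-Fleming-type Lan-DeMets spending function (computing the actual per-look Z boundaries from these spends requires a multivariate-normal recursion, which is omitted here):

```python
import math
from statistics import NormalDist

def obf_alpha_spending(alpha, information_fractions):
    """O'Brien-Fleming-type Lan-DeMets spending function: cumulative
    alpha spent at information fraction t is 2*(1 - Phi(z_{a/2}/sqrt(t))).
    It spends almost nothing early and reaches exactly alpha at t = 1."""
    z = NormalDist()
    z_half = z.inv_cdf(1 - alpha / 2)
    return [2 * (1 - z.cdf(z_half / math.sqrt(t)))
            for t in information_fractions]
```

With alpha = 0.05 and four equally spaced looks, the first look spends well under 0.001, which is why O'Brien-Fleming boundaries are so conservative early on.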
Bayesian Stop Threshold Configuration & Verification
- Given an outcome type (binary or continuous), a model (binomial-logit or normal-normal), specified priors, a clinically meaningful delta, and thresholds (e.g., P(delta>0) ≥ 0.975 for efficacy, P(delta<ε) ≥ 0.90 for futility); when the user clicks Compute; then posterior-based stopping thresholds per interim look are produced and displayed.
- Then posterior summaries (mean, 95% CrI) at user-provided sample points match reference calculations within numerical tolerance (|difference| ≤ 0.01 on probabilities, ≤ 0.05 SD units on delta).
- When thresholds are logically inconsistent (e.g., efficacy threshold < futility threshold), then a blocking validation error explains the conflict.
- When saved, the Bayesian configuration is versioned with model, priors, thresholds, and reproducibility seed recorded.
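For the normal-normal model, the posterior probability an efficacy rule compares against its threshold has a closed conjugate form; a sketch with illustrative names (the binomial-logit model would need numerical integration or MCMC instead):

```python
import math
from statistics import NormalDist

def posterior_prob_positive(prior_mean, prior_sd, obs_mean, obs_se):
    """Conjugate normal-normal update for a treatment effect delta;
    returns P(delta > 0 | data), the quantity an efficacy rule would
    compare against a threshold such as 0.975."""
    w_prior = 1.0 / prior_sd ** 2   # prior precision
    w_obs = 1.0 / obs_se ** 2       # data precision
    post_mean = (w_prior * prior_mean + w_obs * obs_mean) / (w_prior + w_obs)
    post_sd = math.sqrt(1.0 / (w_prior + w_obs))
    return NormalDist().cdf(post_mean / post_sd)
```

With a weak prior and a clearly positive observed effect the probability exceeds 0.975 and the efficacy rule would fire; with no observed effect it sits at 0.5.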
Operating Characteristics Preview & Power Curves
- Given variance/adherence scenarios (≥1, ≤3) and enrollment plan, when the user clicks Preview OC; then the system runs ≥1000 simulations per scenario and returns: estimated type I error, power, expected sample size (ESS), and stop probabilities per look.
- Then type I error estimate ≤ target alpha + 0.005, and power is reported with a 95% CI of width ≤ 0.10.
- Then power curves vs maximum sample size and stop-probability-by-look charts render with tooltips and can be exported as CSV.
- For Nmax ≤ 500 and ≤ 3 scenarios, the OC computation completes within ≤ 90 seconds or the UI shows a progress indicator and partial results streaming every ≤ 10 seconds.
Minimum Exposure & Data Review Cadence Enforcement
- Given per-patient minimum exposure days and minimum analyzable N per arm per look, when an interim cadence (e.g., every 2 weeks) is scheduled; then any interim violating minimums is blocked and the nearest valid date/enrollment count is suggested.
- When historical enrollment and adherence rates are provided, then predicted dates to reach minimums are shown with 80% intervals.
- Then the schedule summary lists each planned look with required exposure/N thresholds and an indicator (met/not met) based on current assumptions.
- When thresholds are edited, then the schedule and indicators update within ≤ 2 seconds.
Structured Hand-off Spec Generation to Monitoring Module
- Given a validated stop-rule configuration, when the user clicks Generate Spec; then a machine-readable spec (JSON and YAML) conforming to the StopSpec schema (versioned) is produced containing: trial_id, version, analysis population, endpoints, group sequential or Bayesian parameters, boundaries/thresholds, minimum exposure, interim cadence, missing data rules, safety signals, and alert recipients.
- Then the spec passes schema validation, includes created_at (current date/time ISO 8601) and a SHA-256 checksum, and is available for download.
- When the user sends via API hand-off, then the monitoring module acknowledges with 200 OK and echoes trial_id and version; the transaction is logged with correlation_id.
- If the API call fails, then a retry option and a human-readable error are presented without losing the generated spec.
Stop Window Estimation & False-Decision Risk Alerts
- Given configured rules and enrollment assumptions, when the user requests Stop Window Estimate; then for each rule the system returns an 80% prediction interval for the enrollment count and calendar date at which stopping is expected.
- Then the UI flags any scenario where simulated false-positive or false-negative risk at any look exceeds a user-set threshold (default 5%) and proposes mitigations (e.g., increase Nmax, delay first look) with quantitative impact estimates.
- Then a warning is shown if ESS > 90% of Nmax or if probability of not stopping before final look > 70%, indicating low efficiency.
Scenario Modeling & Sensitivity Analysis
"As a researcher, I want to test scenarios and sensitivities so that I understand how assumptions affect power and risk."
Description

Provide interactive controls to vary key assumptions (effect size, variance, enrollment rate, attrition, adherence uplift) and instantly recompute power curves, sample size, and duration recommendations. Offer side-by-side scenario comparison, saved scenario presets, and sensitivity visualizations (e.g., tornado charts) to identify which assumptions drive power and risk the most. Support exporting scenario sets for stakeholder review and include warnings when assumptions depart materially from observed historical ranges.

Acceptance Criteria
Instant Recompute on Assumption Changes
Given the Power Planner is open with a historical baseline dataset loaded; When the user adjusts effect size, variance, enrollment rate, attrition, or adherence uplift; Then the power curve, recommended sample size, and enrollment duration recompute and render within 1 second without page reload, and the recalculation timestamp updates.
Given the user makes up to five rapid successive adjustments within 3 seconds; When the final adjustment is applied; Then outputs reflect the latest values only, with no stale data indicators and no visual flicker longer than 200 ms.
Given an invalid value is entered (e.g., negative rate, effect size > 1, attrition > 100%); When the user leaves the field; Then the field shows an inline error, prevents apply/recompute, and retains the last valid outputs unchanged.
Side-by-Side Scenario Comparison
Given a baseline scenario is loaded; When the user adds scenarios to compare (up to 4 total); Then each scenario displays its assumptions, power curve, recommended sample size, enrollment duration, and stop criteria, and all charts share synchronized axes and scales.
Given two scenarios differ by only one assumption; When the comparison view is shown; Then the UI highlights the differing assumption and displays numeric deltas versus baseline.
Given the user removes a scenario from comparison; When deletion is confirmed; Then the remaining scenarios reflow, axes remain synchronized, and deltas recompute correctly.
Saved Scenario Presets
Given the user has configured a scenario; When Save Preset is invoked with a unique name; Then the preset is saved under the clinic workspace with timestamp and appears in the presets list.
Given a saved preset exists; When the user loads it; Then all controls, scenario labels, comparison selections, and chart zoom/pan states are restored exactly.
Given the user attempts to save a preset with an existing name; When saving; Then the user is prompted to overwrite or rename; choosing overwrite creates a new version with incremented version tag while preserving prior versions.
Given a preset is deleted; When confirmed; Then it is removed from the list and any active comparisons referencing it prompt the user to replace with baseline or remove.
Given a user with clinic access logs in on a new device; When opening the presets list; Then the same presets are available, respecting permissions.
Sensitivity Analysis and Tornado Chart
Given a scenario is active; When the user opens Sensitivity; Then a tornado chart displays ranked variables by impact on target power using default perturbations: effect size ±20%, variance ±20%, enrollment rate ±15%, attrition ±10% absolute, adherence uplift ±15% (clinic-editable), and the chart renders within 1 second after any assumption change.
Given the user hovers a tornado bar; When the tooltip appears; Then it shows delta power, delta sample size, and the interval used for the perturbation.
Given the user clicks a tornado bar; When applied; Then a variant scenario reflecting the perturbed value is created and can be added to comparison with one click, preserving synchronized axes.
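The tornado ranking itself is simple: perturb one assumption at a time, hold the rest at baseline, and rank by the larger absolute power deviation. A minimal sketch (the `power_fn` callback and parameter-dict shape are assumptions, not the product's API):

```python
def tornado(base_params, power_fn, perturbations):
    """Rank assumptions by absolute power impact: evaluate power_fn at
    each parameter's low/high perturbation (others held at baseline) and
    sort descending by the larger deviation from baseline power."""
    base = power_fn(base_params)
    impacts = []
    for name, (low, high) in perturbations.items():
        p_low = power_fn({**base_params, name: low})
        p_high = power_fn({**base_params, name: high})
        impacts.append((name, max(abs(p_low - base), abs(p_high - base))))
    return sorted(impacts, key=lambda item: -item[1])
```

Each returned (name, impact) pair maps directly onto one tornado bar, widest at the top.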
Export Scenario Sets for Stakeholder Review
Given 2–4 scenarios are in the comparison set; When the user selects Export; Then the system generates PDF and CSV/JSON files containing scenario names, assumptions, computed outputs (target power, power curve points, recommended sample size, enrollment duration, stop criteria), and a snapshot of the tornado chart, along with clinic name, baseline timeframe, method version, timestamp, and author.
Given export content is prepared; When validation runs; Then no patient-identifiable data is included and the export passes PHI checks.
Given a share link is requested; When generated; Then authorized stakeholders opening the link see a view-only snapshot matching the exported content.
Given up to 10,000 total curve points across scenarios; When exporting; Then the export completes within 10 seconds.
Out-of-Range Assumption Warnings
Given historical ranges are available for the clinic; When an input falls outside the clinic 5th–95th percentile; Then a Warning badge appears with a message showing the historical range and a link to reset to within-range values.
Given an input falls outside the clinic historical min–max; When applied; Then a Critical warning appears, recommendations display reduced-confidence messaging, and exporting includes the warning.
Given a Critical warning is present; When the user proceeds to save or export; Then the user is required to enter a brief justification that is stored with the scenario and shown in exports.
Given a warning is active; When the input is edited back within range; Then the warning clears automatically and confidence messaging returns to normal.
Assumption Transparency & Methodology Notes
"As a compliance-minded stakeholder, I want transparent methods and data provenance so that findings are defensible."
Description

Display the statistical methods, formulas, and data sources used for calculations, including coverage of historical data, confidence intervals around variance estimates, and caveats for small-sample contexts. Provide inline explanations, links to references, versioned methodology notes, and change logs whenever algorithms or defaults are updated. Flag low-data situations and recommend conservative defaults, ensuring decisions are auditable and defensible to IRBs and stakeholders.

Acceptance Criteria
Methodology Panel Visibility and Content
Given the user has completed a Power Planner calculation
When they open the Results view
Then a 'Methodology' panel is visible on the page
And it lists: the statistical test, tail (one- or two-sided), alpha, beta/power, effect size definition, variance estimation method, and formula identifiers used
And it lists all data sources with cohort names and coverage dates (start and end)
And it shows the timestamp of last data refresh in the clinic workspace time zone
Variance CI Display and Explanation
Given a variance estimate is available for the selected cohort
When the variance is displayed
Then a 95% confidence interval is shown adjacent to the estimate
And a tooltip explains the CI method (analytic or bootstrap) and the sample size used
And if n < 20, the CI is labeled 'Low reliability' and a help link explains limitations
And the CI updates immediately when cohort/filters change without page reload
Inline References and Source Links
Given the Results or Inputs view is open
When the user hovers or taps any methodology term (e.g., alpha, power, effect size, variance)
Then a tooltip appears with a plain-language definition and a link to at least one peer-reviewed reference and one internal help article
And all external links return HTTP 200 and open in a new tab
And each tooltip displays a 'Last reviewed' date within the past 12 months
Versioned Methodology and Change Log
Given the Methodology panel is visible When the user expands 'Version & Change Log' Then the current algorithm version is shown in SemVer (e.g., 2.3.1) with release date and summary note And the last 5 versions are listed with diffs of formula/parameter/default/data-source changes And each saved plan displays the version used for its calculations and a 'reproduce with this version' action And upgrades create an immutable record containing plan ID, prior version, new version, user, timestamp, and summary
Low-Data Flagging and Conservative Defaults
Given the app computes required sample size using historical data When historical coverage < 90 days OR per-arm baseline n < 30 OR subgroup n < 15 Then a 'Low data' warning appears at the top of the Methodology panel And the calculator defaults to conservative settings: two-sided test, alpha = 0.025, and uses the upper 95% CI bound of variance in computations And a 'Use conservative defaults' toggle is ON by default and logs user overrides with timestamp and user ID And the warning includes a recommendation to extend enrollment duration with an estimated additional weeks value
Audit Export for IRB Review
Given a plan's results are available When the user selects 'Export Methodology' Then a PDF and a JSON file are generated within 5 seconds containing: algorithm version, formulas, parameters, CI methods and values, data sources and coverage, low-data flags, references, and user acknowledgments And files are named MoveMate_PowerPlanner_Methodology_<planID>_<version>_<YYYYMMDD>.{pdf,json} And each file includes a SHA-256 checksum displayed post-download and stored with the plan for audit And exported content matches the on-screen values for the same version and filters
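The naming pattern and checksum requirement above can be sketched as follows; the helper names are illustrative, not part of the MoveMate API:

```python
import hashlib
from datetime import date

def export_filename(plan_id: str, version: str, ext: str, on: date) -> str:
    # MoveMate_PowerPlanner_Methodology_<planID>_<version>_<YYYYMMDD>.<ext>
    return f"MoveMate_PowerPlanner_Methodology_{plan_id}_{version}_{on:%Y%m%d}.{ext}"

def sha256_checksum(payload: bytes) -> str:
    # Checksum displayed post-download and stored with the plan for audit.
    return hashlib.sha256(payload).hexdigest()

name = export_filename("PLAN42", "2.3.1", "json", date(2024, 6, 1))
# → MoveMate_PowerPlanner_Methodology_PLAN42_2.3.1_20240601.json
```

Recomputing the checksum over the downloaded file and comparing it to the stored value is what makes the export verifiable during an audit.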
Update Notification and Version Pinning
Given a new algorithm/defaults version has been released When the user opens any existing saved plan Then a persistent banner announces the update with a one-paragraph summary and a link to the change log And the plan remains pinned to its original version until the user clicks 'Upgrade and Recalculate' And choosing 'Upgrade and Recalculate' records an entry in the change log and updates all outputs; choosing 'Not now' dismisses the banner for 7 days without recalculation And the impact preview shows the delta in sample size and enrollment duration before confirmation
Results Visualization & Export
"As a clinic director, I want polished visuals and exports so that I can share plans with my team and IRB quickly."
Description

Render interactive power curves, enrollment throughput timelines, and sample size tables with accessible color palettes and responsive layouts. Provide quick-share links with role-based permissions and export to PDF, PNG, and CSV, including an executive summary that translates technical outputs into actionable planning guidance. Allow embedding of widgets into MoveMate dashboards and attachment of exports to project records for cross-team alignment.

Acceptance Criteria
Interactive Power Curves Visualization
Given a power analysis with specified effect size, alpha, variance, and sample size range ≤ 10,000 points, When the user opens the Power Curves view, Then the chart renders within 2 seconds and displays at least one series with axes, legend, and title. Given the chart is rendered, When the user hovers or focuses a data point, Then a tooltip shows sample size (integer), power (0–1 with 2 decimals), and alpha (3 decimals). Given the chart is rendered, When the user presses Tab/Shift+Tab and Arrow keys, Then focus moves between series and zoom controls and is announced by screen readers with series name and current value. Given multiple series (up to 5), When displayed, Then each series uses a colorblind-safe palette and maintains contrast ratio ≥ 4.5:1 against the background. Given the user toggles series visibility or zooms, When Reset is invoked, Then the chart returns to default extents within 200 ms.
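The data behind one power-curve series can be sketched with a standard normal approximation for a two-sided, two-sample z-test (power ≈ Φ(d·√(n/2) − z₁₋α∕₂)); this is an assumption about the method, since the spec does not fix the formula:

```python
from statistics import NormalDist

def power_two_sample(n_per_arm: int, effect_size: float, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample z-test (normal approximation)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    noncentral = effect_size * (n_per_arm / 2) ** 0.5
    return z.cdf(noncentral - z_crit)

# One (sample size, power) point per n yields a single chart series;
# power is rounded to 2 decimals as required for the tooltip.
curve = [(n, round(power_two_sample(n, effect_size=0.5), 2)) for n in (20, 40, 80)]
```

A series per effect size (up to the 5 allowed) would be generated the same way with different `effect_size` values.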
Enrollment Throughput Timeline Responsiveness
Given enrollment data spanning at least 26 weeks, When the Timeline view loads on viewports 320, 768, 1024, and 1440 px wide, Then axes, labels, and controls adapt without horizontal overflow and maintain minimum 44x44 px tap targets. Given the Timeline view on mobile (≤ 480 px), When the user scrolls vertically, Then the chart remains readable with rotated x-axis labels at 45° and no clipped tick labels. Given a dataset of up to 52 weekly points, When loading, Then initial render completes within 1.5 seconds and panning/zooming interactions respond within 100 ms.
Sample Size Table Accessibility & Sorting
Given computed scenarios, When the Sample Size table renders, Then it includes columns: Scenario, Per-Arm N, Total N, Power, Alpha, Assumptions, Est. Duration (weeks), Last Updated. Given the table is rendered, When the user clicks a column header, Then rows sort ascending/descending with a visible sort indicator and complete within 200 ms for up to 1,000 rows. Given a keyboard-only user, When navigating the table, Then focus order is logical, all interactive elements are reachable, and ARIA role="table" with proper headers associations is exposed to screen readers. Given the user filters by Power ≥ 0.8, When applied, Then only matching rows are shown and a count of results is displayed. Given the user exports the table as CSV, When downloaded, Then the file contains all visible rows in the current sort order with UTF-8 encoding, comma delimiter, and ISO 8601 dates.
Quick-Share Links with Role-Based Permissions
Given a project, When an Editor or Admin creates a quick-share link, Then a unique tokenized URL is generated with selectable role (Viewer, Editor) and optional expiration (1–90 days or no expiry). Given a quick-share link with Viewer role, When accessed by an unauthenticated user, Then the user can view visualizations and executive summary but cannot edit configurations or regenerate exports. Given a quick-share link is set to expire, When the expiration time passes, Then subsequent access returns 403 and is logged with timestamp and IP. Given an Admin revokes a link, When revoke is confirmed, Then the link immediately stops working and the audit log records the action. Given search engines, When crawling, Then shared pages return X-Robots-Tag: noindex and require possession of the token (no open listing).
Multi-format Export with Executive Summary
Given current analysis, When exporting to PDF (A4 or Letter), Then the file includes title, date/time (ISO 8601, UTC), executive summary (≤ 300 words), power curves, timeline, and sample size table on separate pages with page numbers. Given PDF export, When opened, Then text is selectable, vector graphics are preserved, and color palette matches on-screen within ΔE ≤ 3. Given export to PNG, When downloaded, Then the image is 300 DPI at current viewport width, includes legend and captions, and file size ≤ 5 MB. Given export to CSV, When downloaded, Then it contains columns: scenario_id, scenario_name, per_arm_n, total_n, power, alpha, est_duration_weeks, assumptions, created_at_utc, and values match the on-screen table. Given an export is generated, When completed, Then the file name follows pattern PowerPlanner_<projectSlug>_<YYYYMMDDTHHMMSSZ>.<ext>.
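The CSV contract above (fixed column set, UTF-8, comma delimiter, values matching the on-screen table) can be sketched as:

```python
import csv
import io

# Column order as specified for the Power Planner CSV export.
COLUMNS = ["scenario_id", "scenario_name", "per_arm_n", "total_n", "power",
           "alpha", "est_duration_weeks", "assumptions", "created_at_utc"]

def rows_to_csv(rows: list) -> str:
    """Serialize scenario rows (dicts keyed by COLUMNS) to CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()  # encode as UTF-8 when writing the download
```

Timestamps in `created_at_utc` would be emitted as ISO 8601 strings (e.g. `2024-06-01T00:00:00Z`) so the file round-trips cleanly.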
Embeddable Widgets for MoveMate Dashboards
Given a project visualization, When the user requests an embed, Then the system provides an iframe snippet with a signed JWT scoped to Viewer and expiring within 24 hours by default. Given the widget is embedded in MoveMate, When loaded, Then it adopts host theme (light/dark) via postMessage within 300 ms and is responsive at 320–1440 px widths without scrollbars. Given the embed token is invalid or expired, When the widget loads, Then a non-identifying error is displayed and no data is leaked. Given security policies, When the iframe is served, Then Content-Security-Policy and sandbox attributes block script injection and third-party navigation. Given the host requests a PNG snapshot via postMessage, When invoked, Then the widget returns a 300-DPI PNG Blob within 1 second.
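A Viewer-scoped, 24-hour embed token per the criteria above can be sketched as a minimal HS256 JWT built from the standard library; in practice a vetted JWT library would be used, and the claim names here are assumptions:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT-style base64url without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_embed_token(secret: bytes, project_id: str, ttl_seconds: int = 24 * 3600) -> str:
    """Signed token scoped to Viewer, expiring in 24 hours by default."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps({"scope": "viewer", "project": project_id,
                                "exp": int(time.time()) + ttl_seconds}).encode())
    signing_input = f"{header}.{claims}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"
```

The widget backend would verify the signature and `exp` before serving any data, returning the non-identifying error on failure.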
Attach Exports to Project Records
Given a generated export, When the user selects Attach to Project and confirms, Then the file is stored with metadata: project_id, analysis_id, export_type, version, sha256, created_at_utc, created_by, and tags. Given project permissions, When a Viewer accesses the project record, Then attached exports are listed with version and can be downloaded, but cannot be deleted or replaced. Given multiple exports of the same type, When a new version is attached, Then the prior version remains immutable and the new version becomes Current with a visible badge. Given audit requirements, When any attach, download, or delete action occurs, Then an audit entry is recorded with user, action, timestamp, and IP. Given the project record reaches 200 attachments, When attaching a new file, Then the system prevents the action and informs the user to remove or archive older attachments.

Bias Guard

Live integrity checks that watch for arm imbalances (age, sex, protocol stage), therapist clustering, early drop‑outs, or device gaps (offline heavy users). Flags issues, recommends fixes (re‑balancing blocks, temporary holds, or weighted comparisons), and documents adjustments for audits. Ensures your winner isn’t a mirage of confounds.

Requirements

Cohort Balance Engine
"As a data analyst, I want reliable, up-to-date cohort balance metrics so that I can trust bias detection and outcome comparisons."
Description

Foundational data service that continuously computes and maintains cohort balance metrics across key attributes (age, sex, protocol stage, therapist assignment, device/connectivity profile) using streaming telemetry from MoveMate (rep counts, session events, adherence, connectivity). Supports rolling windows and point-in-time snapshots, handles late and missing data with explicit flags, and exposes a low-latency API (<5s freshness) to Bias Guard detectors and dashboards. Ensures accurate, consistent inputs for bias detection and downstream reporting.

Acceptance Criteria
Sub-5s Freshness for Balance Metrics API
Given streaming telemetry events are ingested in real time When an event with timestamp T is accepted Then the corresponding cohort balance metrics reflect the event and are available via the API within 5 seconds at p95 and 8 seconds at p99 over any rolling 60-minute window And the response includes fields last_updated (ISO8601 UTC) and staleness_seconds; freshness_status is 'fresh' if staleness_seconds <= 5, else 'stale' And a metric balance_freshness_seconds is emitted with p50/p95/p99; an alert triggers if p95 > 5 for 5 consecutive minutes
Rolling Windows and Point-in-Time Snapshots
Given a request with window in {1h,24h,7d} OR as_of=ISO8601 UTC When the API is called for cohort_id X Then for window queries, metrics are computed with inclusive start and exclusive end: start <= event_time < end, and the response includes window_start and window_end (UTC) And for snapshot queries, metrics reflect all events with event_time <= as_of and the response includes as_of (UTC) And repeated queries with identical parameters return identical results; numeric fields match to 4 decimal places And edge-case events exactly at the boundary are counted per the rules above
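The boundary semantics (start <= event_time < end for windows, event_time <= as_of for snapshots) can be sketched as:

```python
from datetime import datetime, timezone

def in_window(event_time: datetime, start: datetime, end: datetime) -> bool:
    # Window membership: inclusive start, exclusive end.
    return start <= event_time < end

def in_snapshot(event_time: datetime, as_of: datetime) -> bool:
    # Point-in-time snapshot: every event at or before as_of.
    return event_time <= as_of
```

An event stamped exactly at `window_start` is counted; one stamped exactly at `window_end` belongs to the next window, which is what makes repeated queries with identical parameters deterministic.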
Late and Missing Data Handling with Explicit Flags
Given late-arriving events up to 24 hours after event_time When such an event is ingested Then affected metrics are corrected within 60 seconds and response includes late_data_applied=true and revision incremented by 1 And ingestion deduplicates by immutable event_id; duplicate arrivals do not change counts (idempotent) And for missing telemetry gaps >5 minutes per device, responses include device_gap=true with missing_minutes aggregated per attribute group and watermark_lag_seconds for the cohort
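The exactly-once effect from at-least-once delivery rests on deduplicating by the immutable event_id; a minimal sketch:

```python
class IdempotentIngest:
    """At-least-once delivery, exactly-once effect: dedupe on immutable event_id."""

    def __init__(self):
        self.seen = set()   # event_ids already applied
        self.count = 0      # metric that must not move on duplicate replays

    def ingest(self, event: dict) -> bool:
        if event["event_id"] in self.seen:
            return False  # duplicate arrival: no metric change (idempotent)
        self.seen.add(event["event_id"])
        self.count += 1
        return True
```

In production the seen-set would be a persistent store with a retention horizon at least as long as the 24-hour late-arrival window.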
Attribute Coverage and Validation
Given MoveMate cohort definitions When balance metrics are computed Then outputs include totals and per-attribute breakdowns for: age_band (18–34,35–49,50–64,65+), sex (F,M,Other,Unknown), protocol_stage, therapist_id, device_profile (online_heavy,offline_heavy,unknown) And each breakdown includes counts, proportions, and if applicable standardized mean differences (SMD) vs reference group; numeric values have 4-decimal precision and sum-to-one constraints hold within 0.001 tolerance And unknown/other categories are explicit, never dropped And invalid attribute values or unsupported filters result in HTTP 400 with error_code and message
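The standardized mean difference vs a reference group can be sketched with the usual pooled-SD formula, rounded to the 4-decimal precision required above:

```python
def standardized_mean_difference(m1: float, s1: float, n1: int,
                                 m2: float, s2: float, n2: int) -> float:
    """SMD = (mean1 - mean2) / pooled SD, rounded to 4 decimals."""
    pooled = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return round((m1 - m2) / pooled, 4)
```

An |SMD| near 0 indicates the attribute is balanced against the reference group; conventionally values above ~0.1 start to flag imbalance.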
Low-Latency Read API Contract
Given the read API endpoint GET /v1/cohorts/{cohort_id}/balance with parameters window, as_of, group_by, filters, and page When called under load of 100 requests/second from the same region with payload <=200 KB Then p95 latency is ≤800 ms and error rate is <0.1% And responses include ETag; If-None-Match returns 304 when unchanged; rate limiting is enforced at 200 RPS per client with 429 and Retry-After And responses conform to the published JSON Schema (v1.x); invalid requests return precise 4xx codes; 5xx are retried with exponential backoff headers
Deterministic Recompute, Backfill, and Idempotency
Given a backfill job is run for a historical range [start,end] When the job completes Then recomputed metrics exactly match a deterministic checksum of the same inputs; repeating the job yields identical outputs (bitwise) And live ingestion is at-least-once but outputs are exactly-once via event_id deduplication; no metric increases when only duplicate events are replayed And running backfill does not violate the freshness SLA by more than 120 seconds during the job; an audit log records job_id, actor, time range, and affected cohorts
Consumer Consistency and Versioning
Given Bias Guard detectors and dashboards query the Cohort Balance Engine for the same parameters When both consumers render results Then totals and per-attribute values are numerically equal within 0.0001 tolerance and schema_version matches across services And consumer contract tests in CI pass for every release; any breaking change increments the major version and older versions remain served for at least 90 days with deprecation headers
Live Bias Detector & Alerts
"As a clinic administrator, I want immediate alerts when integrity drifts so that I can intervene before analyses and decisions are compromised."
Description

Realtime integrity checks that evaluate cohort balance and outcomes for drift, including arm imbalances, therapist clustering, early dropouts, and device gaps (offline-heavy users). Applies configurable statistical and rule-based thresholds, deduplicates and groups related signals, and publishes actionable alerts with severity, affected segments, and deep links to recommendations. Integrates with in-app notifications and email, with sub-minute detection latency and rate limiting to prevent alert fatigue.

Acceptance Criteria
Realtime Detection Latency Under Load
Given eligible telemetry/events are ingested in real time When a monitored bias metric crosses its configured threshold Then an alert is generated and queued within 10 seconds of the triggering event timestamp And the in-app notification is delivered within 60 seconds of the event timestamp And detection and delivery timestamps are recorded for latency measurement And over a 1-hour test at ≥2x historical peak ingest, P95 event→in‑app latency ≤ 60s and P99 ≤ 120s
Configurable Thresholds and Rules with Audit
Given an admin opens Bias Guard settings When they create or edit thresholds/rules for arm imbalance, therapist clustering, early dropouts, device gaps, or outcome drift Then inputs are validated (types, ranges, min sample sizes) and rejected with errors if invalid And saved configurations are versioned and persist across restarts And changes take effect for new evaluations within 5 minutes without service downtime And an audit log entry records who, when, what changed, before/after values, and optional rationale And rollback to a prior version is possible and applied within 5 minutes
Alert Content and Actionable Deep Links
Given an alert is emitted Then the payload includes title, severity (Info|Warning|Critical), metric name, stats (n, effect size/test value, p-value or rule value), time window, affected segments, and confidence And at least one recommended fix is included with a deep link to the Recommendations view pre-populated for the affected segment And opening the deep link succeeds (HTTP 200) and loads within 2 seconds on broadband And applying a recommendation records an audit entry with alert_id, user, action, timestamp, and parameters And the alert details screen reflects recommendation status/outcome after application
Alert Deduplication and Signal Grouping
Given multiple signals fire for the same segment within a 10-minute window When alerts are generated Then they are grouped into a single alert containing all findings And identical alerts for the same segment/metric within 30 minutes are deduplicated (no additional notifications) And if a higher-severity finding occurs during the dedup window, the existing alert is updated to the higher severity and a single update notification is sent And all related alerts share a correlation_id for traceability
Alert Rate Limiting and Fatigue Prevention
Given a user has received alerts within the current hour When additional non-critical alerts would exceed the default limit of 5 per hour per channel Then the excess alerts are suppressed and added to an hourly digest And Critical severity alerts may bypass the limit up to 2 per hour per user And the digest is delivered at the top of the next hour with counts and deep links to details And suppressed alerts are visible in the dashboard with a "suppressed by rate limit" indicator
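One reading of the rate-limit rules above (5 non-critical per hour per channel, up to 2 Critical bypasses, excess routed to a digest) can be sketched as:

```python
from collections import defaultdict

class HourlyAlertLimiter:
    """Per-(user, channel, hour) cap; suppressed alerts accumulate in a digest."""

    def __init__(self, limit: int = 5, critical_bypass: int = 2):
        self.limit = limit
        self.critical_bypass = critical_bypass
        self.sent = defaultdict(int)      # (user, channel, hour) -> non-critical count
        self.critical = defaultdict(int)  # (user, channel, hour) -> bypass count
        self.digest = defaultdict(list)   # (user, channel, hour) -> suppressed alerts

    def deliver(self, user, channel, hour, alert, severity="Warning") -> bool:
        key = (user, channel, hour)
        if severity == "Critical" and self.critical[key] < self.critical_bypass:
            self.critical[key] += 1  # Critical bypasses the cap, itself capped at 2/hour
            return True
        if self.sent[key] < self.limit:
            self.sent[key] += 1
            return True
        self.digest[key].append(alert)  # suppressed: delivered in next hour's digest
        return False
```

Whether Criticals beyond the bypass budget fall back to the normal cap (as here) or are always delivered is a product decision the spec leaves open.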
Bias Signal Detection Coverage and Validity
Rule: Arm imbalance triggers when chi-square p<0.05 on allocation by arm (min n≥30/arm) or absolute arm ratio deviation ≥10% when n≥50
Rule: Therapist clustering triggers when Gini coefficient of therapist load across arms ≥0.30 (min n≥10 therapists) or chi-square p<0.05 for therapist×arm distribution
Rule: Early dropouts trigger when 7-day retention difference between arms ≥15 pp (min n≥30/arm) or log-rank p<0.05 with hazard ratio ≥1.3
Rule: Device gaps trigger when ≥25% of sessions are offline in a cohort (min n≥100 sessions) and offline share differs between arms by ≥10 pp or Fisher's exact p<0.05
Rule: Outcome drift triggers when rolling 14-day mean change differs from baseline by ≥0.5 SD or Mann-Whitney p<0.05 between arms (min n≥30/arm)
Then each trigger produces an alert including metric, statistics, time window, and affected segments
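The Gini coefficient used in the therapist-clustering rule can be sketched as follows (standard sorted-rank formula; 0 means perfectly even load, values near 1 mean highly concentrated):

```python
def gini(loads: list) -> float:
    """Gini coefficient of therapist caseload sizes."""
    xs = sorted(loads)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n, with i starting at 1
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n
```

Under the rule above, a detector would fire when `gini(loads) >= 0.30` with at least 10 therapists contributing load.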
Delivery Channels, Preferences, and Reliability
Given a user has in-app and/or email channels enabled per signal type When an alert is emitted Then an in-app notification appears within 10 seconds of alert emission And an email is sent within 60 seconds with 95% delivery success measured over 24 hours And user channel and signal-type preferences are respected for all deliveries And deliveries are idempotent (no duplicate notifications for the same alert and recipient) And failed deliveries are retried up to 3 times with exponential backoff and are logged
Adjustment Recommendation Engine
"As a lead therapist, I want clear, actionable fixes when bias is detected so that I can correct course without needing deep statistical expertise."
Description

Decision-support module that, upon detected bias, proposes concrete remedies such as rebalancing enrollment blocks, temporary holds on overrepresented segments, weighted or stratified comparisons, and therapist reassignment guidance. Each recommendation includes rationale, projected impact, and simulation previews on recent data, with an approval workflow and safe-apply actions that propagate to scheduling/assignment and analytics configurations. All actions are reversible and versioned.

Acceptance Criteria
Arm Imbalance triggers Rebalance Enrollment Blocks
Given a configured arm-imbalance threshold T_imbalance (default 10%) per stratum (age, sex, protocol stage) And current enrollment shows absolute arm allocation difference > T_imbalance in any active stratum over the last 50 eligible enrollments or the last 14 days (whichever occurs first) When Bias Guard detects the condition Then the engine emits a "Rebalance Enrollment Blocks" recommendation within 2 minutes of detection And the recommendation includes per-stratum target quotas to reach parity within 14 days at current average daily intake (with numeric targets) And the recommendation includes projected time-to-parity and expected reduction in standardized difference with 95% CI And a simulation preview is generated using the last 14 days of data with a fixed random seed for reproducibility And the recommendation references the detection rule ID and data snapshot version And no recommendation is emitted when the observed difference <= T_imbalance
Therapist Clustering prompts Reassignment Guidance
Given therapist assignment concentration exceeds threshold T_cluster (e.g., HHI > 0.18 or Gini > 0.35) in any protocol stage or site for the last 30 days When the condition is detected Then the engine generates a "Therapist Reassignment Guidance" recommendation within 5 minutes And proposes at least two viable reassignment options that satisfy constraints: therapist capacity limits, licensure, availability windows, and patient preferences And each option simulates an expected ≥20% reduction in the clustering metric (HHI or Gini), showing before/after values with 95% CI And safe-apply steps list concrete scheduling changes (patient->therapist assignments) with counts of affected patients and sessions And do-no-harm checks pass: no patient unassigned, no therapist >100% capacity, no violation of time-window constraints And a rollback plan with rollback version ID is included
Early Drop-out spike triggers Temporary Hold and Weighted Comparison
Given the early drop-out hazard ratio HR_early for any segment exceeds T_dropout (default 1.5) during the first 3 sessions over the last 14 days When the condition is detected Then the engine emits a "Temporary Hold + Weighted Comparison" recommendation within 2 minutes And the recommendation specifies hold scope (segment definition) and duration (<=14 days) with justification tied to detection metrics And analytics parameters are provided for weighted or stratified comparisons (including weight formula and strata definition) ready to apply And a simulation preview reports projected bias reduction, variance change, and power; a warning is shown if projected power < 0.80 And safe-apply updates are prepared for enrollment/assignment (to enforce the hold) and analytics configuration (to apply weights/strata) And the hold auto-expires when HR_early drops below 1.2 for 7 consecutive days or at duration end, whichever comes first
Offline-heavy usage triggers Stratified Analysis / Gap Mitigation
Given any segment has >25% of sessions recorded offline with median sync latency >24 hours and key-metric missingness >10% in the last 14 days When the condition is detected Then the engine recommends "Stratified Analysis / Data Gap Mitigation" within 5 minutes And provides options to stratify analyses by online/offline status and/or apply missing-data weighting with specified parameters And triggers optional device-diagnostics nudges for affected users (opt-in) with success/failure tracking And if missingness >30%, the recommendation is marked "Risk—Manual Review Required" and cannot be auto-applied And a simulation preview shows expected changes in estimate bias, coverage, and effective sample size with 95% CI
Recommendation includes Rationale, Impact, and Simulation Preview
Given any recommendation is generated Then it includes a human-readable rationale citing the detection rule, thresholds, and segment/stratum affected And it displays projected impact metrics appropriate to the bias type (e.g., parity delta, HHI/Gini reduction, HR_early change) with numeric values and 95% CI And it includes a simulation preview panel showing inputs (time window, sample size), method, random seed, and data snapshot version And re-running the simulation with the same snapshot and seed reproduces metrics within ±1% relative tolerance And the recommendation panel renders within 3 seconds for datasets <=100k rows and within 10 seconds for datasets <=1M rows
Approval workflow safe-applies changes with versioning and rollback
Given a recommendation is in Pending Approval and the user has Approver role When the user initiates approval Then a change diff summarizes intended updates to scheduling/assignment and analytics configurations, including counts of affected entities And a two-step confirmation is required (review + confirm) with an optional approver comment And upon confirmation, changes are applied atomically; the recommendation status becomes Applied with a new configuration version tag And an audit record is written capturing user, timestamp, before/after snapshot IDs, rationale ID, simulation ID, and diff summary And a one-click Rollback is available that restores the prior version within 1 minute and writes a reversal audit entry And if any apply step fails, the entire operation is rolled back and an error with trace ID is presented; audit status shows Failed—Rolled Back
Propagation updates scheduling and analytics with verification
Given an approved recommendation requires propagation When it is applied Then scheduling/assignment systems reflect changes within 2 minutes and analytics configuration endpoints update within 1 minute And downstream dashboards display the new state on next refresh or within 5 minutes, whichever comes first And automated verification samples at least 5% of affected assignments to confirm new rules are in effect and validates analytics query parameters/schema And alerts are triggered if any propagation SLA is breached, with issue details and remediation guidance
Audit Trail & Evidence Export
"As a compliance officer, I want a defensible record of bias checks and adjustments so that audits can verify methodological integrity and accountability."
Description

Immutable, time-stamped ledger of all integrity checks, thresholds in force, alerts generated, decisions taken, and adjustments applied, including user, version, and data snapshot hashes. Provides de-identified evidence packs exportable to PDF/CSV for payer, IRB, or internal audits, with retention policies and role-based access controls. Integrates with MoveMate’s reporting layer to link audit entries to relevant cohorts and outcomes.

Acceptance Criteria
Immutable, Time-Stamped Ledger for Integrity Events
- Given a Bias Guard event occurs (check executed, alert generated, decision taken, or adjustment applied), when it is recorded, then the ledger entry must include: eventId (UUIDv4), eventType (enum), occurredAt (UTC ISO 8601 with ms), actorId and actorType (system/user), ruleId, ruleVersion, threshold values, outcome summary, cohortIds, dataSnapshotHash (SHA-256 hex), prevHash, and currentHash.
- Given the ledger API, when a client attempts to update or delete an existing entry, then the request is rejected with 405 and a separate security audit entry is created; only append operations are allowed.
- Given a full ledger segment, when the hash chain is validated end-to-end, then verification passes (no breaks) and reports the index of any integrity failure if present.
- Given system and client clock variance, when timestamps are persisted, then server-side monotonic time is used and client-supplied timestamps are stored only as metadata, not as the authoritative occurredAt.
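The prevHash/currentHash chain and its end-to-end verification can be sketched as follows (canonical JSON keeps the hash stable regardless of key order; field names beyond those in the spec are illustrative):

```python
import hashlib
import json

GENESIS = "0" * 64  # prevHash of the first ledger entry

def entry_hash(entry: dict, prev_hash: str) -> str:
    # Canonicalize before hashing so logically equal entries hash equal.
    payload = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def append(ledger: list, entry: dict) -> None:
    """Append-only: each record links to the previous record's currentHash."""
    prev = ledger[-1]["currentHash"] if ledger else GENESIS
    ledger.append({**entry, "prevHash": prev, "currentHash": entry_hash(entry, prev)})

def verify(ledger: list):
    """Return the index of the first broken link, or None if the chain is intact."""
    prev = GENESIS
    for i, rec in enumerate(ledger):
        body = {k: v for k, v in rec.items() if k not in ("prevHash", "currentHash")}
        if rec["prevHash"] != prev or rec["currentHash"] != entry_hash(body, prev):
            return i
        prev = rec["currentHash"]
    return None
```

Tampering with any stored field changes that record's recomputed hash, so `verify` reports the exact index of the first integrity failure, as the criterion requires.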
Role-Based Access Controls for Audit Trail and Exports
- Given a user with role Admin, when accessing audit entries and exports, then access is granted for the entire organization scope.
- Given a user with role Clinician, when accessing audit entries, then only entries linked to the clinician’s assigned cohorts/patients are returned; export is limited to that scope.
- Given a user with role Auditor, when accessing audit entries/exports, then only de-identified views are available across organization scope; no PHI fields are present.
- Given a user without required permissions, when attempting to view or export audit data, then the system returns 403 and logs an access-denied audit event with requesterId and reason.
- Given any successful export, when the file is generated, then it is watermarked with requester role, userId, request timestamp, and scope filters.
De-identified Evidence Pack Export (PDF/CSV)
- Given filters (date range, event types, cohorts) up to 100k entries, when an export is requested, then CSV and PDF files are generated within 60 seconds 95th percentile and queued with progress updates for larger sets.
- Given the export pipeline, when data are transformed, then direct identifiers (names, emails, device IDs) are excluded and patient identifiers are replaced with stable pseudonyms (salted HMAC) consistent within the export.
- Given an export completes, when files are delivered, then each file’s SHA-256 checksum is provided and matches upon verification; a manifest entry is written to the audit ledger referencing file names and checksums.
- Given the PDF export, when opened, then it contains a cover page with organization, requester, generatedAt (UTC), rulesetVersion, modelVersion, dataSnapshotHash range, filters applied, and entry counts.
- Given an export is initiated, when the job starts and completes (or fails), then start/end timestamps, status, row counts, and error (if any) are recorded in the audit ledger.
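The salted-HMAC pseudonymization above can be sketched as follows; a fresh salt per export keeps pseudonyms stable within one evidence pack but unlinkable across packs:

```python
import hashlib
import hmac

def pseudonym(patient_id: str, export_salt: bytes, length: int = 16) -> str:
    """Stable within one export (same salt); unlinkable across exports (new salt)."""
    digest = hmac.new(export_salt, patient_id.encode(), hashlib.sha256).hexdigest()
    return f"P-{digest[:length]}"
```

Because HMAC is keyed, an auditor holding only the export cannot reverse pseudonyms to patient identifiers without the salt, which stays inside MoveMate.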
Rules, Model, and Data Snapshot Versioning and Hashing
- Given any change to integrity thresholds, rule definitions, or model versions, when the change is published, then a new rulesetVersionId is created with a contentHash (SHA-256 over canonicalized config) and appended to the ledger.
- Given an audit entry is created, when metadata are persisted, then the rulesetVersionId, modelVersion, and dataSnapshotHash used to produce the event are stored and are immutable.
- Given the same input dataset excerpt and canonicalization procedure, when the dataSnapshotHash is recomputed, then it matches the stored hash exactly.
- Given a historical audit entry, when the associated rulesetVersion is retrieved, then the exact configuration at the time of the event is returned, not the current one.
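The "SHA-256 over canonicalized config" contentHash can be sketched as; the canonicalization choice (sorted keys, no whitespace) is one reasonable convention, not mandated by the spec:

```python
import hashlib
import json

def content_hash(config: dict) -> str:
    """SHA-256 over canonical JSON: logically equal configs hash identically."""
    canon = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canon.encode()).hexdigest()
```

Canonicalization is what makes the hash deterministic: two services serializing the same config in different key orders still agree on the rulesetVersionId's contentHash.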
Retention Policies and Legal Holds for Audit Artifacts
- Given retentionPeriod.auditEntries = 7 years and retentionPeriod.exports = 3 years, when the nightly retention job runs, then entries older than their configured period are purged or archived and a purge summary entry (counts, date range, jobId) is written to the ledger.
- Given a legalHold flag on a cohort or case, when the retention job evaluates matching entries, then those entries are excluded from purge and the exclusion is logged with holdId.
- Given a purge failure (partial or full), when the job completes, then an alert is emitted and the failure details (scope, error, retryAt) are recorded; retries occur within 24 hours.
- Given access to purge logs, when queried by date range, then results return within 2 seconds for up to 10k records with pagination.
Linkage to Reporting Layer: Cohorts and Outcomes
- Given an audit entry referencing a cohortId, when viewed in the UI, then a deep link navigates to the reporting view for that cohort with the same time window applied.
- Given the reporting API, when requesting joined audit+outcome data filtered by cohort and date range, then the first page (50 items) returns within 2 seconds for datasets up to 10k entries and supports sort by occurredAt and eventType.
- Given a cohort is archived or renamed, when historical audit entries are retrieved, then the original cohort label at event time is displayed via snapshot metadata.
- Given a user clicks from a report outcome metric to audit entries, when the filter is applied, then only entries that contributed to that metric’s calculation are returned (traceability).
Configurable Guardrails & Thresholds
"As a clinic admin, I want to tailor bias sensitivity and alerting to my setting so that I reduce noise while catching meaningful risks."
Description

Administration interface and API to define monitored attributes and subgroups, set alert thresholds and statistical methods (e.g., absolute vs standardized differences), configure routing, escalation, and quiet hours, and select sensitivity presets for small clinics versus research contexts. Validates configurations, versions changes, and supports environment-level defaults to align with clinic policies and regulatory constraints.

Acceptance Criteria
Define Monitored Attributes & Subgroups (UI and API)
- Given an admin with Config:Write permission, when they create a categorical attribute (e.g., protocol_stage) with named subgroups, then the system persists the attribute, returns a success confirmation, and the attribute appears in the monitored-attributes list with correct type and subgroup metadata.
- Given an admin, when they create a continuous attribute (e.g., age) with subgroup ranges, then overlapping or out-of-bounds ranges are rejected with field-level error messages and the configuration cannot be saved.
- Given a duplicate attribute key is submitted (UI or API), when saving, then the system returns HTTP 409 (API) or inline error (UI) and prevents duplicates.
- Given a valid attribute payload is sent to the API, when POST /bias-guard/config/attributes is called, then the service returns 201 Created with the new attribute id and a GET request returns the same definition.
- Given an attribute is referenced by any active rule, when a delete is attempted, then the system blocks deletion with a clear dependency error and offers a safe replacement workflow.
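The overlapping/out-of-bounds check for continuous-attribute subgroup ranges can be sketched as follows. This assumes half-open `[low, high)` ranges and returns field-level error strings (empty list = valid); the half-open convention and message format are illustrative assumptions, not the product's actual API:

```python
def validate_ranges(ranges, lower_bound, upper_bound):
    """Validate subgroup ranges for a continuous attribute.

    `ranges` is a list of (low, high) pairs, high-exclusive (assumed).
    Returns field-level error strings; an empty list means the
    configuration may be saved.
    """
    errors = []
    for i, (lo, hi) in enumerate(ranges):
        if lo >= hi:
            errors.append(f"ranges[{i}]: low must be < high")
        if lo < lower_bound or hi > upper_bound:
            errors.append(f"ranges[{i}]: outside [{lower_bound}, {upper_bound}]")
    # Sort by lower bound, then check each adjacent pair for overlap.
    ordered = sorted(ranges)
    for (lo1, hi1), (lo2, hi2) in zip(ordered, ordered[1:]):
        if lo2 < hi1:
            errors.append(f"ranges ({lo1}, {hi1}) and ({lo2}, {hi2}) overlap")
    return errors
```

For example, age bands `[(0, 18), (18, 65), (65, 120)]` within bounds 0–120 validate cleanly, while `[(0, 20), (18, 65)]` fails the overlap check and blocks the save, matching the criterion above.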
Configure Alert Thresholds and Statistical Methods
- Given an admin opens the thresholds panel for an attribute, when they select absolute difference vs standardized difference and enter numeric thresholds, then the selection and values are saved and displayed in the summary.
- Given minimum sample size and lookback window fields, when values outside allowed ranges are entered, then validation prevents save and shows allowable ranges.
- Given a clinic wants two-sided tests, when two-sided is selected, then alerting triggers only when |effect| ≥ threshold; for one-sided, it triggers only in the configured direction.
- Given a valid API payload to PUT /bias-guard/config/thresholds, when saved, then subsequent evaluations use the updated statistical method and thresholds within the next evaluation cycle.
- Given per-subgroup overrides are specified, when saved, then evaluation uses subgroup-specific thresholds where defined and falls back to global thresholds otherwise.
Routing and Escalation Rules for Alerts
- Given routing rules by severity and attribute, when a bias alert is generated, then notifications are sent to the configured channels (email, in‑app) and recipients (roles/groups) within 60 seconds.
- Given an escalation rule of “unacknowledged > 24h,” when an alert is not acknowledged in 24 hours, then an escalation notification is sent to the next-level recipients and the alert status reflects escalated.
- Given deduplication is enabled with a 2-hour window, when repeated alerts for the same signal occur, then only one notification per window is sent while the alert timeline records all events.
- Given an API config for routing is updated via PUT /bias-guard/config/routing, when successful, then GET returns the new rules and subsequent alerts honor them.
- Given a user without Config:Write permission attempts to change routing, when saving, then the system returns 403 (API) or disables the UI controls.
Quiet Hours and Alert Suppression Windows
- Given clinic-level timezone is set, when quiet hours are configured (e.g., 22:00–06:00), then notifications are suppressed during that window in the clinic’s timezone while alerts continue to be logged.
- Given a rule is marked critical, when it fires during quiet hours, then it bypasses suppression and sends notifications with a critical flag.
- Given suppression for resolved alerts is set to 24h, when the same condition reoccurs within 24h of resolution, then no notification is sent and the event is appended to the existing alert thread.
- Given changes to quiet hours are saved, when viewing the audit log, then the previous and new windows, editor, and timestamp are recorded.
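The subtle part of the quiet-hours criterion is that the window (e.g., 22:00–06:00) spans midnight and is defined in the clinic's timezone, while events typically arrive timestamped in UTC. A minimal stdlib sketch of the suppression check (function name and signature are assumptions):

```python
from datetime import datetime, time, timezone
from zoneinfo import ZoneInfo

def in_quiet_hours(ts_utc: datetime, start: time, end: time, tz: str) -> bool:
    """True if a UTC timestamp falls in the clinic's quiet-hours window.

    The window is expressed as local wall-clock times in the clinic's
    IANA timezone `tz`. Windows that span midnight (start > end,
    e.g., 22:00-06:00) are handled by the disjunctive branch.
    """
    local = ts_utc.astimezone(ZoneInfo(tz)).time()
    if start <= end:
        # Same-day window, e.g., 12:00-14:00.
        return start <= local < end
    # Window spans midnight: match late evening OR early morning.
    return local >= start or local < end
```

With quiet hours 22:00–06:00 in America/New_York, 04:00 UTC (23:00 EST the previous day) is suppressed while 17:00 UTC (12:00 EST) is not; critical-rule bypass would be a separate check layered on top.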
Sensitivity Presets for Clinic vs Research Contexts
- Given presets Small Clinic, Standard, and Research are available, when a preset is selected, then default thresholds, sample-size floors, smoothing windows, and multiple-testing controls are populated.
- Given a preset is applied and then individual fields are modified, when saving, then the configuration is marked as Custom with a diff view versus the base preset.
- Given a clinic is policy-locked to Research preset, when a user attempts to change restricted fields, then the UI indicates the lock and the API returns 403 for those fields while allowing permitted overrides.
- Given a preset is applied via API PATCH /bias-guard/config/preset with id, when successful, then GET reflects the preset and all derived parameters.
Configuration Validation and Error Handling
- Given incomplete or conflicting configurations (e.g., subgroup overlap, missing thresholds), when Save is clicked or API is called, then the system returns 400 with field-specific errors and no partial changes are applied.
- Given a full configuration JSON is submitted, when validate-only=true is specified, then the system performs validation and returns a pass/fail report without persisting changes.
- Given a configuration references unknown attribute ids, when saving, then the operation fails with a clear reference error listing missing ids.
- Given a successful save, when refreshing the page or calling GET, then the persisted configuration matches exactly (structural equality) the submitted payload (excluding server-generated fields).
Versioning, Audit Trail, and Environment-Level Defaults
- Given any configuration change is saved, when viewing history, then a new immutable version with timestamp, editor, environment, and diff is recorded and viewable.
- Given a prior version is selected, when Rollback is confirmed, then the system creates a new version identical to the selected one and activates it without data loss.
- Given environment-level defaults exist (e.g., org, clinic, environment), when a clinic inherits defaults, then overridden fields are clearly indicated and resets can restore inheritance.
- Given regulatory constraints lock certain fields at the environment level, when a user attempts to edit them at clinic level, then edits are blocked with an explanation and 403 via API.
- Given exports are requested, when exporting the active configuration, then a signed JSON artifact with version id and checksum is generated for audit purposes.
Bias Dashboard & Visualizations
"As a therapist, I want an at-a-glance view of bias risks in my caseload so that I can adjust assignments and outreach promptly."
Description

Unified UI that presents cohort balance cards, therapist clustering visualizations, dropout funnels, and device connectivity heatmaps alongside current risk status and recent alerts. Offers drill-down from aggregate to de-identified patient and therapist views, shows recommendation previews with expected impact, and supports time-based comparisons. Built on MoveMate’s existing dashboard stack with responsive layouts for web and mobile.

Acceptance Criteria
Cohort Balance Cards Risk Status
- Given a configured time range and arm definitions, When the Bias Dashboard loads, Then each Cohort Balance card shows per-arm N and % for age band, sex, and protocol stage.
- Given a test fixture with known imbalances, When cards render, Then risk badges reflect engine-configured thresholds (Green/Yellow/Red) and match expected statuses in the fixture.
- Given the user changes the time range or applies filters (arm, sex, age band, protocol stage), When cards refresh, Then values and risk badges update within 2 seconds at p95.
- Given recent bias alerts exist for the selected range, When viewing the dashboard, Then the Recent Alerts list shows up to 10 most recent items with timestamp, alert type, affected dimension, and a link to the related visualization.
Therapist Clustering Visualization & Drill‑Down
- Given session data labeled by de-identified therapist and arm, When the Clustering view is opened, Then each therapist shows an engine-provided clustering score and those ≥ the configured threshold are highlighted.
- Given a highlighted therapist group is clicked, When drill-down opens, Then a de-identified therapist view appears with aggregate assignment counts by arm and no patient list exposed unless k-anonymity ≥ 5.
- Given the user lacks the required role, When attempting drill-down, Then access is denied and no disclosive data is shown.
- Given time range or arm filters are changed, When the view refreshes, Then clustering scores recompute and the highlighted set updates within 2 seconds at p95.
- Given a clustering alert exists in the selected range, When opening the view, Then an inline banner summarizes the alert and links to a recommendation preview.
Dropout Funnel with Early Dropout Highlighting
- Given the engine-configured definition of early dropout and a selected time range, When the Dropout Funnel is opened, Then steps display counts and conversion rates and the overall early-dropout rate is shown.
- Given arm, sex, age band, or protocol stage filters are applied, When the funnel recomputes, Then all counts, rates, and early-dropout figures update within 2 seconds at p95.
- Given a funnel step is selected, When drill-down is invoked, Then a de-identified list meeting k-anonymity ≥ 5 is shown with no direct identifiers.
- Given Compare Mode is enabled, When viewing the funnel, Then A vs B ranges render side-by-side with absolute and percent deltas for each step and the overall rate.
Device Connectivity Heatmap & Offline-Heavy User Flags
- Given event logs with online/offline status and device type, When the Connectivity Heatmap is opened, Then it displays daily connectivity rate by device type with arm and time filters.
- Given configured thresholds for offline-heavy usage and minimum activity, When the dataset is processed, Then users breaching thresholds are counted and a flag banner shows the count and links to impacted lists.
- Given a heatmap cell is hovered or tapped, When the tooltip appears, Then it shows date, device type, numerator/denominator, and rate.
- Given a flagged segment is clicked, When drill-down opens, Then a de-identified patient list (k-anonymity ≥ 5) filtered to the segment is shown.
Recommendation Previews With Expected Impact
- Given at least one active bias flag, When Recommendations are opened, Then the UI lists recommended fixes (re-balancing blocks, temporary holds, weighted comparisons) sourced from the engine.
- Given a recommendation is selected, When its preview loads, Then current vs projected metrics (value and % change) are shown with 95% CI and assumptions, without persisting any change.
- Given preview parameters (e.g., hold duration, weighting factor, block size) are adjusted, When the user applies changes, Then projections recompute within 3 seconds at p95 and update the impact visualization.
- Given data volume is insufficient for a stable estimate, When preview is requested, Then an "insufficient data" notice is shown and projections are withheld.
Time-Based Comparison Mode Across Visuals
- Given two non-overlapping date ranges A and B, When Compare Mode is toggled on, Then supported visuals (balance cards, clustering, funnel, heatmap) render A vs B with absolute and percent deltas and display sample sizes per range.
- Given the app time zone is changed, When Compare Mode is active, Then boundaries and counts re-align to the selected zone and deltas remain consistent.
- Given overlapping ranges are selected, When Compare Mode is toggled, Then a validation message explains the conflict and comparison is disabled until resolved.
Responsive Layout & Accessibility
- Given desktop (≥1200px), tablet (768–1199px), and mobile (≤767px) viewports, When navigating the Bias Dashboard, Then visuals render without horizontal scroll and critical KPIs remain visible above the fold on mobile.
- Given a keyboard-only user, When interacting with the dashboard, Then all controls are reachable in a logical tab order with visible focus and operable via Enter/Space.
- Given a screen-reader user, When focusing charts and controls, Then accessible names, roles, and descriptions are exposed; risk statuses are announced.
- Given the risk color palette, When rendered on any supported display, Then color contrast meets WCAG 2.1 AA and color is not the sole indicator (shapes/patterns present).
- Given mobile drill-down overlays, When opened, Then they present as full-screen sheets and respond to close actions with first paint within 300ms at p95.

Winner Call

Clear, defensible verdicts with lift percentages, confidence ranges, and safety overlays (form‑flag rate, SafeHold pauses). Choose conservative, balanced, or fast decision modes and get a simple “Adopt A” banner when thresholds are met. Cuts debate, speeds consensus, and makes it easy to share proof with owners and payers.

Requirements

Decision Mode Configuration (Conservative/Balanced/Fast)
"As a clinic lead, I want selectable decision modes so that Winner Calls align with our risk tolerance and speed-to-decision needs."
Description

Provide selectable decision modes that map to predefined statistical and safety thresholds governing when a clear verdict can be called. Each mode sets minimum lift targets, required confidence interval bounds, sample-size floors, and safety overlay behavior (veto vs. warn). Defaults: Conservative (higher lift threshold, 95% CI lower bound > 0, strict safety veto), Balanced (moderate lift, 90% CI lower bound > 0, safety veto with configurable grace), Fast (lower lift, 80% CI lower bound > 0, safety warn). Modes are configurable at clinic and experiment levels, with an API and UI toggle. Configuration is persisted, versioned, and applied consistently across calculations and UI state so the “Adopt A” banner only appears when the active mode’s conditions are met.

Acceptance Criteria
Toggle Decision Mode via UI and Persist per Clinic and Experiment
Given a clinic admin with edit permissions is on Clinic Settings > Decision Mode
When they select Balanced and click Save
Then the clinic-level decision mode is stored with a new version number and visible as Balanced upon page refresh
And experiments created afterward default to Balanced unless explicitly overridden
And existing experiments retain their prior mode until individually changed
When a user opens an experiment and switches its mode to Fast
Then the experiment-level override is persisted and survives reload and new sessions
And a mode badge on the experiment header reflects Fast
API Configuration: Set and Retrieve Decision Mode with Versioning
Given a valid API token with config:write scope
When PATCH /clinics/{clinicId}/decision-mode with {"mode":"Conservative"}
Then the response is 200 with body including mode:"Conservative", version incremented by 1, updatedBy, and updatedAt
And a subsequent GET /clinics/{clinicId}/decision-mode returns the same mode and version
When PATCH with an unsupported mode value
Then the response is 400 with a descriptive validation error
When concurrent PATCH requests use a stale version or ETag
Then one request succeeds and the other returns 409 Conflict
And an audit log entry is created for each successful change
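The 400/409 semantics above amount to optimistic concurrency control on a versioned config record. A minimal in-memory sketch (class, field, and exception names are assumptions; only the version-check behavior follows the criteria):

```python
class ConflictError(Exception):
    """Raised on a stale version; the API layer would map this to HTTP 409."""

class DecisionModeStore:
    """Optimistic-concurrency sketch for the clinic decision-mode config."""

    VALID_MODES = {"Conservative", "Balanced", "Fast"}

    def __init__(self):
        self.mode, self.version = "Balanced", 1

    def patch(self, mode: str, expected_version: int) -> dict:
        if mode not in self.VALID_MODES:
            # API layer maps to HTTP 400 with a validation error.
            raise ValueError(f"unsupported mode: {mode}")
        if expected_version != self.version:
            # Caller's view is stale: one concurrent PATCH wins, this one 409s.
            raise ConflictError("stale version")
        self.mode = mode
        self.version += 1  # increment on every successful change
        return {"mode": self.mode, "version": self.version}
```

The `expected_version` argument plays the role of the If-Match/ETag precondition: two concurrent PATCHes both read version 1, the first write bumps it to 2, and the second fails with a conflict instead of silently overwriting.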
Decision Engine Applies Mode Thresholds to Control Adopt Banner
Given an experiment with calculated lift, confidence intervals, and sample sizes
And the active mode is Balanced
When the minimum lift meets or exceeds the Balanced lift threshold
And the 90% CI lower bound for the selected champion > 0
And sample-size floors for both arms are met
And no active safety veto is in effect
Then the "Adopt A" banner is displayed
When any one of these conditions is not met
Then the "Adopt A" banner is not displayed
And a reason tag explains the first failing condition
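The "first failing condition" requirement implies the engine evaluates the gate conditions in a fixed order and short-circuits with a reason tag. A sketch of that evaluation (threshold values, parameter names, and reason-tag strings are illustrative assumptions; the real engine reads them from the active mode's config):

```python
def adopt_banner(lift, ci_lower, n_a, n_b, safety_veto,
                 lift_threshold, sample_floor):
    """Evaluate banner eligibility in a fixed order.

    Returns (eligible, reason): reason is None when eligible, else the
    tag for the FIRST failing condition, per the acceptance criteria.
    """
    if lift < lift_threshold:
        return False, "lift_below_threshold"
    if ci_lower <= 0:
        return False, "ci_lower_bound_not_positive"
    if n_a < sample_floor or n_b < sample_floor:
        return False, "sample_floor_not_met"
    if safety_veto:
        return False, "safety_veto_active"
    return True, None
```

Keeping the checks ordered and short-circuiting makes the UI behavior deterministic: the same failing experiment always surfaces the same reason tag, which is what makes the criterion verifiable.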
Safety Overlay Behavior by Mode (Veto vs Warn)
Given safety metrics are evaluated (form-flag rate, SafeHold pauses)
When mode is Conservative and any safety threshold is breached
Then a Safety Veto state is applied and the "Adopt A" banner is blocked with a Safety Veto label
When mode is Balanced and a breach occurs
Then the "Adopt A" banner is blocked only after the configured grace window elapses without recovery; before expiry, a countdown warning is shown
When mode is Fast and a breach occurs
Then a Safety Warning badge is shown but the "Adopt A" banner remains eligible if other conditions are met
And updating the grace window value takes effect on the next evaluation cycle
Confidence Interval Level and Sample-Size Floors Enforced per Mode
Given mode is Conservative
When the 95% CI lower bound for the champion outcome ≤ 0 or either arm is below the sample-size floor
Then the verdict cannot be called and "Adopt A" does not appear
Given mode is Balanced
When the 90% CI lower bound > 0 and both arms meet or exceed the floor
Then the verdict may be called subject to lift and safety checks
Given mode is Fast
When the 80% CI lower bound > 0 and both arms meet or exceed the floor
Then the verdict may be called subject to lift and safety checks
And switching modes updates the CI level and floor checks immediately on the next computation tick
Mode Changes Mid-Experiment Are Versioned and Recalculated Consistently
Given an active experiment currently in Balanced mode
When a user changes the experiment mode to Conservative
Then a new configuration version is recorded with actor, timestamp, previousMode, newMode
And the decision metrics are recomputed using Conservative mode within 60 seconds
And the UI reflects the new mode and any resulting change to banner eligibility
And the experiment history shows the mode version used for each verdict snapshot
When a historical snapshot is viewed or exported
Then it displays the mode and version that were active at the time of that snapshot
Lift and Confidence Engine
"As a physical therapist, I want transparent lift and confidence metrics so that I can trust the Winner Call and explain it to stakeholders."
Description

Implement a statistical engine that computes lift percentages and confidence ranges between variants on predefined outcomes (e.g., adherence rate, form-flag reduction, session completion time). Support small- and large-sample cases with appropriate tests (proportions and means), corrections for imbalance, and optional resampling for robustness. Output includes lift %, confidence interval (low–high), decision-ready flags, and guardrails for insufficient data (e.g., suppress verdict until minimum sample and variance criteria are met). Provide deterministic rounding, units, and clear API contracts for the UI and export services. Support interim analyses with conservative spending rules to reduce false positives from repeated looks.

Acceptance Criteria
API Contract, Fields, Units, and Deterministic Rounding
Given a request for outcome_id="session_time" with units="s", conf_level=0.95, and rounding config lift_dp=1, ci_dp=1, p_dp=4
When the engine returns results
Then the JSON includes fields: outcome_id, outcome_type, variant_a, variant_b, lift_percent, ci_low, ci_high, conf_level, p_value, test_method, decision_ready, decision_reasons, units, sample_a.n, sample_b.n, version
And lift_percent is rounded to 0.1% using round-half-up; ci_low and ci_high are rounded to 0.1 units using round-half-up; p_value is rounded to 4 decimals
And numeric formatting uses '.' as decimal separator regardless of locale
And repeated calls with identical inputs produce bit-identical numeric outputs and stable field ordering
Proportion Outcomes (Large Sample) — Lift, CI, Decision Flag
Given outcome_type="proportion" with n_a>=50, n_b>=50, and each arm having expected successes>=5 and failures>=5
When the engine computes effect between variants A and B
Then it returns lift_percent=(p_b - p_a)/p_a*100 and a two-sided 95% CI appropriate for proportions
And test_method and ci_method are included (e.g., "two-proportion z", "Newcombe-Wilson")
And if the 95% CI excludes 0 and p_value<=0.05, decision_ready=true; otherwise decision_ready=false
And on a provided golden dataset, |lift_percent_error|<=0.1 percentage points and |CI_error|<=0.1 percentage points versus the reference
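For the large-sample proportion case, the core computation is a pooled two-proportion z-test plus an interval on the difference. A stdlib sketch (it uses a simple Wald CI on the difference as a stand-in; the criteria suggest Newcombe-Wilson for production, which is tighter near 0 and 1):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion(x_a, n_a, x_b, n_b, conf=0.95):
    """Lift % of B over A with a pooled z p-value and a Wald CI
    (in percentage points) on the raw difference p_b - p_a."""
    p_a, p_b = x_a / n_a, x_b / n_b
    lift_percent = (p_b - p_a) / p_a * 100

    # Pooled standard error for the null-hypothesis z-test.
    p_pool = (x_a + x_b) / (n_a + n_b)
    se_pool = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z_stat = (p_b - p_a) / se_pool
    p_value = 2 * (1 - NormalDist().cdf(abs(z_stat)))

    # Unpooled (Wald) CI on the difference, reported in percentage points.
    z_crit = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return {
        "lift_percent": lift_percent,
        "ci_low": (diff - z_crit * se) * 100,
        "ci_high": (diff + z_crit * se) * 100,
        "p_value": p_value,
    }
```

For example, 60/100 adherent sessions in arm A versus 75/100 in arm B gives a 25% relative lift with a CI on the difference that excludes 0 and p < 0.05, so `decision_ready` would be true under the rule above.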
Proportion Outcomes (Small Sample and Edge Cases) — Exact Methods and Guardrails
Given outcome_type="proportion" where min(n_a, n_b)<50 or any arm has min(successes, failures)<5
When the engine computes effect
Then it uses small-sample-appropriate methods (e.g., Fisher's exact for p_value and Wilson/score-based CI with Haldane–Anscombe correction) and reports lift_percent and 95% CI
And decision_ready=false unless n_a>=50 and n_b>=50 and the 95% CI excludes 0
And if any arm has zero variance (all successes or all failures), decision_ready=false and decision_reasons includes "insufficient_variation"
Mean Outcomes — Welch, Directionality, and Units
Given outcome_type="mean" (e.g., session completion time in seconds) with n_a>=30, n_b>=30, and non-zero variance in both arms
When the engine computes effect
Then it uses Welch's t-test to produce a two-sided 95% CI for the mean difference and returns lift_percent=(mean_b - mean_a)/mean_a*100 and CI in units="s"
And if lower_is_better=true, the sign of lift_percent is interpreted accordingly for decision logic; if the CI indicates improvement in the configured direction and p_value<=0.05, decision_ready=true; otherwise false
And on a golden dataset, |lift_percent_error|<=0.1% and |CI_error|<=0.1 units versus the reference
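The mean-outcome case uses Welch's unequal-variance standard error. This sketch approximates the t critical value with the normal quantile to stay stdlib-only, which is reasonable at the n ≥ 30 floor above; a production engine should use the exact t distribution with Welch–Satterthwaite degrees of freedom:

```python
from math import sqrt
from statistics import NormalDist

def welch_lift(mean_a, var_a, n_a, mean_b, var_b, n_b, conf=0.95):
    """Relative lift % and an approximate Welch CI on the mean
    difference (mean_b - mean_a), in the outcome's own units.

    Uses the normal quantile in place of the t quantile (assumption:
    n_a, n_b >= 30 make this a close approximation).
    """
    lift_percent = (mean_b - mean_a) / mean_a * 100
    se = sqrt(var_a / n_a + var_b / n_b)  # Welch (unpooled) SE
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    diff = mean_b - mean_a
    return lift_percent, diff - z * se, diff + z * se
```

With session completion times of 300 s (arm A) versus 270 s (arm B), `lift_percent` is −10%; under `lower_is_better=true` a CI entirely below zero counts as improvement in the configured direction.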
Allocation Imbalance and Stratified Correction
Given allocation ratios up to 80:20 and a configured stratification key with up to K<=5 strata
When the engine computes effect
Then for proportions it applies Mantel–Haenszel or IPW weighting; for means it applies weighted mean differences, and returns stratum_weights and method annotations
And on a synthetic no-effect dataset with imbalanced allocation, the absolute bias of lift_percent is <=0.2 percentage points and p_value>0.05
And if a requested stratum has n<5 in any arm, the engine collapses/drops that stratum and adds decision_reasons including "stratum_insufficient_n"
Resampling Robustness and Reproducibility
Given resampling.enabled=true with bootstrap_iters=5000 and random_seed=42
When the engine computes resampled intervals
Then it returns ci_boot_low and ci_boot_high in addition to the analytic CI and sets test_method_resample="bootstrap"
And repeated runs with the same seed return identical ci_boot_low/high; runs with different seeds vary by <=0.3 percentage points on the golden dataset
And if resampling.enabled=true but min(n_a, n_b)<10, the engine skips resampling and includes decision_reasons containing "resample_skipped_small_n"
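The reproducibility criterion (same seed, identical bounds) follows naturally from a seeded percentile bootstrap. A sketch for the difference-in-proportions case (function name and percentile convention are assumptions):

```python
import random

def bootstrap_diff_ci(successes_a, n_a, successes_b, n_b,
                      iters=5000, seed=42, conf=0.95):
    """Seeded percentile-bootstrap CI, in percentage points, for the
    difference in proportions (arm B minus arm A). Identical seeds
    yield bit-identical bounds, which is what the criteria require."""
    rng = random.Random(seed)  # isolated, deterministic RNG
    arm_a = [1] * successes_a + [0] * (n_a - successes_a)
    arm_b = [1] * successes_b + [0] * (n_b - successes_b)
    diffs = []
    for _ in range(iters):
        # Resample each arm with replacement at its original size.
        p_a = sum(rng.choices(arm_a, k=n_a)) / n_a
        p_b = sum(rng.choices(arm_b, k=n_b)) / n_b
        diffs.append((p_b - p_a) * 100)
    diffs.sort()
    lo = diffs[int(iters * (1 - conf) / 2)]
    hi = diffs[int(iters * (1 + conf) / 2) - 1]
    return lo, hi
```

Using a dedicated `random.Random(seed)` instance (rather than the module-level RNG) keeps results stable even when other code draws random numbers, and the small-n skip rule would simply be a guard before this function is called.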
Interim Analyses with Alpha Spending
Given planned_looks=L>=2, current_look=l where 1<=l<=L, alpha=0.05, and spending_rule="obrien_fleming"
When the engine evaluates significance
Then it provides adjusted_alpha for the current look such that cumulative spent alpha across looks <=0.05 and adjusted_alpha_1 < adjusted_alpha_L
And decision_ready=true only if p_value<=adjusted_alpha and the 95% CI excludes 0 and all guardrails are satisfied; otherwise decision_ready=false with decision_reasons including "alpha_boundary_not_crossed" when applicable
And the payload includes interim fields: planned_looks, current_look, adjusted_alpha, cumulative_spent_alpha, and boundary_crossed
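A common simplified form of the O'Brien–Fleming boundary is the Lan–DeMets spending approximation, where the nominal significance level at information fraction t = l/L is 2·(1 − Φ(z_{α/2}/√t)). The sketch below computes that nominal level per look; exact group-sequential boundaries require the joint distribution of the test statistics across looks, so a production engine should use a dedicated library rather than this approximation:

```python
from math import sqrt
from statistics import NormalDist

def obrien_fleming_alpha(planned_looks, current_look, alpha=0.05):
    """Nominal O'Brien-Fleming-style boundary at look l of L.

    Uses alpha(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t))) with
    t = l / L. Early looks get much stricter thresholds; at t = 1
    the boundary recovers the full alpha. This is an approximation,
    not an exact group-sequential boundary.
    """
    t = current_look / planned_looks
    z_half = NormalDist().inv_cdf(1 - alpha / 2)
    return 2 * (1 - NormalDist().cdf(z_half / sqrt(t)))
```

With two planned looks, the first look's threshold is roughly 0.006 while the final look's is 0.05, giving the required property adjusted_alpha_1 < adjusted_alpha_L and keeping early stopping conservative.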
Safety Overlay Integration (Form-Flag and SafeHold)
"As a clinician, I want safety overlays incorporated into Winner Calls so that no variant is adopted if it increases patient risk."
Description

Integrate patient safety signals—form‑flag rate and SafeHold pauses—into the decision logic as gating and weighting factors. Define thresholds relative to baseline and across variants; when exceeded, either veto the call or downgrade the decision per active mode. Compute and display safety deltas alongside lift, stratify by exercise type and cohort risk level, and surface rationale text explaining any blocks. Support clinician override with required attestation, audit logging, and time‑bound exception windows. Ensure safety overlays propagate to exports and the audit trail to maintain defensibility.

Acceptance Criteria
Mode-Aware Safety Gating on Winner Call
Given decision mode = Conservative and per-variant safety deltas are computed by stratum (exercise type, cohort risk)
When any stratum’s form-flag delta% > configured.conservative.formFlag.hardLimit OR SafeHold delta% > configured.conservative.safeHold.hardLimit
Then the Winner Call is vetoed (no “Adopt <Variant>” banner), the decision status = Vetoed, and the rationale lists each failing stratum, metric, observed value, and limit
Given decision mode = Balanced and safety deltas are computed by stratum
When any stratum exceeds configured.balanced.[metric].hardLimit
Then veto the Winner Call and show rationale
When any stratum exceeds configured.balanced.[metric].softLimit but ≤ hardLimit
Then downgrade decision to Needs Review (no “Adopt” banner) and show rationale
Else if lift meets adoption thresholds
Then show “Adopt <Variant>” banner
Given decision mode = Fast and safety deltas are computed by stratum
When any stratum exceeds configured.fast.[metric].severeLimit
Then veto the Winner Call and show rationale
When any stratum exceeds configured.fast.[metric].softLimit but < severeLimit
Then downgrade decision to Needs Review and show rationale
Else if lift meets adoption thresholds
Then show “Adopt <Variant>” banner
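The three mode policies collapse into one gating rule: compare each stratum's delta against a hard (or severe) limit for a veto, and against a soft limit for a downgrade, with Conservative skipping the soft tier. A sketch of that evaluation (dictionary shapes, key names, and the strict-inequality edges are assumptions; the real engine reads limits from config):

```python
def safety_gate(mode, deltas, limits):
    """Gate a Winner Call on per-stratum safety deltas.

    mode: "conservative", "balanced", or "fast".
    deltas: {(stratum, metric): observed delta %}.
    limits: {mode: {metric: {"soft": x, "hard"/"severe": y}}} (assumed shape).
    Returns "vetoed", "needs_review", or "pass".
    """
    hard_key = "severe" if mode == "fast" else "hard"
    outcome = "pass"
    for (stratum, metric), value in deltas.items():
        lim = limits[mode][metric]
        if value > lim[hard_key]:
            return "vetoed"  # any hard/severe breach vetoes immediately
        if mode != "conservative" and value > lim["soft"]:
            outcome = "needs_review"  # soft breach downgrades, keep scanning
    return outcome
```

A veto short-circuits because one hard breach blocks the call regardless of other strata, while soft breaches accumulate into at most a Needs Review outcome; the rationale text would be collected alongside each comparison in a fuller implementation.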
Safety Deltas Display with Stratification and Rationale
Given variants have been evaluated against baseline per stratum
Then for each variant display alongside lift: form-flag delta% and SafeHold delta%, 95% CI, and sample size (n) for Overall, per exercise type, and per cohort risk level
And values are rounded to one decimal place, units shown as %, and tooltips include metric definitions
When any stratum triggers a veto/downgrade
Then show an inline rationale panel listing: mode, metric, stratum, observed value, threshold, comparison operator, and outcome (Vetoed/Needs Review)
When no safety limits are breached
Then show a “Within safety limits” indicator
And all displayed safety values match the values used in decision logic within ±0.1%
Configurable Thresholds and Mode Policies
Given an admin updates configuration for safety overlay thresholds (per mode soft/hard/severe limits for form-flag and SafeHold, minimum sample size, CI width tolerance)
When the config is saved
Then all pending and future Winner Calls are re-evaluated using the new config within 60 seconds
And a config-change event is written to the audit log with prior and new values, user, and timestamp
And per-stratum gating uses matched baselines (same exercise type and cohort risk) with weighted overall aggregates per configured weighting rule
When configuration is invalid (e.g., softLimit ≥ hardLimit)
Then the save is rejected with a clear validation error and no changes are applied
Clinician Override with Attestation and Time-Bound Exception
Given a Winner Call is Vetoed or Needs Review due to safety overlays
When a user with Override Winner Call permission selects Override
Then a modal requires: free-text attestation (≥15 characters), risk acknowledgement checkbox, and selection of an exception window within configured.maxExceptionDuration
And submission without all required elements is blocked
When the override is confirmed
Then the decision state becomes Adopt <Variant> (Overridden) until the exception expires, and the UI labels the decision as Overridden with expiry timestamp
And an audit record is created with user, attestation text, metrics at time of override, thresholds, mode, and expiry
And exports/API include override=true, attestorId, attestationText, overrideTimestamp, expiryTimestamp
When the exception window expires
Then the decision automatically reverts to the safety-gated state and override metadata is preserved in the audit trail
Export and Audit Trail Propagation of Safety Overlays
Given a user exports Winner Call results via CSV or API
Then for each variant include: decision outcome, decision mode, lift%, lift CI, form-flag delta%, SafeHold delta%, per-stratum pass/fail summary, rationale text (if any), thresholds applied, baseline definition, and override metadata (if any)
And numeric fields use consistent units and nulls for unavailable metrics; headers are documented and stable
When a decision is recomputed due to config changes or data refresh
Then the audit trail appends a new version with full gating inputs/outputs, preserving prior versions immutably with timestamps and user/process IDs
And export endpoints include the current version by default and allow requesting a specific versionId
Insufficient or Noisy Data Safeguards
Given per-stratum baseline sample size < configured.minSample OR CI width > configured.maxCIWidth OR a safety metric is unavailable
When generating the Winner Call
Then in Conservative and Balanced modes treat the stratum as failing safety (veto); in Fast mode downgrade to Needs Review
And show rationale “Insufficient data” with the specific deficiency (e.g., n, CI width, missing metric)
And display metric values as — with tooltip “Insufficient data”; exports use null for the missing values
And no “Adopt” banner is shown unless all applicable strata meet minimum data quality and safety limits
Adopt A Banner and Shareable Evidence
"As a clinic owner, I want a simple adoption banner and shareable proof so that I can communicate defensible outcomes quickly to owners and payers."
Description

Render a prominent, context-aware banner (e.g., “Adopt A”) when decision thresholds are satisfied, with direct actions to adopt and to share evidence. Generate a de‑identified, branded evidence package (PDF and secure link) containing lift %, confidence ranges, safety overlays, sample sizes, active decision mode, eligibility criteria, timestamps, and method/version identifiers. Ensure accessibility, mobile responsiveness, and localization readiness. Exclude PHI and include a verification QR code linking to a live evidence page. Provide role-based access controls for sharing with owners and payers.

Acceptance Criteria
Winner Banner Renders on Threshold Met
Given the active decision mode is set to Conservative, Balanced, or Fast and decision thresholds are satisfied with Variant A as the winner
When the clinician opens the Winner Call view
Then a prominent banner appears at the top with the text "Adopt A" and displays the active decision mode label
And the banner contains two primary actions: "Adopt" and "Share Evidence"
And if thresholds are not satisfied, the banner is not rendered
And if Variant B is the winner, the banner text reads "Adopt B"
And if live thresholds drop below required values, the banner hides within one UI refresh cycle
Evidence Package Content and Formats
Given the clinician clicks "Share Evidence" from the banner
When the system generates the evidence package
Then a de-identified, branded PDF is produced and a secure share link is created
And both the PDF and the live evidence link include: lift percentage, confidence range, safety overlays (form-flag rate, SafeHold pauses), sample sizes (per arm and total), active decision mode, eligibility criteria, timestamps (analysis window and generation time), and method/version identifiers
And the PDF uses MoveMate branding and optional clinic branding (if configured) without introducing PHI
And the PDF embeds a verification QR code that resolves to the live evidence page
And the secure link is served over HTTPS and requires a valid access token to view
QR Verification and Live Evidence Page Consistency
Given a user scans the verification QR code in the evidence PDF When the live evidence page loads Then it displays method/version identifiers, decision mode, lift %, confidence range, safety overlays, sample sizes, eligibility criteria, and timestamps that match the values encoded in the PDF And a "Verified" indicator is shown when identifiers match And if any identifier mismatches, an integrity warning is displayed and adoption/share actions are disabled until regenerated
PHI Exclusion and De-identification Enforcement
Given evidence is generated for any cohort When the PDF and live evidence page are produced Then no PHI/PII (e.g., names, full dates of birth, phone numbers, emails, MRNs, device IDs, face images) is present in the content or metadata And only aggregated, de-identified statistics are shown And automated PHI scanning on the artifacts returns zero findings And if PHI is detected pre-release, generation is aborted and the user sees a non-PHI error message with remediation guidance
Role-Based Sharing and Access Control
Given the current user role is Clinician-Owner or Admin When the user selects "Share Evidence" Then share controls are enabled and a time-bound, permission-scoped link is created And recipients with Owner or Payer roles can access the live evidence after authenticating or presenting a valid token And users without required roles receive HTTP 403 with no evidence data returned And the owner can revoke the link and subsequent access attempts fail within 60 seconds of revocation And all share and access events are audit-logged with timestamp, actor, and outcome
Accessibility Compliance for Banner and Sharing
Given a user navigates the Winner Call banner and Share Evidence flow using assistive technologies When interacting via keyboard and screen reader Then all operable elements are reachable by keyboard with a visible focus order that matches visual order And accessible names/roles/states are exposed and the banner is announced with an appropriate live region And text and interactive elements meet WCAG 2.2 AA contrast (>= 4.5:1 for text) And QR images include alt text describing their verification purpose And the generated PDF is tagged for reading order and link descriptions
Mobile Responsiveness and Localization Readiness
Given device widths of 320px, 375px, 768px, and 1024px When viewing the banner and Share Evidence flow Then the layout adapts without horizontal scrolling and primary actions remain visible; tap targets are >= 44x44dp And the PDF and secure link are retrievable and readable on mobile devices And all user-facing strings are sourced from i18n resources with en-US available and placeholders for additional locales And dates, times, and numbers render according to the active locale And RTL layout renders without clipping or truncation
Audit Trail and Parameter Versioning
"As a payer liaison, I want a complete audit trail of Winner Calls so that decisions are defensible, reproducible, and compliant during audits."
Description

Capture immutable logs of all inputs, parameters, and outputs that contribute to a Winner Call, including dataset snapshots (hashes), inclusion/exclusion rules, decision mode, thresholds, statistical method version, safety rules, user actions (e.g., overrides), and generated artifacts. Enable retrieval by experiment, exercise, or time window; support export for compliance reviews; and allow deterministic replays to reproduce decisions. Implement tamper‑evident storage with retention policies aligned to clinic and payer requirements.

Acceptance Criteria
Log Complete Winner Call Inputs and Outputs
Given a Winner Call is executed for an experiment and exercise When the decision is finalized (including any user override) Then the system records one immutable audit event containing: experiment_id, exercise_id, decision_mode, thresholds, inclusion/exclusion rules, safety_rules_version/signature, statistical_method_version, user_id, request_id, start/end timestamps And the audit event stores computed outputs: winner_label, lift_percent, confidence_interval, safety_overlay metrics (form_flag_rate, safehold_pause_count), and generated_artifact references with content hashes And user actions (override/confirm) are captured with reason text, actor_id, and timestamp And the event is assigned a unique audit_id and write-once checksum And the event is retrievable via audit API within 5 seconds of write
Dataset Snapshot Hashing for Reproducibility
Given a Winner Call requires input datasets derived from queries When the evaluation job starts Then the system materializes each dataset query to a canonical, ordered serialization and computes a SHA-256 hash And stores the query text, execution timestamp, row_count, and dataset_hash per source And stores an aggregate_manifest_hash computed over all dataset_hash values and parameter versions And any subsequent replay with the same parameters must produce identical dataset_hash and aggregate_manifest_hash values
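The hashing scheme above can be sketched as follows. This is a minimal illustration, not the production pipeline: `dataset_hash` and `aggregate_manifest_hash` are hypothetical names, and sorted JSON rows are one assumed canonical serialization.

```python
import hashlib
import json

def dataset_hash(rows):
    """Hash a materialized dataset: serialize each row deterministically,
    sort into a canonical order, and take SHA-256 of the result."""
    canonical = sorted(json.dumps(r, sort_keys=True) for r in rows)
    return hashlib.sha256("\n".join(canonical).encode("utf-8")).hexdigest()

def aggregate_manifest_hash(dataset_hashes, parameter_versions):
    """Combine per-source dataset hashes and parameter versions into one
    manifest hash; any change in any input changes the manifest."""
    payload = json.dumps(
        {"datasets": sorted(dataset_hashes), "params": parameter_versions},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

Because rows are sorted after serialization, a replay that retrieves the same rows in a different order still reproduces the identical hash.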
Tamper-Evident Storage Integrity
Given audit events are written to the audit store When an event is persisted Then the event is chained with the previous event via a hash pointer and signed by the service key And any attempted modification or deletion by non-privileged roles is denied and logged as a security event And a daily integrity verification job validates the chain (no missing links, signature valid) and emits a pass/fail report And redactions (when legally required) create a tombstone record preserving metadata, reason_code, and hash link continuity
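One way to realize the hash-pointer chaining described above is sketched below. This is illustrative only: `append_event` and `verify_chain` are hypothetical names, and the service-key signature step is omitted.

```python
import hashlib
import json

GENESIS = "0" * 64  # hash pointer for the first event in the chain

def append_event(chain, payload):
    """Append an audit event, chaining it to the previous event's hash."""
    prev_hash = chain[-1]["event_hash"] if chain else GENESIS
    body = json.dumps(payload, sort_keys=True)
    event_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    chain.append({"prev_hash": prev_hash,
                  "payload": payload,
                  "event_hash": event_hash})
    return chain

def verify_chain(chain):
    """Recompute every link; any edited, reordered, or missing event
    breaks verification (the daily integrity job's core check)."""
    prev = GENESIS
    for event in chain:
        body = json.dumps(event["payload"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if event["prev_hash"] != prev or event["event_hash"] != expected:
            return False
        prev = event["event_hash"]
    return True
```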
Multi-Dimensional Audit Retrieval (Experiment/Exercise/Time)
Given an auditor queries the audit API When filtering by experiment_id, exercise_id, and a time window Then only matching records are returned, sorted by timestamp descending by default And the API supports pagination (limit/offset) and returns total_count And for result sets up to 10,000 records, the p95 latency is <= 2 seconds And each record includes minimal fields for listing and a link to full details
Deterministic Replay Produces Identical Winner Call
Given an audit_id of a prior Winner Call When a deterministic replay is requested Then the system reconstructs the exact environment: parameter versions, safety rules, decision mode, thresholds, and dataset snapshot identified by stored hashes And re-executes the decision pipeline using the recorded statistical method version And the replayed outputs (winner_label, lift_percent, confidence_interval, safety overlays) match the original within defined tolerance (exact match for hashes; numeric fields within 0.0001) And if any dependency is missing or mismatched, the replay fails with a clear error and no partial state is committed And the replay action is itself logged as a new audit event linked to the original
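The tolerance rule for replay comparison can be illustrated with a small helper. `replay_matches` is a hypothetical name, and the sketch assumes flat dictionaries of outputs: string fields (hashes, labels) must match exactly, numeric fields within the configured tolerance.

```python
def replay_matches(original, replayed, tol=0.0001):
    """Compare replayed Winner Call outputs against the original record."""
    for key, orig_val in original.items():
        new_val = replayed.get(key)
        if isinstance(orig_val, (int, float)) and not isinstance(orig_val, bool):
            # Numeric outputs (lift, CI bounds) may differ by float noise.
            if new_val is None or abs(orig_val - new_val) > tol:
                return False
        elif new_val != orig_val:
            # Hashes, labels, and version strings must match exactly.
            return False
    return True
```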
Compliance Export Package Generation
Given a compliance reviewer requests an export for a specified experiment/exercise and time window When the export job runs Then the system produces a package containing: raw audit events (JSONL), parameter/version manifests, dataset and artifact hashes, integrity chain proofs, and a human-readable summary (PDF) And optional PHI fields are excluded or pseudonymized per selected profile, with a disclosure manifest included And the package is signed, assigned a checksum, stored securely, and a time-limited (7 days) download URL is generated for authorized users And the export action and access are logged in the audit store
Retention Policy Enforcement
Given clinic and payer retention policies are configured (e.g., 7 years active, 3 years archive) When an audit event reaches its retention threshold Then the system applies the configured action (archive or purge) automatically within 24 hours And purges produce a verifiable proof-of-deletion record (hash of deleted content and timestamp) preserved in the ledger And legal holds override deletion until the hold is released, with hold reasons tracked And changes to retention settings apply prospectively and are logged with actor, old_value, new_value, and timestamp
Data Eligibility and Quality Rules
"As a therapist, I want clear eligibility and quality rules so that Winner Calls are based on trustworthy data and I know how to reach a decision state."
Description

Define and enforce data eligibility criteria for inclusion in Winner Calls: minimum sample sizes per arm, per‑session rep minimums, device integrity and signal quality checks, exclusion of outliers and corrupted sessions, and handling of missing data. Provide real‑time counters showing eligible vs. excluded data with human‑readable reasons. Block decisions when criteria are unmet and surface guidance (e.g., additional sessions required) to reach eligibility. Allow clinic-level overrides within safe bounds and log all changes to the audit trail.

Acceptance Criteria
Real-time Eligibility Counters with Human-Readable Reasons
Given an active A/B comparison and eligibility rules are configured When a session is ingested, edited, or reprocessed Then the eligible and excluded counters per arm update within 2 seconds at p95 latency And the sum of eligible_count + excluded_count equals total_ingested_count per arm And excluded breakdown shows counts by reason in {Below rep minimum, Low signal quality, Outlier, Missing data, Manually excluded} with human-readable labels
Minimum Per-Arm Sample Size Gating of Winner Call
Given T_arm_min_sessions is configured And arm A has eligible_sessions = a and arm B has eligible_sessions = b When min(a, b) < T_arm_min_sessions Then the Winner Call decision action is disabled and any “Adopt A” banner is suppressed And a guidance panel displays the per-arm deficit (T_arm_min_sessions − a or − b) and the message “Collect additional eligible sessions to reach minimum sample size” When min(a, b) ≥ T_arm_min_sessions Then the decision action becomes enabled without page reload and the blocker message disappears
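The gating logic above reduces to a small check; this sketch assumes eligible-session counts per arm are already computed, and `decision_gate` is an illustrative name.

```python
def decision_gate(eligible_a, eligible_b, t_arm_min_sessions):
    """Return (enabled, deficits): the Winner Call action is enabled only
    when both arms meet the minimum; deficits feed the guidance panel."""
    deficits = {
        "A": max(0, t_arm_min_sessions - eligible_a),
        "B": max(0, t_arm_min_sessions - eligible_b),
    }
    enabled = min(eligible_a, eligible_b) >= t_arm_min_sessions
    return enabled, deficits
```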
Per-Session Rep Minimum Enforcement
Given T_rep_min is configured When a session’s valid_reps < T_rep_min Then the session is excluded with reason “Below rep minimum (valid_reps/T_rep_min)” and does not contribute to eligible counts When valid_reps ≥ T_rep_min Then the session passes this rule and remains subject to other eligibility checks
Device Integrity and Signal Quality Gate
Given device_integrity_check = pass|fail and signal_quality_score is computed per session And thresholds T_quality_min and optional T_drop_max are configured When device_integrity_check = fail OR signal_quality_score < T_quality_min OR drop_frame_rate > T_drop_max Then the session is excluded with reason “Low signal quality/Device integrity failed” and the measured metrics are stored for audit When all integrity and quality thresholds are met Then the session passes this rule and remains subject to other eligibility checks
Outlier and Corrupted Session Exclusion
Given outlier_method = zscore and z_cutoff is configured for the primary outcome metric And outlier detection is computed within each arm after applying rep-min and quality gates When a session’s metric has |z| > z_cutoff within its arm Then the session is excluded with reason “Outlier” When the CV pipeline flags a session as corrupted (e.g., missing/invalid media or metadata) Then the session is excluded with reason “Corrupted session”
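The within-arm z-score rule can be sketched as follows, assuming the rep-min and quality gates have already been applied. The sample standard deviation is one assumed choice, and `flag_outliers` is an illustrative name.

```python
from statistics import mean, stdev

def flag_outliers(metric_by_session, z_cutoff):
    """Flag sessions whose primary-outcome metric has |z| > z_cutoff,
    computed within one arm. Returns the set of excluded session IDs."""
    values = list(metric_by_session.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return set()  # identical values: nothing can be an outlier
    return {sid for sid, v in metric_by_session.items()
            if abs((v - mu) / sigma) > z_cutoff}
```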
Missing Data Handling and Eligibility Recalculation
Given missingness_threshold M% is configured for required metrics When a session’s primary metric missingness > M Then the session is excluded with reason “Missing data” When missingness ≤ M Then the missing values are imputed using the per-arm median for that metric and the session remains eligible (subject to other rules) And the number of sessions with imputed values is counted and exposed in the session details
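The missing-data rule might look like the following sketch. One open point in the criteria is assumed here: the per-arm median is taken over all observed readings, including those from sessions that end up excluded.

```python
from statistics import median

def handle_missing(sessions, m_threshold):
    """sessions: {session_id: [metric readings, None = missing]}.
    Sessions with missingness > m_threshold are excluded ("Missing data");
    remaining gaps are imputed with the per-arm median of observed values.
    Returns (eligible_sessions, excluded_ids, n_imputed)."""
    observed = [v for vals in sessions.values() for v in vals if v is not None]
    arm_median = median(observed)
    eligible, excluded, n_imputed = {}, [], 0
    for sid, vals in sessions.items():
        missingness = sum(v is None for v in vals) / len(vals)
        if missingness > m_threshold:
            excluded.append(sid)
            continue
        n_imputed += sum(v is None for v in vals)
        eligible[sid] = [arm_median if v is None else v for v in vals]
    return eligible, excluded, n_imputed
```

The imputed-value count (`n_imputed`) is what the session details would surface per the last criterion.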
Clinic-Level Overrides Within Safe Bounds and Audit Logging
Given a user with Clinic Admin or delegated Override permission When they lower T_arm_min_sessions by more than 20% from the clinic default Then the system rejects the change and shows an error without altering eligibility When they lower T_arm_min_sessions by 20% or less and provide a non-empty reason (≥10 characters) Then the change is saved, eligibility is recomputed, and counters update within 2 seconds When they force-include an excluded session Then the number of force-included sessions per arm must be ≤5% of eligible sessions or ≤5 sessions (whichever is smaller), otherwise the action is blocked And all overrides (who, when, before→after, reason, scope) are written to the audit trail And any Winner Call affected by an override displays an “Overridden” badge with a link to the audit record
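The safe-bound checks can be sketched as a validator. `validate_overrides` is a hypothetical name and the exact error strings are assumptions; audit logging is out of scope for the sketch.

```python
def validate_overrides(default_min, new_min, reason,
                       force_included, eligible_count):
    """Return a list of rule violations (empty list = override allowed)."""
    errors = []
    if new_min < default_min * 0.8:
        # Lowering T_arm_min_sessions by more than 20% is out of bounds.
        errors.append("T_arm_min_sessions lowered by more than 20%")
    elif new_min < default_min and len(reason.strip()) < 10:
        errors.append("Override reason must be at least 10 characters")
    # Force-include cap: min(5% of eligible sessions, 5 sessions) per arm.
    cap = min(eligible_count * 0.05, 5)
    if force_included > cap:
        errors.append("Force-included sessions exceed the per-arm cap")
    return errors
```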
Dashboard Integration and Notifications
"As a therapist, I want Winner Calls visible in my dashboard with notifications so that I can adjust care pathways promptly when a variant proves better or unsafe."
Description

Embed Winner Call cards in therapist and admin dashboards showing current leader, lift %, confidence range, safety status, and projected time/sample remaining to decision. Provide in‑app, email, and optional SMS notifications when a decision is reached, blocked by safety, or reversed by new data. Support filtering by program, exercise, cohort, and date. Ensure responsive layouts across mobile and desktop to align with MoveMate’s telehealth workflows.

Acceptance Criteria
Therapist dashboard shows embedded Winner Call card
Given a therapist is logged in and has at least one active Winner Call evaluation on assigned programs When the therapist opens the dashboard Then the Winner Call card displays: current leader (A or B), lift percentage (signed, one decimal), confidence range (two percentages), safety status (form-flag rate and SafeHold status), and projected time and sample remaining to decision And Then all displayed values match the Winner Call backend for the evaluation at render time And Then values update within 60 seconds of a backend change or on page reload
Admin dashboard filtering by program, exercise, cohort, and date
Given an admin is viewing the Winner Call section on the admin dashboard When the admin applies filters for Program, Exercise, Cohort, and Date Range Then only cards matching all selected filters (logical AND) are shown And Then clearing a filter immediately updates results to reflect the change And Then the Date Range filter is inclusive of the start and end dates using the user’s timezone And Then an empty state is shown when no cards match the filters
Responsive layout across mobile and desktop
Given a device width of 414 px or less When the dashboard renders Then Winner Call card metrics stack vertically without horizontal scrolling, and all required metrics are visible without truncation And Given a device width of 1280 px or more When the dashboard renders Then metrics lay out side-by-side without truncation and the card fits within the dashboard grid And Then interactive targets are at least 44x44 px and text is at least 14 px for readability
In-app notification when a decision is reached
Given a Winner Call evaluation crosses the decision threshold When the backend records the decision event Then assigned therapists and relevant admins receive an in-app notification within 60 seconds And Then the notification includes: Adopt A/B, lift %, confidence range, and safety status at the time of decision And Then selecting the notification deep-links to the corresponding Winner Call card on the dashboard And Then duplicate in-app notifications are not created for the same event
Notifications and UI when blocked by safety
Given an evaluation becomes blocked by safety (e.g., high form-flag rate or active SafeHold) When the backend records the safety block event Then in-app and email notifications are sent to intended recipients according to their notification preferences within 2 minutes And Then the Winner Call card shows a "Blocked by Safety" status with the blocking reason(s) and current form-flag rate or SafeHold indicator And Then no Adopt banner is shown while the evaluation is blocked
Email and optional SMS delivery with preferences
Given a user has email enabled When a decision reached, safety block, or reversal event occurs Then an email is sent within 2 minutes containing the event type, current leader (if applicable), key metrics, and a link to the card And Given a user has opted in to SMS and has a valid phone number on file When the same events occur Then an SMS is sent within 2 minutes containing the event type, leader (if applicable), and a shortened link And Then users who disable SMS or email in preferences stop receiving those channels immediately And Then invalid or missing phone numbers suppress SMS sending without affecting email delivery
Notification and card update on decision reversal by new data
Given a previously decided Winner Call evaluation is reversed by new data When the backend records the reversal event Then in-app, email, and (if opted-in) SMS notifications are sent per user preferences within 2 minutes, indicating previous leader, new leader, and the reversal timestamp And Then the dashboard card updates to show the current leader and a reversal indicator with the reversal timestamp And Then only one set of notifications is sent per reversal event

Safety Sentinel

Continuous safety monitoring across both arms that watches high‑risk form flags, pain screeners, Range Guard nearing, and SafeHold events. Automatically pauses a risky variant, routes patients to a safer fallback, and notifies the clinician with context clips. Protects patients first while keeping your trial analyzable and compliant.

Requirements

Dual-Arm Risk Detection
"As a patient performing home exercises, I want the app to automatically detect unsafe arm movements in real time so that I can correct my form before I get hurt."
Description

Provide on-device, real-time computer-vision monitoring of both upper limbs during exercises to identify high-risk form patterns such as excessive joint angles, abrupt velocities, compensations, and left/right asymmetry. The engine computes joint kinematics, applies temporal smoothing and per-joint confidence scoring, and emits structured “risk events” with severity and rationale. It integrates with MoveMate’s rep counter and session timeline, runs offline on typical smartphones within strict latency limits, and exposes an event stream for Safety Sentinel automations.

Acceptance Criteria
Angle Threshold Breach Detected on Either Arm
Given an exercise definition with per-joint angle limits (e.g., shoulder abduction ≤ 120°, elbow flexion ≤ 150°) and both arms tracked at ≥0.70 confidence When the computed angle for any monitored joint on either arm exceeds its configured limit by >5° for ≥120 ms Then the engine emits a “risk event” within 150 ms of the breach with type = "AngleLimitBreach" And severity = "High" if excess >10°, else "Medium" if >5° and ≤10° And the event includes side (Left|Right), joint name, measuredAngleDeg, limitDeg, overByDeg, timestampMs, frameIndex, and confidence ≥0.70
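A simplified, single-joint sketch of the breach rule above (excess >5° held for ≥120 ms; High above 10° excess). Real detection would run per frame on both sides with confidence gating; the function name and event shape are illustrative.

```python
def detect_angle_breach(samples, limit_deg, min_over_deg=5.0,
                        min_duration_ms=120):
    """samples: list of (timestamp_ms, angle_deg) for one joint/side.
    Emit a risk event once the angle stays more than min_over_deg above
    the limit for at least min_duration_ms; otherwise return None."""
    breach_start, peak = None, None
    for ts, angle in samples:
        over = angle - limit_deg
        if over > min_over_deg:
            breach_start = ts if breach_start is None else breach_start
            peak = over if peak is None else max(peak, over)
            if ts - breach_start >= min_duration_ms:
                return {
                    "type": "AngleLimitBreach",
                    "severity": "High" if peak > 10 else "Medium",
                    "overByDeg": round(peak, 1),
                    "timestampMs": ts,
                }
        else:
            breach_start, peak = None, None  # dip below threshold resets dwell
    return None
```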
Abrupt Velocity Spike Detected and Classified
Given sampling at ≥30 FPS and velocity/jerk thresholds configured (e.g., angular velocity > 400°/s or jerk > 8000°/s²) When a monitored joint on either arm exceeds the configured velocity threshold for ≥2 consecutive frames or jerk threshold within 50 ms Then the engine emits a “risk event” with type = "AbruptVelocity" within 150 ms of detection And severity = "High" if velocity > 600°/s or jerk > 12000°/s²; "Medium" if above threshold but not High And the event includes side, joint, peakVelocityDegPerSec, peakJerkDegPerSec2, thresholdValues, timestampMs, and confidence
Left/Right Asymmetry Across Reps
Given at least 3 valid reps with both sides tracked at ≥0.70 confidence and asymmetry thresholds configured (e.g., peak angle delta > 15° or peak velocity delta > 25%) When the side-to-side difference in the chosen kinematic metric exceeds the configured threshold for ≥2 consecutive reps Then the engine emits a “risk event” with type = "Asymmetry" after the second offending rep within 200 ms And severity = "High" if delta > 25° (or >40% for percentage thresholds); "Medium" if above threshold but not High And the event includes leftMetric, rightMetric, deltaValue, metricName, offendingRepIds[], timestampMs, and confidence per side
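The consecutive-rep rule can be sketched as follows; names are illustrative, and the percentage-based velocity threshold from the criteria is omitted for brevity.

```python
def detect_asymmetry(reps, delta_threshold, consecutive_required=2):
    """reps: list of (rep_id, left_metric, right_metric), in session order.
    Emit an Asymmetry event once the side-to-side delta exceeds the
    threshold for the required number of consecutive reps; severity is
    High when the delta exceeds 25 (degrees), else Medium."""
    offenders = []
    for rep_id, left, right in reps:
        delta = abs(left - right)
        if delta > delta_threshold:
            offenders.append((rep_id, delta))
            if len(offenders) >= consecutive_required:
                worst = max(d for _, d in offenders)
                return {
                    "type": "Asymmetry",
                    "severity": "High" if worst > 25 else "Medium",
                    "deltaValue": worst,
                    "offendingRepIds": [r for r, _ in offenders],
                }
        else:
            offenders = []  # a clean rep resets the consecutive count
    return None
```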
Compensation Pattern Detection (Trunk Lean or Shoulder Shrug)
Given compensation rules active for the exercise (e.g., trunk lateral lean > 10° for ≥200 ms, scapular elevation > 15° for ≥150 ms) with torso and scapular landmarks tracked at ≥0.65 confidence When any configured compensation condition is met during a rep Then the engine emits a “risk event” with type = "Compensation" within 200 ms of condition onset And the event includes compensationType (e.g., TrunkLean, ShoulderShrug), measuredValue, thresholdValue, side (if applicable), timestampMs, and confidence And severity = "Medium" by default, upgraded to "High" if measuredValue exceeds threshold by >50% or persists >500 ms
Confidence Scoring and Smoothing Behavior
Given temporal smoothing enabled (e.g., 5–7 frame EMA) and a minimum post-smoothing confidence threshold of 0.60 per joint/side When raw kinematics fluctuate but the smoothed signal remains below risk thresholds Then no risk event is emitted (false positives suppressed) And when any joint/side confidence falls below 0.60 for >1 second Then no risk events of types AngleLimitBreach, AbruptVelocity, or Asymmetry are emitted for that joint/side during the low-confidence interval And the additional latency introduced by smoothing is ≤50 ms p95
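The suppression behavior can be sketched with a simple exponential moving average as one assumed realization of the 5-7 frame smoothing; the alpha value and function names are illustrative.

```python
def ema_smooth(values, alpha=0.3):
    """Exponential moving average over a per-frame angle signal."""
    smoothed, current = [], None
    for v in values:
        current = v if current is None else alpha * v + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

def gated_breach(values, confidences, limit, min_confidence=0.60):
    """Return frame indices that would emit a risk event: the *smoothed*
    signal must exceed the limit AND confidence must meet the floor, so
    one-frame spikes and low-confidence intervals are suppressed."""
    smoothed = ema_smooth(values)
    return [i for i, (s, c) in enumerate(zip(smoothed, confidences))
            if s > limit and c >= min_confidence]
```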
Structured Event Schema and Stream Integration with Rep Counter & Timeline
Given an active session with rep counter running and a subscriber connected to the local risk event stream When the engine emits any risk event Then the event payload validates against the schema with required fields: id(UUIDv4), sessionId, repId(nullable), type, severity(Low|Medium|High|Critical), side(Left|Right|Both|NA), joints[], metrics{}, rationale[], timestampMs, frameIndex, latencyMs, confidence, videoTimecodeMs(nullable) And the event is delivered to the subscriber in-order (by timestampMs) within 50 ms and tagged with the current repId when emission occurs during a rep And the same event appears on the session timeline at the correct timestamp (±33 ms) and does not interrupt or double-count reps
Real-time On-device Performance & Offline Operation
Given a typical offline smartphone (e.g., Pixel 5 / iPhone 11 or newer) in airplane mode at 30 FPS video capture When Dual-Arm Risk Detection is running concurrently with the rep counter Then per-frame inference time is ≤33 ms p95 and ≤40 ms p99 And end-to-end risk event latency (frame capture to event emitted) is ≤150 ms p95 and ≤200 ms p99 And dropped-frame rate remains ≤5% and CPU utilization averages ≤60% And no network calls are made; all functionality remains available offline
Configurable Safety Protocols
"As a clinician, I want to configure safety thresholds and fallback rules per patient and exercise so that monitoring reflects individual limitations and protocols."
Description

Enable clinicians to define per-patient and per-exercise safety rules, including risk thresholds, asymmetry tolerances, Range Guard margins, and mapping of risk types to fallback variants. Provide presets by diagnosis and phase of rehab, plus versioned templates for small clinics. Allow in-session override and post-session adjustments with tracked change history. Settings propagate to the detection engine and Safety Sentinel automations without requiring app updates.

Acceptance Criteria
Per-Patient Risk Threshold Configuration
Given a clinician opens a patient’s Safety Protocols, When they enter thresholds for joint torque (Nm), knee valgus angle (deg), and rep velocity variance (%), Then the UI enforces allowed ranges (torque 0–500, valgus 0–30, variance 0–100) and displays units. Given valid entries, When Save is tapped, Then values persist to the patient record and remain after app reload and device reconnect. Given saved thresholds, When a session starts for that patient, Then the detection engine uses these values, evidenced by a simulated input exceeding the threshold triggering a risk event in logs within 1 second. Given patient-specific settings, When switching to a different patient, Then that patient’s own settings or clinic defaults are shown, not the prior patient’s values. Given auditing is required, When thresholds are saved, Then an audit log entry records user, timestamp (UTC), old_value, new_value, and source (web/mobile API).
Per-Exercise Asymmetry Tolerance
Given an exercise with bilateral tracking, When the clinician sets asymmetry tolerance T% for that exercise, Then T must be between 0 and 50 in 1% increments and is stored per-exercise. Given T% is set, When the moving average side-to-side difference over the last 3 completed reps exceeds T, Then a High Asymmetry flag is raised for that exercise within 1 second. Given an active asymmetry flag, When the difference drops below T for 3 consecutive reps, Then the flag auto-clears and the clear event is logged. Given multiple exercises in a session, When switching exercises, Then each exercise applies its own stored T% without carryover. Given reporting, When the session summary is generated, Then asymmetry events include exercise ID, T value, timestamps, and consecutive-rep counts.
Range Guard Margin Configuration and Presets
Given a diagnosis/phase preset (e.g., ACL Phase II) is applied, When viewing the patient’s Safety Protocols, Then Range Guard margin is populated from the preset and is editable. Given Range Guard margin mode is selectable, When the clinician chooses Degrees with value D or % of baseline with value P, Then the engine uses D degrees or P% of the patient’s recorded baseline ROM for that joint/exercise. Given margins are configured, When live motion approaches within the margin for 2 consecutive frames, Then a Range Guard Nearing warning displays; When the hard stop is reached, Then SafeHold triggers and the rep is not counted. Given preset updates, When a clinic admin edits a preset, Then new sessions using that preset reflect the change while in-progress sessions and already-scheduled ones retain previously attached values unless manually refreshed. Given validation, When saving margins, Then negative values or percentages >100 are rejected with inline error.
Risk Type to Fallback Variant Mapping and Auto-Routing
Given mapping UI is available, When the clinician maps risk type “valgus” to fallback variant “wall squat,” Then the mapping saves and is versioned with effective-from timestamp. Given a mapped risk occurs during an exercise, When a “valgus” event is detected, Then Safety Sentinel auto-pauses the current variant within 1 second and switches instructions to the mapped fallback, resuming counting with reps tagged as fallback. Given auto-routing, When the switch occurs, Then a clinician notification is sent within 5 seconds including patient ID, exercise IDs (from→to), risk type, timestamp, and a context clip spanning 5 seconds before to 5 seconds after the event. Given multiple applicable mappings, When conflicts arise, Then the highest-severity mapping (per clinic-defined order) is chosen and the decision rationale is logged. Given session integrity, When auto-routing happens, Then analytics distinguish primary vs fallback reps and exclude risky variant reps from goal tallies unless configured otherwise.
Versioned Clinic Templates
Given a clinic template v1.2.0 exists, When the admin clones and edits parameters, Then a new version v1.3.0 is created with required change notes and author attribution. Given patients are assigned to v1.2.0, When v1.3.0 is published, Then existing patients remain on v1.2.0 until explicitly upgraded; new assignments default to v1.3.0. Given rollback is needed, When the admin selects v1.1.0 and confirms, Then downgrades are scheduled for the next session and blocked mid-session. Given auditability, When exporting template history, Then the export includes version, timestamp (UTC), diff of parameters, impacted patient list, and approver signature if required by clinic policy. Given integrity, When assigning a template to a patient, Then the assigned version stamp is stored with the patient profile and session records for traceability.
In-Session Override and Post-Session Adjustments with Audit Trail
Given an active session, When the clinician overrides a threshold (e.g., +5° Range Guard) for the current exercise, Then the change takes effect within 1 second and is labeled session-scoped. Given an override occurs, When the session ends, Then the clinician is prompted to persist the change to the patient protocol or discard; persisting requires a reason note (min 10 chars) and user confirmation. Given history must be tracked, When viewing patient safety history, Then entries show who, what parameter, before→after values, scope (session/global), timestamp (UTC), device ID, and optional reason. Given reporting, When exporting the session PDF/JSON, Then overrides are included with timestamps aligned to video and detection events. Given permissions, When a user without override rights attempts a change, Then the action is blocked and an access-denied event is logged.
No-App-Update Propagation to Detection Engine and Automations
Given a configuration change is saved in the dashboard, When the target device is online, Then the detection engine receives and applies new parameters within 30 seconds without requiring an app update or app restart. Given the device is offline, When it reconnects, Then queued changes apply before the next set starts and the session header notes “Protocol updated” with version/checksum. Given backward compatibility, When a device on N-1 app version connects, Then it either applies the new parameters or reports a compatibility error listing unsupported fields; in either case the server stores the outcome. Given propagation occurs, When parameters apply, Then both server and device logs include the parameter hash, template/patient version, and application timestamp for traceability. Given safety, When conflicting changes exist (server vs local session override), Then session-scoped overrides take precedence until session end, after which server values resume.
Range Guard Calibration
"As a patient, I want the app to warn me when I’m approaching my unsafe range of motion so that I can stay within my prescribed boundaries."
Description

Calibrate safe joint range-of-motion per patient by capturing baseline comfortable ROM and clinician-prescribed limits, then establish dynamic warning bands for each joint. During sessions, monitor approach to limits and emit graded alerts (approaching vs exceeded) with clear on-screen cues and optional audio prompts. Support left/right independent limits, gradual adaptation based on verified performance, and seamless integration with rep counting and risk event aggregation.

Acceptance Criteria
Baseline ROM Capture and Prescribed Limit Entry
Given a patient and clinician start Range Guard calibration for a specific joint and side, When the patient performs at least 3 comfortable range explorations and capture confidence is ≥ 0.80, Then the system records the peak angle per trial, computes the median peak as Baseline Comfortable ROM, and displays it to the clinician. Given the Baseline Comfortable ROM has been computed, When the clinician enters a safe limit in degrees or accepts the baseline value, Then the value is saved per joint and side, persists to the patient's profile, and is available for the next session. Given limits are saved, When the clinician reviews calibration later, Then the previously saved values are retrievable and editable with audit logging (user, timestamp, old → new). Given capture confidence remains < 0.80 for > 2 seconds during calibration, When baseline computation would otherwise proceed, Then the system prompts to adjust camera or retry and does not save a baseline until confidence recovers.
Dynamic Warning Bands Computation
Given a saved limit L (degrees) for a joint-side, When computing warning bands, Then the Approaching band width W = max(5°, 0.10 × |L|) and Exceeded is defined as measured angle > L. Given the current measured angle A(t), When A(t) remains within [L − W, L] for ≥ 300 ms, Then the state is Approaching, and when A(t) > L persists for ≥ 200 ms, Then the state is Exceeded. Given state transitions near thresholds, When A(t) oscillates around a boundary, Then 1° hysteresis and a 250 ms debounce are applied to prevent flicker. Given state changes occur, When the UI updates, Then color cues are Green (Safe), Amber (Approaching), Red (Exceeded) and refresh at ≥ 20 Hz.
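The band widths, dwell times, hysteresis, and debounce above combine into a three-state machine. The sketch below is one plausible implementation under those numbers; state names and method signatures are illustrative, not the product's internal API:

```python
def band_width(limit_deg):
    """Approaching band width W = max(5 degrees, 10% of |L|)."""
    return max(5.0, 0.10 * abs(limit_deg))

class RangeGuardState:
    """Safe/Approaching/Exceeded state machine with 1 degree of hysteresis
    and a 250 ms debounce between state changes (timestamps in ms)."""

    HYSTERESIS_DEG = 1.0
    DEBOUNCE_MS = 250
    APPROACH_DWELL_MS = 300  # Approaching must persist >= 300 ms
    EXCEED_DWELL_MS = 200    # Exceeded must persist >= 200 ms

    def __init__(self, limit_deg):
        self.limit = limit_deg
        self.width = band_width(limit_deg)
        self.state = "Safe"
        self._candidate = "Safe"
        self._since_ms = 0
        self._last_change_ms = -self.DEBOUNCE_MS

    def _raw_state(self, angle):
        # Hysteresis: once out of Safe, require 1 degree of clearance to return.
        margin = self.HYSTERESIS_DEG if self.state != "Safe" else 0.0
        if angle > self.limit:
            return "Exceeded"
        if angle >= self.limit - self.width - margin:
            return "Approaching"
        return "Safe"

    def update(self, angle, now_ms):
        raw = self._raw_state(angle)
        if raw != self._candidate:
            self._candidate, self._since_ms = raw, now_ms
        dwell = now_ms - self._since_ms
        needed = {"Approaching": self.APPROACH_DWELL_MS,
                  "Exceeded": self.EXCEED_DWELL_MS}.get(self._candidate, 0)
        if (self._candidate != self.state and dwell >= needed
                and now_ms - self._last_change_ms >= self.DEBOUNCE_MS):
            self.state, self._last_change_ms = self._candidate, now_ms
        return self.state
```

At a 20 Hz refresh, `update` would be called roughly every 50 ms with the latest measured angle; the dwell and debounce checks are what keep a boundary-hugging angle from flickering the banner.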
Graded Alerts and Auto‑Pause During Session
Given the state becomes Approaching, When Approaching persists for ≥ 300 ms, Then an on-screen amber banner with joint and side appears, and, if enabled for Approaching, a soft tone plays once per rep. Given the state becomes Exceeded, When Exceeded persists for ≥ 200 ms, Then the current exercise variant auto-pauses within 500 ms, a red overlay with guidance appears, and the safer fallback variant is recommended within 2 seconds. Given an Exceeded event occurred, When the auto-pause triggers, Then a Range Guard risk event is emitted including joint, side, limit, peak angle, timestamp, and session ID.
Left/Right Independent Limits Application
Given distinct saved limits for Left and Right of the same joint, When a unilateral exercise for the Left side runs, Then the Left limit is applied and the Right limit is not used. Given a bilateral exercise with side-specific tracking, When both sides are measured, Then alerts and states are computed per side independently and displayed without conflation. Given the clinician edits the Right limit during or between sessions, When the session continues, Then the Left limit remains unchanged and active.
Adaptive Limit Proposal Based on Verified Performance
Given clinician-prescribed maximum M and current active limit L for a joint-side, When the patient completes ≥ 2 consecutive sessions with ≥ 10 valid reps per side, zero Exceeded events, and at least 5 reps reaching ≥ 90% of L, Then the system proposes increasing L by min(5°, 5% of L) not to exceed M. Given an adaptation proposal is generated, When the clinician reviews it, Then acceptance is required before L changes; upon acceptance, the new L is saved with audit logging; upon rejection, L remains unchanged and the proposal is archived. Given qualifying performance repeats after a rejection, When criteria are met again, Then a new proposal is generated no more than once per week per joint-side.
Rep Counting and Risk Event Aggregation
Given rep counting is active, When Approaching or Exceeded states occur, Then reps continue to be counted exactly once each; the rep in which Exceeded occurs is labeled with a risk tag without double-counting. Given a Range Guard Exceeded event is emitted, When the event is logged, Then a context video clip of 6 seconds (3 s pre, 3 s post) at ≥ 24 fps is attached and available to the clinician within 30 seconds. Given multiple Range Guard events in a session, When events are aggregated, Then they appear in Safety Sentinel with counts per joint-side and timestamps, and are included in the session export/report.
Audio Prompt Toggle and On‑Screen Cues
Given session settings are open, When configuring audio prompts, Then the user can choose Off, Exceeded Only, or Approaching and Exceeded; the selection persists for the patient and can be overridden per session. Given audio prompts are Off, When Approaching or Exceeded states occur, Then no tones play while all visual cues remain displayed. Given audio prompts are set to Exceeded Only, When Exceeded occurs, Then a distinct alert tone plays once per event and not more than once every 2 seconds.
Auto Pause & Safe Fallback
"As a patient, I want the app to pause a risky exercise and guide me to a safer alternative so that I can continue safely without guessing."
Description

When a high-severity risk event or limit breach occurs, automatically trigger a SafeHold that pauses timers and rep counting, overlays corrective guidance, and routes the user to a clinician-approved safer variant. Preserve session continuity by logging the adaptation, reason, and outcome, and by resuming counts on the new variant without losing analytics. Provide a clear resume/abort flow, with optional countdown and confirmation, and prevent oscillation via cooldown rules.

Acceptance Criteria
Auto SafeHold on High-Severity Risk or Limit Breach
Given an active exercise set with timers and rep counting running And a detected event is high severity or a defined limit breach When the event is raised by the risk detector Then the app triggers SafeHold within 300 ms And pauses the exercise timer and rep counting immediately And no additional reps are recorded after the trigger timestamp And if a SafeHold is already active, no duplicate SafeHold is created
Guidance Overlay and Safe Fallback Routing
Given SafeHold is active for exercise variant V1 When SafeHold engages Then a corrective guidance overlay appears within 500 ms And the overlay names and previews the clinician-approved safer variant V2 And after confirmation or countdown expiry, the system switches to V2 without requiring additional navigation And if no clinician-approved V2 exists, the user is offered Abort with a clear rationale
Resume/Abort Flow with Optional Countdown
Given SafeHold is active and a safer variant V2 is available When the resume/abort overlay is displayed Then a countdown (default 5 s, configurable 3–10 s) is shown And tapping Resume Now immediately starts V2 And tapping Abort ends the set and safely finalizes the session And if no action is taken, auto-switch to V2 occurs at countdown 0 with an audible/haptic cue 1 s prior And all controls are accessible via screen reader and large touch targets
Session Continuity and Analytics Preservation
Given a switch from V1 to V2 due to SafeHold When V2 begins Then the session ID remains unchanged And pre-SafeHold reps and elapsed time remain in session totals And V2 rep count starts at 0, while session totals include V1 + V2 reps And a variant-change marker with timestamp is recorded for analytics And no reps are double-counted in the ±1 second window around the switch
Event Logging and Audit Trail
Given a SafeHold-triggered adaptation occurs When the event is persisted Then the log includes patient_id, session_id, exercise_id, original_variant_id, fallback_variant_id, risk_type, risk_severity, model_confidence, trigger_timestamp, trigger_source, reason_text, cooldown_applied, user_choice, outcome, and app_version And the entry is saved to durable local storage within 200 ms And it is queued and synced to the server upon connectivity with exponential backoff up to 24 h And the clinician feed reflects the event within 60 s of connectivity
Cooldown and Anti-Oscillation Rules
Given a SafeHold switch from V1 to V2 just occurred When additional high-severity events related to V1 happen within the cooldown window Then the app must not switch back to V1 automatically And SafeHold retriggers are rate-limited to at most 1 per 10 seconds And duplicate events from the same posture segment are deduplicated And after 3 SafeHold events within 5 minutes, the user is prompted to Abort and contact the clinician
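The rate limit and escalation rule above amount to a sliding-window governor. A minimal sketch (class and return-value names are assumptions, not the shipped design):

```python
import collections

class SafeHoldGovernor:
    """Anti-oscillation sketch: at most 1 SafeHold retrigger per 10 seconds,
    and an escalation prompt after 3 SafeHolds within 5 minutes."""

    RETRIGGER_GAP_S = 10
    ESCALATION_WINDOW_S = 300
    ESCALATION_COUNT = 3

    def __init__(self):
        self._events = collections.deque()  # timestamps of accepted SafeHolds

    def on_high_severity_event(self, now_s):
        """Returns 'suppress', 'safehold', or 'escalate'."""
        if self._events and now_s - self._events[-1] < self.RETRIGGER_GAP_S:
            return "suppress"  # rate-limited: too soon after the last SafeHold
        self._events.append(now_s)
        # Drop events that have aged out of the 5-minute escalation window.
        while self._events and now_s - self._events[0] > self.ESCALATION_WINDOW_S:
            self._events.popleft()
        if len(self._events) >= self.ESCALATION_COUNT:
            return "escalate"  # prompt to Abort and contact the clinician
        return "safehold"
```

Deduplication of events from the same posture segment, and the rule that V1 is never auto-restored during cooldown, would layer on top of this counter.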
Offline and Failure Resilience
Given a SafeHold occurs while the device is offline or a fallback asset fails to load When a switch to V2 would occur Then timers and rep counting remain paused until a safe path is established And the guidance overlay and resume/abort flow function without network calls And the adaptation log is queued for later sync And if V2 assets are unavailable, the app selects the next safest available local variant; if none exist, the app maintains pause and recommends Abort
Pain Screener Intercept
"As a clinician, I want the app to capture quick pain ratings when risk events occur so that I can differentiate technique issues from pain-limited performance."
Description

Insert a lightweight, in-session pain check when repeated risk events or threshold conditions occur. Present a single-tap 0–10 pain scale with optional short note, then adapt the session (e.g., reduce reps, switch variant, or suggest rest) based on clinician-configured rules. Capture time-aligned pain data alongside form metrics to distinguish technique issues from pain-limited performance, and operate offline with deferred sync.

Acceptance Criteria
Trigger on Repeated Risk Events
Given an active exercise session with computer-vision form monitoring enabled And clinician config sets intercept_trigger = 3 high-risk flags within 60 seconds When the system detects 3 high-risk flags within 60 seconds for the same exercise variant Then the current exercise is paused within 1 second And the in-session 0–10 pain screener is presented within 2 seconds And at most 1 pain screener is presented per 5-minute window per exercise And the intercept event is logged with timestamp, session_id, exercise_id, variant_id, and risk_event_ids
Inline Pain Screener UI
Given the pain screener is presented Then the user can select an integer 0–10 via single tap And an optional free-text note field is available (max 120 characters) And the Submit button remains disabled until a score is selected And the interaction is completable in ≤2 taps (score + submit) And accessibility labels exist for each score and controls meet WCAG AA contrast And median render-to-interact time on target devices is ≤500 ms
Rule-Driven Session Adaptation
Given clinician-configured rules exist:
- score ≥ 7: pause exercise, suggest 2-minute rest, mark SafeHold
- score 4–6: switch to safer variant B and reduce remaining reps by 50%
- score 1–3: keep variant and reduce remaining reps by 25%
- score = 0: continue unchanged
When the patient submits score S Then the matching rule is applied within 2 seconds And the patient sees a rationale message that cites the rule name within 2 seconds And the session log records applied_rule_id, prior_plan, adapted_plan, and user-visible message
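The example rule set above maps directly to a threshold dispatch. A sketch using the sample rules from the criteria; the rule names and dict shape are illustrative, and a real implementation would load clinician-configured rules rather than hard-code them:

```python
def adapt_session(score, remaining_reps):
    """Apply the example pain-score rules to the remaining plan.

    score: integer 0-10 from the single-tap screener.
    Returns the adapted plan as a dict (illustrative shape).
    """
    if score >= 7:
        return {"rule": "pause_safehold", "action": "pause",
                "rest_minutes": 2, "remaining_reps": remaining_reps}
    if score >= 4:  # 4-6: safer variant, reps cut by 50%
        return {"rule": "switch_variant_b", "action": "switch_variant",
                "remaining_reps": remaining_reps // 2}
    if score >= 1:  # 1-3: same variant, reps cut by 25%
        return {"rule": "reduce_25", "action": "continue",
                "remaining_reps": remaining_reps - remaining_reps // 4}
    return {"rule": "no_change", "action": "continue",
            "remaining_reps": remaining_reps}
```

Because the function is pure and local, it satisfies the offline requirement: the same dispatch runs on-device with no server dependency, and only the resulting log record is deferred for sync.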
Time-Aligned Pain Data Capture
Given an intercept occurs at timestamp T during exercise X When the patient submits score S with optional note N Then a record is stored with: screener_id (UUID), session_id, exercise_id, variant_id, score S, note N, submit_timestamp, window_start = T - 10s, window_end = T + 10s, associated risk_event_ids, and rep_indices overlapping the window And the record is visible in the clinician dashboard timeline aligned to T within 5 seconds when online And the record includes a patient-visible confirmation and an audit trail entry
Offline Intercept with Deferred Sync
Given the device has no network connectivity When the pain screener is triggered and the patient submits a score and optional note Then the adaptation rules execute locally without server dependency And the record is written to encrypted local storage with a unique id and retry metadata And upon connectivity restoration, the record syncs to the server within 60 seconds And the server acknowledgement marks the local record as synced And retries are idempotent to prevent duplicate server records
Clinician Notification on High Pain Intercept
Given clinician notifications are enabled for score ≥ 7 When a patient submits a score ≥ 7 via intercept Then a clinician notification is sent within 60 seconds containing patient_id, session_id, exercise_id, pain_score, applied_rule_id, and a 10-second pre/post context clip And notifications are throttled to at most 1 per 10 minutes per patient And PHI is transmitted over encrypted channels and delivery status is logged
Error Handling and Fallback for Screener
Given the pain screener fails to render due to an app error When the failure is detected Then the app retries once within 2 seconds And if still failing, presents a minimal numeric selector (0–10) fallback within 3 seconds And the session remains paused until input is provided or a 60-second timeout elapses And an error event with diagnostic details is logged And the patient can resume the session after input or timeout
Context Clip Clinician Alerts
"As a clinician, I want to receive concise alerts with context clips when a patient’s safety is at risk so that I can intervene promptly with informed guidance."
Description

Generate timely clinician alerts that include an 8–10 second context clip before/after the event, annotated with the fired rule, kinematic traces, and Range Guard status. Deliver via dashboard notifications and secure messaging, with throttling and batching to avoid alert fatigue. Obtain and store patient consent, encrypt media at rest/in transit, and provide optional face blurring. Link alerts to the patient’s session timeline for fast review and telehealth follow-up.

Acceptance Criteria
Annotated Context Clip Content
Given a Safety Sentinel rule fires during a patient session When the alert is generated Then the attached video clip duration is between 8 and 10 seconds inclusive And the event timestamp falls within the middle 40–60% of the clip duration And the clip overlay includes the fired rule name and identifier And the overlay displays kinematic traces for tracked joints relevant to the rule And the overlay displays the Range Guard status at the event frame, including current value and threshold And the clip includes at least 2 seconds of footage preceding the event when available
Timely Dashboard Notification Delivery
Given a rule-based event occurs for a clinician’s assigned patient and the device has connectivity When the alert is processed Then a dashboard notification is visible to that clinician within 60 seconds of the event time And the notification includes patient ID, rule name, event time, severity, and a playable thumbnail of the annotated clip And opening the notification auto-plays the annotated clip in the dashboard viewer And the notification deep-links to the patient’s session timeline at the event timestamp
Secure Messaging Delivery
Given a clinician has secure messaging enabled and a rule-based event occurs When the alert is processed Then a secure message is delivered to the clinician’s inbox within 2 minutes of the event time And the message body excludes unredacted PII beyond patient initials and ID And the message link requires an authenticated, authorized session to access the clip And if connectivity is unavailable, the message is queued and sent within 2 minutes after reconnection
Alert Throttling and Batching
Given multiple rule-based events occur for the same patient within a short period When notifications are generated Then no more than 1 dashboard notification is sent per 2-minute rolling window per patient And additional events within the throttle window are batched And a batch summary notification is sent at the end of a 5-minute batch window or session end, whichever comes first And the batch summary lists total events, distinct fired rules, and includes the highest-severity clip with links to all constituent clips
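The throttle-then-batch behavior above can be sketched as a per-patient window tracker. This is one plausible shape under the stated timings; the class name, return values, and in-memory storage are assumptions (a production version would persist state and emit the batch summary with severity ranking):

```python
class AlertThrottle:
    """Per-patient sketch: at most 1 immediate notification per 2-minute
    rolling window; events inside the window accumulate into a batch that
    flushes after 5 minutes or at session end. Times in seconds."""

    THROTTLE_S = 120
    BATCH_S = 300

    def __init__(self):
        self._last_sent = {}  # patient_id -> time of last immediate alert
        self._batches = {}    # patient_id -> (batch_open_time, [events])

    def on_event(self, patient_id, event, now_s):
        """Returns ('notify', event), ('batched', None), or ('summary', events)."""
        last = self._last_sent.get(patient_id)
        if last is None or now_s - last >= self.THROTTLE_S:
            self._last_sent[patient_id] = now_s
            return ("notify", event)
        opened, pending = self._batches.setdefault(patient_id, (now_s, []))
        pending.append(event)
        if now_s - opened >= self.BATCH_S:
            del self._batches[patient_id]
            return ("summary", pending)  # batch window elapsed: send summary
        return ("batched", None)

    def end_session(self, patient_id):
        """Session end flushes any pending batch early."""
        batch = self._batches.pop(patient_id, None)
        return batch[1] if batch else []
```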
Patient Consent Capture and Storage
Given a patient has not provided media capture consent When they start a session that can generate context clips Then the app presents a consent screen describing capture, storage, sharing, and privacy controls And context clip capture and upload are disabled until consent is granted Given consent is granted Then a consent record is stored with patient ID, timestamp, policy version, and locale, and is visible to clinicians Given consent is revoked When revocation is confirmed Then new clips are not captured or uploaded and the system records the revocation timestamp and reason
Media Security and Privacy Controls
Given a context clip is stored in the system Then the media file is encrypted at rest with AES-256 or stronger and keys managed by the platform KMS And all media and annotation transfers use TLS 1.2 or higher And access requires authenticated authorization scoped to the patient; unauthorized requests are denied and audited Given face blurring is enabled for the patient or clinic When a clip is prepared Then all detected faces are blurred on-device before upload and overlays remain legible And disabling face blurring produces an unblurred clip while maintaining encryption and access controls
Session Timeline Linking and Telehealth Follow-up
Given an alert is created for a session When the clinician opens the patient’s session timeline Then a marker appears at the exact event timestamp And selecting the marker plays the annotated clip within the timeline context And the alert detail view shows preceding and subsequent events within ±5 minutes And the clinician can initiate a telehealth follow-up from the alert detail that auto-links the session and event in the visit note
Auditable Safety Log
"As a compliance coordinator, I want an auditable safety log with exports so that our trial remains analyzable and meets regulatory requirements."
Description

Maintain an immutable, time-stamped log of all safety-relevant events, thresholds in effect, model versions, clinician configurations, overrides, and resulting adaptations. Support HIPAA-compliant storage, role-based access, retention policies, and export to CSV/JSON for clinical trials and audits. Expose APIs and dashboard filters to analyze frequency, severity, and outcomes across patients, exercises, and time windows.

Acceptance Criteria
Immutable Time-Stamped Log Entries
Given Safety Sentinel is running and a safety-relevant event occurs When the system records the event Then it writes a new append-only log entry containing at minimum: event_id (UUIDv4), event_type, occurred_at (UTC ISO 8601 ms), written_at (UTC ISO 8601 ms), patient_ref (pseudonymous), exercise_id, session_id, severity, threshold_id, threshold_value, model_version, clinician_config_version, action_taken, actor, correlation_id, prev_hash, entry_hash (SHA-256) And any attempt to modify or delete an existing entry is rejected with HTTP 409 and a security event is logged And the tamper-check endpoint returns integrity=OK for unaltered logs and pinpoints the first mismatched entry if any hash-chain break is detected
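The prev_hash/entry_hash chain above is a standard tamper-evidence pattern: each entry hashes its own canonicalized contents plus the previous entry's hash, so any edit breaks the chain from that point forward. A minimal sketch, assuming canonical JSON as the hashing payload (the real serialization format is not specified here):

```python
import hashlib
import json

GENESIS = "0" * 64  # prev_hash for the first entry

def append_entry(log, entry):
    """Append-only write: chain the new entry to the last entry_hash."""
    prev_hash = log[-1]["entry_hash"] if log else GENESIS
    record = dict(entry, prev_hash=prev_hash)
    payload = json.dumps(record, sort_keys=True).encode()
    record["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Tamper check: returns (True, None) if intact, otherwise
    (False, index of the first mismatched entry)."""
    prev = GENESIS
    for i, rec in enumerate(log):
        body = {k: v for k, v in rec.items() if k != "entry_hash"}
        if rec["prev_hash"] != prev:
            return (False, i)
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != rec["entry_hash"]:
            return (False, i)
        prev = rec["entry_hash"]
    return (True, None)
```

This is exactly what the tamper-check endpoint needs: `verify_chain` either reports integrity=OK or pinpoints the first broken entry.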
Comprehensive Safety Event Capture
Given high-risk flags, pain screener submissions, Range Guard nearing events, and SafeHold activations occur during a patient session When these events are detected Then a log entry is created for each within 200 ms of detection And each entry includes active thresholds in effect, matched rule_id, measured metrics, and resulting adaptation (pause=true/false, fallback_variant_id, notification_sent=true/false) And any clinician override (resume, disable_rule, dismiss_alert) is logged with user_id, reason, and timestamp And in a seeded synthetic stream test of 10,000 events, the missed-event rate is ≤ 0.1% and out-of-order write skew is ≤ 50 ms
HIPAA-Compliant Role-Based Access
Given users with roles Clinician, Org Admin, and External Auditor request access to the safety log When access is evaluated Then RBAC enforces: Clinician (own patients only), Org Admin (org-wide), External Auditor (de-identified only) And PHI is encrypted at rest (AES-256) and in transit (TLS 1.2+) And de-identified views remove names and exact DOB and replace patient_ref with a stable, non-reversible hash And all access attempts are logged with user_id, role, purpose_of_use, and result And after 3 failed auth attempts within 15 minutes, the token is locked for 15 minutes And the access-control test suite passes 100% across positive and negative cases
Retention, Purge, and Legal Hold
Given an org retention policy of 7 years and support for legal holds When entries exceed retention and are not on legal hold Then a scheduled purge deletes qualifying entries and writes a purge receipt with counts and time ranges And legal holds freeze specified patient_refs or time windows until released And hash-chain integrity remains valid for retained ranges using boundary anchoring hashes And restoration from an encrypted backup preserves legal holds and purge receipts
Filterable CSV/JSON Exports
Given a permitted user requests an export with filters for patients, exercises, severities, and a time window When the export is generated Then CSV and JSON outputs include exactly the filtered records with a declared schema_version and stable column/field order And exports >200,000 rows are chunked with deterministic pagination tokens and a manifest describing chunk sizes and hashes And all timestamps are UTC ISO 8601 with millisecond precision and numeric fields are correctly typed And the manifest row_count equals the sum of rows across all chunks And exports are retained for 7 days via an export_id and then expire
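The chunking-plus-manifest invariant above (manifest row_count equals the sum of rows across chunks, with a hash per chunk) can be sketched as follows; the `schema_version` value and manifest field names are illustrative:

```python
import hashlib
import json

def build_export(rows, chunk_size=200_000):
    """Deterministically chunk filtered rows and build a manifest
    describing chunk sizes and SHA-256 hashes."""
    chunks, manifest_chunks = [], []
    for i in range(0, len(rows), chunk_size):
        chunk = rows[i:i + chunk_size]
        blob = json.dumps(chunk, sort_keys=True).encode()  # stable field order
        chunks.append(blob)
        manifest_chunks.append({
            "index": i // chunk_size,
            "rows": len(chunk),
            "sha256": hashlib.sha256(blob).hexdigest(),
        })
    manifest = {
        "schema_version": "1.0",  # illustrative value
        "row_count": sum(c["rows"] for c in manifest_chunks),
        "chunks": manifest_chunks,
    }
    return chunks, manifest
```

Sorting keys before serialization is what makes the export deterministic: the same filter over the same data always yields byte-identical chunks and therefore identical hashes.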
Analytics APIs and Dashboard Filters
Given the analytics API and dashboard are queried with group_by (patient, exercise, day/week) and metrics (event_count, unique_patients, severe_rate, adaptation_rate) When queries run on a 30-day org dataset Then 95th-percentile response time is ≤ 2,000 ms And results match a validation dataset within ±0.5% for counts and rates And dashboard filters persist in the URL and are shareable And the API supports pagination, time-zone selection for bucketing, and idempotent request IDs
Context Clip Linkage
Given a safety event has an associated context clip When an authorized clinician opens the event detail Then the log entry provides a secure signed URL with TTL ≤ 24 hours to the clip stored in HIPAA-compliant storage And the clip redacts non-patient faces and includes a burned-in event_id and occurred_at watermark And clip access is audited and linked back to the log entry And if the clip is unavailable, a failure_reason is logged and pre/post timestamps are provided for regeneration

Rapid Rollout

Promote the winner to a clinic standard in one tap with staged rollout options (by clinician, diagnosis, or site). Auto-migrate eligible patients at their next phase gate, preserve version history, and update default cues (Tempo Coach, Cue Tuner) to match the winning template. Turns evidence into everyday practice without creating admin debt.

Requirements

One-Tap Promotion & Staged Rollout Composer
"As a clinic admin, I want to promote a winning protocol with staged targeting so that I can standardize care safely and minimize rollout risk."
Description

Enables an admin to promote a winning protocol template to clinic standard in a single action while configuring staged rollout parameters (by clinician, diagnosis, site, percentage waves, and start dates). Provides pre-flight validation of scope and eligibility, impact simulation (patients and clinicians affected), and a confirmation summary. On commit, creates a rollout plan artifact that orchestrates downstream updates and exposes progress tracking. Integrates with org directory, diagnosis catalogs, and template library.

Acceptance Criteria
One-Tap Promotion with Staged Parameters
Given I am an org admin with permission to manage clinic standards and a winning protocol template is selected And I have configured staged rollout parameters (target clinicians/diagnoses/sites, percentage waves, and start dates) When I tap "Promote to Clinic Standard" and confirm Then exactly one rollout plan is created and queued with a unique ID and the exact parameters I configured And duplicate taps or retries within 60 seconds do not create additional plans (idempotent by client token) And no patient migrations occur before the first wave’s start time and the patient’s next phase gate
Pre-Flight Validation and Impact Simulation
Given I have entered targeting filters and wave schedule When I run Pre-Flight Validation Then the system returns within 3 seconds a simulation with counts of eligible clinicians and patients overall and by site and diagnosis And the simulation lists up to 20 example patients and clinicians impacted And ineligible selections (e.g., deactivated clinicians, unsupported diagnoses, missing template mappings) are flagged with reasons and counts And the Confirm action remains disabled until all critical errors are resolved; warnings allow proceed with explicit acknowledgment
Confirmation Summary Snapshot
Given pre-flight passes without critical errors and I proceed to confirmation When the confirmation summary is displayed Then it shows the resolved targeting scope (clinician/site/diagnosis names), wave percentages and start dates, counts of eligible patients/clinicians, and identified warnings And it includes the winning template name and version ID from the template library And on confirm, the exact snapshot of this summary is stored with the rollout plan and is non-editable thereafter
Rollout Plan Artifact Creation and Immutability
Given I confirm the promotion When the rollout plan is created Then the system persists a rollout plan artifact with fields: id, createdAt, createdBy, templateVersion, targeting filters (clinicians/diagnoses/sites), wave schedule, eligibility rules, and status=Scheduled And the artifact is retrievable via UI and API endpoint /rollouts/{id} And attempts to edit targeting or wave schedule after creation are blocked; only pause/resume/cancel are allowed and are audit-logged
Auto-Migrate at Next Phase Gate with Version History
Given a patient within the targeted scope enters a wave’s effective window And the patient reaches their next phase gate When the migration job runs Then the patient’s assigned protocol switches to the promoted template version within 15 minutes And their prior assignment is recorded in version history with timestamps and the rollout plan ID And exercise history, completion data, and clinician notes remain unchanged
Default Cue Synchronization to Winning Template
Given a patient is migrated by the rollout plan When the new protocol becomes active Then default cues (Tempo Coach, Cue Tuner) are updated to match the winning template’s defaults And any explicit clinician overrides on that patient’s plan are preserved and not overwritten And the cue change is visible in the audit trail with fields: previous value, new value, source=RolloutPlan, timestamp
Progress Tracking and Wave-Level Metrics
Given a rollout plan exists with active waves When I open the rollout progress view Then I can see per-wave metrics: scheduled patients, migrated, pending (awaiting phase gate), failed migrations, and skipped (ineligible at runtime) And these metrics update at least every 5 minutes and reflect the last sync time And clicking a metric reveals a downloadable list (CSV) of the underlying patients/clinicians with reasons for failure or skip
Eligibility Rules & Cohort Targeting Engine
"As a product owner, I want precise eligibility rules so that only appropriate patients and clinicians are included in the rollout."
Description

Determines eligible patients and clinicians for migration using rule-driven criteria (diagnosis codes/tags, current phase, contraindications, personalization flags, site affiliation, and clinician assignment). Supports inclusions/exclusions, manual overrides, and saved cohort definitions. Ensures ineligible patients retain their current plans. Exposes a reusable eligibility API for the rollout composer and analytics.

Acceptance Criteria
Rule-Driven Eligibility Evaluation
Given a cohort definition with inclusion rules (diagnosis codes/tags, current phase, site affiliation, clinician assignment) and exclusion rules (contraindications, personalization flags) When EvaluateCohort is called with the cohort definition and an as-of timestamp Then the API returns only patients who satisfy all inclusion rules and none of the exclusion rules And each returned patient includes matched rule identifiers And no patients outside the rule-set are included And the call returns HTTP 200 with a schema-valid payload
Manual Override Precedence
Given patients with stored manual Include or Exclude overrides and a cohort definition that would otherwise produce a different outcome When EvaluateCohort is executed Then manual overrides take precedence over rule outcomes for those patients And the response marks overridden patients with override metadata (actor, timestamp, reason) And non-overridden patients follow standard rule evaluation
Saved Cohort Definitions Versioning and Reuse
Given a cohort definition is saved as version N with a unique ID and immutable rule set When the same definition ID and version N are used to evaluate eligibility at the same as-of timestamp against the same snapshot of patient data Then the resulting eligible patient set is identical across repeated runs And updating the definition creates version N+1 without mutating version N And ListSavedCohorts returns both versions with correct metadata (creator, createdAt, version, status)
Ineligible Patients Retain Current Plans
Given an ineligible patient under the evaluated cohort definition When the Eligibility API is invoked Then no mutations are made to the patient’s current care plan or assignments by the eligibility service And the API response marks the patient status as ineligible with reason codes And no write operations or plan-change events are emitted by the eligibility service
API Contract, Security, and Performance
Given a request to POST /eligibility/evaluate with a cohort definition targeting up to 50,000 patients When processed under normal load Then p95 latency is ≤ 2000 ms and p99 latency is ≤ 5000 ms And the response includes totals: eligibleCount, ineligibleCount, overrideCount And responses are paginated with stable cursors and a configurable max page size up to 1000 And all endpoints require OAuth2 scope "eligibility.read" and reject unauthorized requests with HTTP 401/403
Conflict Resolution and Deterministic Ordering
Given a patient matches multiple rules with conflicting outcomes across inclusion and exclusion sets When eligibility is computed Then precedence is applied as: manual override > exclusion rules > inclusion rules And the final outcome and applied precedence are returned per patient And patient records in each page are deterministically ordered by patientId ascending unless dictated by the pagination cursor
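The precedence order above (manual override > exclusion rules > inclusion rules) resolves to a short decision function. A sketch, where the argument shapes and the "no_rule_matched" default are assumptions for illustration:

```python
def resolve_eligibility(override, exclusion_matches, inclusion_matches):
    """Compute a patient's final eligibility outcome.

    override: 'include', 'exclude', or None (no manual override stored).
    exclusion_matches / inclusion_matches: rule IDs that fired for the patient.
    Returns (outcome, applied_precedence) for per-patient explainability.
    """
    if override is not None:
        return (override, "manual_override")   # overrides always win
    if exclusion_matches:
        return ("exclude", "exclusion_rule")   # exclusions beat inclusions
    if inclusion_matches:
        return ("include", "inclusion_rule")
    return ("exclude", "no_rule_matched")      # out of scope by default
```

Returning the applied precedence alongside the outcome is what lets the explain=true mode report, per patient, which tier of the hierarchy decided the result.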
Analytics Explainability Mode
Given EvaluateCohort is called with explain=true When the response is generated Then each patient record includes an explanation array with entries per rule: ruleId, type (include|exclude|override), matched (true|false) And the response includes aggregate breakdowns by reason code and ruleId And when explain=false (default), explanations are omitted to minimize payload size
Phase-Gate Auto-Migration Service
"As a clinician, I want patients to switch at their next phase gate automatically so that care stays consistent without disrupting active sessions."
Description

Automatically migrates eligible patients to the new standard at their next phase gate, preserving history and outcomes while mapping exercises to their updated equivalents. Runs as an idempotent background service with retry logic, conflict detection (e.g., active clinician edits), and scheduling windows to avoid session disruption. Emits events for notifications and audit, and supports graceful skip/deferral per patient.

Acceptance Criteria
Phase Gate Auto-Migration Trigger and Scope Eligibility
Given a patient assigned to a plan within a staged rollout scope (clinician, diagnosis, or site) and marked eligible And a phase gate is reached for that patient And clinic migration windows are configured When the service detects the phase gate event Then the service schedules migration for the next allowed migration window and completes it within 10 minutes of the window start And the patient is not migrated if a clinician or patient opt-out flag is present And patients outside the rollout scope are not migrated
Data Preservation and Version History
Given a patient with existing plan version, exercise history, and outcome logs When the service performs an auto-migration Then all pre-migration exercise logs and outcomes remain intact and queryable And a new plan version is created with a linkage to the prior version and timestamped actor "Auto-Migration Service" And pre-migration reports for any date range return identical totals to those before migration And no historical entries are deleted or altered
Exercise Mapping and Cue Defaults Update
Given a mapping table from legacy exercises to updated equivalents in the winning template When the service migrates a plan Then each legacy exercise maps to its updated equivalent while preserving compatible parameters (e.g., reps, sets, load) where possible And if no mapping exists, the exercise remains in the plan marked "unmapped" for clinician review without blocking migration And template default cues (Tempo Coach, Cue Tuner) are applied unless clinician overrides exist, in which case overrides are preserved And a mapping summary is recorded for audit
Idempotent Processing and Retry Logic
Given a migration job for a specific patient and phase gate with a correlation ID When the job is executed multiple times due to retries or duplicate triggers Then exactly one migrated plan version exists for the patient for that gate And no duplicate events, exercises, or plan artifacts are created And transient failures trigger exponential backoff retries up to 5 attempts before marking the job as failed with an error reason And a subsequent manual retry results in a single correct final state
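The idempotency and retry behavior above can be sketched as follows. This is a minimal illustration, not the service's actual implementation: `migrate`, `TransientError`, the in-memory `_completed` map, and the 1-second base backoff are all assumptions (a real service would persist completion state durably so duplicate triggers across processes remain no-ops).

```python
import time

class TransientError(Exception):
    """Hypothetical marker for retryable (transient) failures."""

MAX_ATTEMPTS = 5  # per the criteria: up to 5 attempts before marking failed

# Illustrative in-memory dedupe store keyed by correlation ID.
_completed = {}

def migrate(correlation_id, do_migrate, sleep=time.sleep):
    """Run a migration job idempotently, retrying transient failures
    with exponential backoff (1s, 2s, 4s, ...)."""
    if correlation_id in _completed:
        # Duplicate trigger: return the single existing final state.
        return _completed[correlation_id]
    last_error = None
    for attempt in range(MAX_ATTEMPTS):
        try:
            result = do_migrate()
            _completed[correlation_id] = result  # exactly one migrated version
            return result
        except TransientError as exc:
            last_error = exc
            sleep(2 ** attempt)  # exponential backoff between attempts
    return {"status": "failed", "reason": str(last_error)}
```

A caller can inject a no-op `sleep` in tests; retrying a failed job through the same entry point still converges to a single final state because the dedupe check runs before any work.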
Conflict Detection and Graceful Deferral
Given a patient whose plan is actively being edited by a clinician, or who is currently in an active session, or who has a per-patient deferral/skip flag set until the next phase gate When a migration becomes due Then the service defers or skips the migration without applying partial changes And records the deferral/skip reason And schedules the next attempt after the edit lock/session ends or at the next gate And no user-facing disruption occurs
Scheduling Windows and Session Safety
Given clinic-configured migration windows (with timezone awareness) And a patient due for migration outside those windows or during an active session When the service evaluates execution time Then the migration is queued and executed only within an allowed window and never during an active session And session start/end signals prevent mid-session changes to the active plan
Event Emission and Auditing
Given the service attempts a migration When a migration is attempted, succeeds, is deferred, is skipped, or fails Then an event is emitted to the event bus with patient_id, plan_id, from_version, to_version, correlation_id, status, reason, timestamp, and mapping_summary And events are delivered at-least-once with a deduplication key equal to the correlation_id And an immutable audit record is persisted with before/after snapshots and is queryable by patient and date range
Template Versioning & Audit Trail
"As a compliance officer, I want complete version history and audit logs so that we can verify what changed, when, and why."
Description

Maintains full version history of templates and rollouts, including semantic versioning, diffs of exercises/cues/parameters, initiator identity, timestamps, and rationale. Locks promoted versions, tracks rollout states per cohort, and records every auto-migration event for regulatory and quality review. Provides exportable audit logs and links from patient timelines to the template version in force at the time.

Acceptance Criteria
Enforce Semantic Versioning on Template Updates
Given an existing template with version X.Y.Z, when a user saves changes classified as: (a) exercise list added/removed/reordered or exercise type changed -> MAJOR; (b) exercise parameter values (sets, reps, tempo, ranges, default cue mappings incl. Tempo Coach/Cue Tuner) changed -> MINOR; (c) text-only edits to descriptions/notes -> PATCH; then the new version number is incremented accordingly and is unique within the template namespace. Given any previously created version, when it is queried or referenced, then its content is immutable; attempts to update it return 409 Conflict with guidance to create a new version. Given concurrent attempts to create new versions, when two updates race, then only one receives the next version number; the other is retried and persisted with the subsequent valid version number without data loss.
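The increment rule above can be sketched as a pure function; the `change_type` labels are hypothetical names for the three classifications (a), (b), and (c), not an established API:

```python
def next_version(current: str, change_type: str) -> str:
    """Compute the next semantic version from the change classification:
    structural exercise changes -> MAJOR, parameter/cue changes -> MINOR,
    text-only edits -> PATCH."""
    major, minor, patch = (int(p) for p in current.split("."))
    if change_type == "exercise_structure":  # (a) added/removed/reordered/type
        return f"{major + 1}.0.0"
    if change_type == "parameters":          # (b) sets, reps, tempo, cue mappings
        return f"{major}.{minor + 1}.0"
    if change_type == "text_only":           # (c) descriptions/notes
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change_type}")
```

Note that MAJOR and MINOR bumps reset the lower components, matching semantic-versioning convention; uniqueness within the template namespace and race serialization would be enforced by the persistence layer, not this function.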
Lock Promoted Template Versions
Given a template version is promoted to clinic standard, when any user attempts to edit it in the UI, then all fields are read-only and only 'Create New Version' is available. Given an API client attempts to update a promoted version, when the request is processed, then it fails with 423 Locked and an audit record is appended. Given a need to modify a promoted template, when 'Create New Version' is invoked, then a new version is created using the appropriate semantic version increment and the promoted version remains unchanged.
Show Accurate Diffs Between Template Versions
Given two template versions are selected, when a diff is requested, then the response lists exercises added/removed/reordered, parameter changes (before → after), and cue text changes with inline highlights for each changed field. Given the two versions have no differences, when a diff is requested, then the system returns 'No changes' with HTTP 200 and no false positives. Given a template with up to 50 exercises and 20 parameters per exercise, when a diff is requested, then the result is computed and rendered within 2 seconds in both UI and API, and the counts of additions/removals/changes sum to the total differences observed.
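A sketch of the diff computation, assuming each template version is represented as a dict of exercise id to parameter dict (a simplification of the real model; ordering changes are omitted here):

```python
def diff_templates(old: dict, new: dict) -> dict:
    """Compare two template versions. Returns exercises added/removed and
    per-field parameter changes as (before, after) pairs, or a 'No changes'
    message when the versions are identical."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = {}
    for ex_id in set(old) & set(new):
        fields = {
            k: (old[ex_id].get(k), new[ex_id].get(k))
            for k in set(old[ex_id]) | set(new[ex_id])
            if old[ex_id].get(k) != new[ex_id].get(k)
        }
        if fields:
            changed[ex_id] = fields
    if not (added or removed or changed):
        return {"message": "No changes"}
    return {"added": added, "removed": removed, "changed": changed}
```

Because only genuinely differing fields are emitted, the additions/removals/changes counts sum to the total differences, satisfying the no-false-positives criterion.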
Capture Complete Audit Trail and Support Export
Given any template or rollout action (create, edit, promote, rollback, rollout start/pause/resume/complete/cancel, auto-migration), when the action occurs, then an audit record is appended with fields: template_id, version, action_type, initiator_id, initiator_role, timestamp (UTC ISO8601), rationale (required for promote/rollback), cohort scope (if applicable), and request_id/trace_id. Given a promote or rollback action, when the user submits without a rationale, then the action is blocked with validation error and no state change is persisted. Given audit retrieval filters (date range, template/version, action_type, initiator, cohort), when applied, then only matching records are returned and counts reflect the filtered set. Given an export request, when up to 50,000 audit records match, then CSV and JSON exports are generated within 60 seconds, include a SHA-256 checksum, adhere to the documented schema, and exactly match the filtered in-app results. Given audit storage, when users attempt to modify or delete existing records, then the system prevents changes (append-only) and logs the attempt.
Track Rollout State per Cohort
Given a staged rollout is configured by clinician, diagnosis, or site, when rollout starts, then each cohort is assigned a lifecycle state from {Not Started, In Progress, Paused, Completed, Cancelled} and transitions are limited to: Not Started→In Progress; In Progress→Paused/Completed/Cancelled; Paused→In Progress/Cancelled. Given cohort tracking, when queried via UI or API, then the system returns eligible_count, migrated_count, pending_count, failed_migration_count, current_state, and last_updated_utc for each cohort. Given cohort membership changes mid-rollout, when definitions are edited, then the original membership at rollout start is snapshotted for audit, and deltas are recorded as separate audit events without retroactively altering past counts.
Record Auto-Migration Events at Phase Gates
Given a patient in an eligible cohort reaches the next phase gate, when migration criteria are satisfied, then the system switches the patient to the target template version and appends an auto-migration audit record containing patient_internal_id/pseudonymous_id, from_version, to_version, cohort_id, phase_gate_id, timestamp_utc, and trigger_reason. Given the patient is already on the target version, when the phase gate is crossed, then no change is made and an idempotent no-op audit record is appended. Given a migration cannot proceed due to a clinical hold or validation failure, when the phase gate is crossed, then the system records a failed migration audit entry with reason and the patient remains on the prior version.
Patient Timeline Links to In-Force Template Version
Given any patient activity or migration event is displayed on the timeline, when the entry is rendered, then it shows the template version in force at that timestamp with a link that opens a read-only snapshot of that exact version. Given the linked template version is archived or superseded, when the link is followed, then the snapshot loads successfully and matches the audited version contents byte-for-byte. Given timeline entries are cross-checked against the audit log, when 100 random entries are sampled, then the version identifiers match the corresponding audit records 100% of the time.
Cue Synchronization with Tempo Coach & Cue Tuner
"As a therapist, I want cues to update with the new standard automatically so that patients receive the right guidance without extra setup."
Description

Updates default Tempo Coach cadence and Cue Tuner prompts to match the promoted template while preserving patient- or clinician-level overrides. Performs locale and accessibility checks (audio/text), validates cue availability per exercise, and falls back gracefully to prior defaults when needed. Provides a verification report before rollout and ensures cues are active upon migration.

Acceptance Criteria
Template Promotion Sync Preserves Overrides
Given a winning exercise template is promoted to clinic standard with defined Tempo Coach cadence and Cue Tuner prompts And at least one patient-level override and one clinician-level override exist for affected exercises When the promotion is applied via Rapid Rollout Then default Tempo Coach cadence and Cue Tuner prompts are updated to match the promoted template for all non-overridden patients and exercises And all patient-level and clinician-level overrides remain unchanged And the change is recorded with timestamp, actor, template version, and affected population counts in the audit log And version history for each exercise reflects previous and new default values
Locale and Accessibility Compliance Checks
Given the clinic has patients with locales en-US and es-ES and accessibility flags for audio and text When pre-rollout validation runs Then 100% of mapped locales have available audio (file or TTS) and text cue variants for each exercise in the promoted template And voice rate and cadence units are localized per locale definitions And any missing locale-asset pair is listed in the verification report with severity=Blocker and affected counts And rollout is blocked until all Blockers are resolved or explicit fallbacks are configured
Per-Exercise Cue Availability and Fallback Handling
Given some exercises in the promoted template lack a cue asset for a specific locale or modality (audio or text) And prior clinic defaults exist for those exercises When rollout executes Then the system applies the prior clinic default for each missing asset without altering any overrides And the verification report marks each fallback as severity=Warning and identifies exercise, locale, modality, and fallback source And rollout proceeds only if fallback coverage achieves 100% across the target cohort And no patient receives an empty or null cue
Pre-Rollout Verification Report
Given an admin taps Promote to Clinic Standard and selects a staged rollout filter (by clinician, diagnosis, or site) When the verification report is generated Then the report includes sections for: locale/accessibility coverage, per-exercise cue diffs, override counts preserved, fallback plan, predicted migration counts by stage, and blocking issues And the report generates within 5 seconds for cohorts up to 5,000 patients and within 20 seconds for up to 50,000 patients And the report can be exported to PDF and CSV and shared via link with access control And the admin can approve to proceed only when no Blockers remain
Phase-Gate Migration Activates Cues
Given eligible patients are auto-migrated at their next phase gate during a staged rollout When a patient is migrated Then the new default Tempo Coach cadence and Cue Tuner prompts become active within 2 minutes on the patient device And a migration event with patient ID, timestamp, source template version, destination template version, and cue activation status=Success is emitted And clinicians see the updated defaults reflected in their dashboard within 1 minute And if delivery fails, the system retries up to 3 times with exponential backoff and surfaces an alert in the clinician dashboard
Override Precedence and Version History
Given patient-level and clinician-level cue overrides exist alongside clinic defaults When conflicts are evaluated during promotion and migration Then precedence is patient-level override > clinician-level override > clinic default And no override value is overwritten by the promoted defaults And version history retains both prior and new default entries with diff summaries and rollback capability to the prior default in one action And a read-only snapshot of overrides at the time of rollout is stored for audit
Rollback, Holdouts, and Canary Controls
"As a clinical lead, I want canary and rollback controls so that I can stop or limit rollout if outcomes worsen."
Description

Provides immediate rollback to the prior standard, configurable holdout cohorts, and percentage-based canary waves. Includes a kill-switch that halts migrations and reverts upcoming phase-gate transitions. Captures outcome and safety signals (adherence drops, error rates) to inform go/no-go decisions, and supports cohort comparison reporting during rollout.

Acceptance Criteria
Immediate Rollback to Prior Standard
Given a clinic has an active standard and a newer standard has been promoted When an authorized admin clicks Rollback for a defined scope (clinic, site, diagnosis, clinician) Then the prior standard becomes the active standard for that scope within 60 seconds And all in-flight migrations are halted and any queued migrations are canceled And patients already migrated continue their current phase, but their next phase-gate uses the prior standard And Tempo Coach and Cue Tuner revert to the prior standard’s defaults for the affected scope And a version history entry is created with actor, timestamp, scope, previous/next version IDs, and rollback reason
Configurable Holdout Cohorts
Given an admin needs to exclude a cohort from rollout When the admin defines a holdout by clinician, diagnosis (ICD/SNOMED), site, patient tag, or random percentage and saves Then members of the holdout are excluded from migrations and remain on their current standard And the holdout configuration is versioned with effective start/end times and change owner And the rollout dashboard surfaces holdout counts and composition And removing a holdout resumes migrations only at the patient’s next phase gate, never mid-phase
Percentage-Based Canary Waves
Given a target rollout population is identified When the admin configures canary waves with percentage steps (e.g., 5%, 10%, 25%, 50%, 100%) and a cadence (manual or time-based) Then only eligible patients encountering a phase gate during that wave are migrated up to the wave’s cap And no retroactive migrations occur for patients not hitting a phase gate during the wave window And promotion to the next wave can be automatic when predefined success criteria are met (e.g., adherence non-inferior within 2% absolute and form error rate increase <2% absolute over 7 days with ≥50 patients per cohort) And wave progress (eligible, migrated, remaining) and timing are displayed in the rollout dashboard
Kill-Switch Halts and Reverts Phase-Gate Transitions
Given a rollout or canary is active When an authorized user toggles the kill-switch Then initiation of any new migrations stops within 5 seconds And any queued or scheduled phase-gate transitions are reverted to the prior standard And the system displays a visible Halted state banner in admin views and disables rollout actions except Resume and Rollback And notifications are sent to configured channels (email/in-app) with scope and reason And the event is audit-logged with actor, timestamp, scope, and outcome
Outcome and Safety Signal Capture
Given control (prior standard) and treatment (new standard) cohorts exist during rollout When adherence, rep counts, form error detections, and alerts are generated Then the system computes per-cohort daily metrics (adherence %, average reps completed, form error %, alert rate) and stores time series at 1-hour granularity for 90 days And threshold breaches (adherence drop ≥5% absolute or form error increase ≥2% absolute sustained for 24 hours) trigger an alert and optional auto-halt according to configuration And metrics exclude sessions with missing essential data from denominators, with exclusions reported And signals are available in the dashboard and via API endpoints with documented schemas
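The threshold-breach check can be sketched as below. This simplification evaluates a single metric snapshot with the thresholds hard-coded from the criteria; the real service would make them configurable and require the breach to be sustained for 24 hours before alerting:

```python
ADHERENCE_DROP_PCT = 5.0   # absolute percentage points, per the criteria
FORM_ERROR_RISE_PCT = 2.0  # absolute percentage points, per the criteria

def should_alert(control_adherence: float, treatment_adherence: float,
                 control_form_err: float, treatment_form_err: float) -> bool:
    """True when the treatment cohort breaches either safety threshold
    relative to the control cohort (all values in percent)."""
    adherence_drop = control_adherence - treatment_adherence
    form_err_rise = treatment_form_err - control_form_err
    return (adherence_drop >= ADHERENCE_DROP_PCT
            or form_err_rise >= FORM_ERROR_RISE_PCT)
```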
Cohort Comparison Reporting During Rollout
Given a rollout is in progress with defined control and rollout cohorts When a user opens the Cohort Comparison report and applies filters (site, clinician, diagnosis, date range) Then the report shows side-by-side KPIs (adherence %, avg reps, form error %, alert rate) with cohort sizes And statistical annotations (95% CIs and p-values via two-proportion z-test for rates) are shown when each cohort has ≥30 patients; otherwise, a sample size warning is displayed And results refresh within 15 minutes of new data arrival and can be exported to CSV and PDF And the report includes clear cohort definitions and the active rollout/holdout/wave context
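The statistical annotation named above (a two-proportion z-test for rates) can be computed with the standard library alone; `two_proportion_z` is an illustrative helper, not MoveMate's reporting code, and it assumes both cohorts are non-empty with a pooled rate strictly between 0 and 1:

```python
import math

def two_proportion_z(successes_a: int, n_a: int,
                     successes_b: int, n_b: int):
    """Pooled two-proportion z-test for comparing cohort rates
    (e.g., adherence %). Returns (z, two_sided_p)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

With, say, 80/100 adherent patients in the rollout cohort versus 60/100 in control, the test yields z ≈ 3.09, comfortably below the 0.05 significance threshold; the ≥30-patients-per-cohort guard in the criteria exists because the normal approximation degrades for small samples.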
Change Communications & Consent Workflow
"As a clinic admin, I want automated notifications and required sign-offs so that stakeholders are informed and compliant before changes take effect."
Description

Delivers in-app and email notifications to clinicians and patients about upcoming plan changes, including effective dates and rationale. Requires clinician sign-off when material changes occur and supports patient consent capture where mandated. Logs acknowledgments, provides templated messages per diagnosis/site, and integrates with the audit trail for traceability.

Acceptance Criteria
Clinician Material Change Sign-Off Gate
Given a rollout includes material changes and a clinician has impacted patients When the change is scheduled Then the clinician receives an in-app notification and email within 5 minutes containing change summary, rationale, effective date/time, impacted patient count, and a review link Given the clinician opens the review When they sign off Then the rollout for their patients is unblocked and the sign-off is recorded with timestamp, user ID, and version ID in the audit trail Given the clinician has not signed off by 24 hours before the effective date When the deadline passes Then their patients are excluded from auto-migration and an escalation notification is sent to the site lead Given the clinician declines the change When they submit a decline reason Then rollout is paused for that clinician’s patient panel and the reason is logged in the audit trail
Patient Consent Capture (Mandated Jurisdictions)
Given a patient is in a consent-mandated jurisdiction and has contact information on file When a plan change is scheduled Then the patient receives an email and in-app prompt within 10 minutes containing plain-language rationale, effective date, and a consent call-to-action Given the patient opens the prompt When they provide consent Then an explicit consent record (e-sign or affirmative action) is stored with timestamp, device, IP, and template version, and a confirmation receipt is sent Given the patient has not responded by 24 hours before the effective date When the cutoff is reached Then the patient is excluded from auto-migration and their clinician is notified to follow up Given the patient declines When decline is recorded Then rollout is blocked for that patient and the decline event is logged with reason (if provided)
Templated Messages by Diagnosis/Site
Given message templates exist per diagnosis and site with variables {patient_first_name}, {effective_date}, {rationale}, {template_name} When a rollout is created Then the system auto-selects the matching template based on the patient’s diagnosis and site Given a template is selected When notifications are generated Then all variables are replaced accurately and content matches the saved preview exactly Given no template exists for a diagnosis/site When generation occurs Then the default template is used and a missing-template warning is logged with diagnosis/site details Given a template is edited When it is saved Then a new template version is created with editor, timestamp, and diff, and subsequent messages reference the new version
Delivery, Bounce Handling, and Retries
Given an email notification is queued When delivery is attempted Then SMTP response codes are captured and the status is updated to Sent, Bounced, or Deferred Given a notification is Deferred When retry policy runs Then the system retries up to 3 times over 30 minutes and records each attempt outcome Given an email Bounce occurs When final failure is determined Then an in-app notification is queued for the recipient and the clinician/site lead is alerted for affected patients Given a push/in-app notification is sent When the user next opens the app Then the notification is displayed until dismissed or actioned, and the view event is timestamped
Audit Trail and Acknowledgment Logging
Given any notification, sign-off, or consent event occurs When the event is persisted Then the audit record includes actor ID, role, patient ID, clinic/site, diagnosis, template version, rollout ID, timestamp (UTC), channel, and outcome Given an auditor filters by rollout ID When they export events Then a CSV containing all related events is generated within 30 seconds and totals match the on-screen count Given a clinician signs off or a patient consents/declines When viewing the patient timeline Then an immutable acknowledgment entry appears with event details and links to the audit record
Effective Date and Phase-Gate Scheduling
Given a rollout has an effective date and uses phase-gate migration When a patient reaches the next phase gate after the effective date Then notifications are sent at least 24 hours prior per configuration, or immediately if entered within 24 hours Given a patient reaches a phase gate earlier than scheduled and lead time cannot be met When migration is evaluated Then clinician override is required before migration proceeds and is logged upon approval Given rollout scoping by clinician, diagnosis, or site is configured When eligibility is computed Then only impacted patients receive communications and are included in counts shown to admins/clinicians
Security, Privacy, and Accessibility in Communications
Given a notification is rendered When viewed on supported devices Then it meets WCAG 2.1 AA contrast and supports dynamic text scaling to 200% with readable layout and screen-reader labels Given an email is generated When sent Then the subject line contains no PHI and the body includes minimum necessary PHI only, per policy and jurisdictional rules Given the app language is set to Spanish and a Spanish template exists When notifications are sent Then the Spanish template is used; otherwise English is used and a locale fallback is logged

Criteria Mapper

Auto-aligns each report to the payer's medical-necessity checklist. Pulls adherence, form-quality, pain screeners, and safety events into the exact fields payers require, flags gaps with quick fixes, and generates a clear cover summary that checks every box, reducing back-and-forth and speeding approvals.

Requirements

Payer Criteria Schema Library
"As a billing coordinator, I want an up-to-date library of payer medical-necessity criteria by plan so that our reports always match current requirements without manual lookups."
Description

Centralized, versioned library of payer-specific medical-necessity checklists with jurisdiction, plan, and effective-date metadata. Supports conditional rules (e.g., visit count thresholds, diagnosis qualifiers, objective-improvement criteria), required evidence types, acceptable date ranges, and measurement units. Includes an admin interface for updates, automated sync to propagate schema changes without breaking existing cases, and backward compatibility for in-flight submissions. Normalizes field definitions and provides validation rules consumable by downstream mapping, validation, and rendering components.

Acceptance Criteria
Versioned Schema Selection by Effective Date
Given schemas exist for payer=Acme, plan=HMO, jurisdiction=CA: v1 effective 2024-01-01 and v2 effective 2025-01-01 When requesting schema with payer=Acme, plan=HMO, jurisdiction=CA, asOf=2025-03-15 Then the response returns versionId=v2 and effectiveDate=2025-01-01 Given the same request with asOf=2024-06-30 Then the response returns versionId=v1 Given a request without asOf Then the response returns the latest version with effectiveDate <= today Given payer, plan, or jurisdiction that does not exist Then the service returns 404 with error code SCHEMA_NOT_FOUND Given any schema response Then it includes versionId, effectiveDate, optional deprecationDate, checksum, and canonical field catalog
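The effective-date selection rule can be sketched as follows, assuming version records are dicts carrying a `datetime.date` under `effectiveDate` (the real service would also filter by payer/plan/jurisdiction and return the full metadata envelope):

```python
from datetime import date

def select_version(versions, as_of=None):
    """Return the version with the latest effectiveDate <= as_of
    (defaulting to today), or None if no version is effective yet."""
    as_of = as_of or date.today()
    eligible = [v for v in versions if v["effectiveDate"] <= as_of]
    return max(eligible, key=lambda v: v["effectiveDate"]) if eligible else None
```

The `None` case maps naturally to the 404 SCHEMA_NOT_FOUND response in the criteria.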
Conditional Rule Evaluation (Visits, Diagnoses, Objective Improvement)
Given a schema rule: visitCount >= 6 AND diagnosis is one of M54.5 or S33.5 AND objectiveImprovementPercent >= 20 within 30 days When validating a case with visitCount=7, diagnosis=M54.5, improvement=25% over 28 days Then validation passes and returns ruleId with status=pass Given visitCount=5 Then validation fails with status=fail, ruleId, message, and pointers to offending fields Given diagnosis not in the allowed set but visitCount >= 6 Then validation fails with status=fail and diagnosis qualifier detail Given a nested rule with an OR branch (objectiveImprovementPercent >= 20 OR objectiveScoreDelta >= 2) When objectiveScoreDelta=3 and objectiveImprovementPercent is null Then validation passes Given thresholds are defined as inclusive Then boundary values (e.g., visitCount=6) pass
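One way to represent and evaluate such conditional rules is an operator tree, consistent with the rule metadata ("expression or operator tree") the consumer contract exposes later in this section. The node shapes below are assumptions for illustration; missing fields evaluate to non-matching, and thresholds are inclusive per the criteria:

```python
def eval_rule(rule: dict, case: dict) -> bool:
    """Recursively evaluate a rule operator tree. Branch nodes combine
    children with 'and'/'or'; leaf nodes compare a case field against
    a value. Absent fields (None) never match."""
    op = rule["op"]
    if op == "and":
        return all(eval_rule(r, case) for r in rule["rules"])
    if op == "or":
        return any(eval_rule(r, case) for r in rule["rules"])
    value = case.get(rule["field"])
    if value is None:
        return False
    if op == ">=":
        return value >= rule["value"]  # inclusive: boundary values pass
    if op == "in":
        return value in rule["value"]
    raise ValueError(f"unknown operator: {op}")

# The example rule from the criteria: visitCount >= 6 AND diagnosis in
# {M54.5, S33.5} AND (improvement% >= 20 OR objective score delta >= 2).
RULE = {"op": "and", "rules": [
    {"op": ">=", "field": "visitCount", "value": 6},
    {"op": "in", "field": "diagnosis", "value": {"M54.5", "S33.5"}},
    {"op": "or", "rules": [
        {"op": ">=", "field": "objectiveImprovementPercent", "value": 20},
        {"op": ">=", "field": "objectiveScoreDelta", "value": 2},
    ]},
]}
```

A production evaluator would additionally return per-rule status, messages, and pointers to offending fields rather than a bare boolean.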
Evidence Types, Units, and Date Range Validation
Given required evidence types: adherenceRate (percent 0 to 100), formQuality (score 0 to 100), painScreener (NRS 0 to 10), safetyEvents (count >= 0), with acceptable windows relative to serviceDate (adherence within last 14 days, pain within last 7 days) When the payload provides each evidence with specified units and within windows Then validation passes Given adherenceRate reported as 0.87 (decimal) Then the service accepts and normalizes to 87.00% with two-decimal precision Given a painScreener value < 0 or > 10 Then validation fails with error code OUT_OF_RANGE and allowed range details Given an evidence timestamp outside the allowed window Then validation fails with error code OUT_OF_WINDOW and window bounds Given a unit mismatch (e.g., formQuality provided on a 0 to 5 scale) Then validation fails with error code UNIT_MISMATCH and expected unit metadata
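A sketch of the normalization and range/window checks; the `spec` shape, helper name, and the heuristic that percent values in [0, 1] are decimals needing scaling are all illustrative assumptions:

```python
from datetime import date, timedelta

def validate_evidence(name: str, value: float, observed: date,
                      service_date: date, spec: dict) -> dict:
    """Validate one evidence value: normalize decimal fractions to percent
    (0.87 -> 87.0), enforce the allowed range, and enforce the lookback
    window relative to the service date."""
    # Assumption for illustration: percent evidence reported in [0, 1]
    # is a decimal fraction and gets scaled to two-decimal percent.
    if spec.get("unit") == "percent" and 0 <= value <= 1:
        value = round(value * 100, 2)
    lo, hi = spec["range"]
    if not (lo <= value <= hi):
        return {"status": "fail", "code": "OUT_OF_RANGE", "range": (lo, hi)}
    window = timedelta(days=spec["window_days"])
    if not (service_date - window <= observed <= service_date):
        return {"status": "fail", "code": "OUT_OF_WINDOW"}
    return {"status": "pass", "field": name, "value": value}
```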
Backward Compatibility for In-Flight Submissions
Given an in-flight case created under versionId=v1 When a new version v2 is published Then the in-flight case remains pinned to v1 for all validations and rendering Given a new case created after v2 is published Then it uses v2 by default Given an explicit migrateToVersion=v2 request on an in-flight case Then the system validates against v2, records a migration event with userId and timestamp, and preserves prior results for audit Given a breaking change in v2 Then publish requires a migrationNote and the system prevents auto-migration of in-flight cases
Admin Authoring, Publish, Audit, and Rollback
Given a user with role=SchemaAdmin When they create or edit a schema in Draft Then the system performs schema linting (unique IDs, consistent enums, parsable rules) and prevents publish until all checks pass Given a publish action Then the system assigns a new immutable versionId, records author, timestamp, changeSummary, and effectiveDate, and exposes the version via API Given a breaking change (field removal or enum narrowing) without breakingChange=true and a migrationNote Then publish is blocked with actionable errors Given a rollback to a prior version Then the prior version becomes latest effective, a new versionId is issued for the rollback event, and audit history links the rollback to the reverted version Given a user without SchemaAdmin role Then create, edit, and publish endpoints return 403
Automated Sync Propagation and Availability
Given a new schema is published When querying any node or region of the service Then the new version is available within 120 seconds and the health endpoint reports current versionIds Given consumers with cached schemas Then ETag or Last-Modified headers enable cache revalidation and clients receive 200 or 304 appropriately Given concurrent publishes Then resulting versions are serialized and no partial or duplicate versions are exposed Given a transient outage during sync Then retry and backoff complete sync without data loss and prior versions remain available
Normalized Field Catalog and Consumer Contract
Given the library exposes a canonical field catalog Then each field has a stable canonicalId, name, type, allowedUnits, allowedValues or enums, and description Given a payer-specific field mapping Then the schema links payerField to canonicalId and downstream consumers can resolve to canonical definitions Given an export request Then the service provides machine-readable JSON Schema and OpenAPI artifacts for the selected version Given a consumer requests validation rules Then the API returns rule metadata (ruleId, expression or operator tree, severity, messages, and affectedFields) and examples Given an additive change that introduces a new optional field Then existing consumers continue to validate successfully against the new version
Rules-Based Mapping Engine
"As a physical therapist, I want payer fields auto-populated from my patients’ exercise and screening data so that I don’t have to re-enter information."
Description

Deterministic engine that transforms MoveMate data (adherence rates, rep counts, form-quality scores, pain screener results, safety events, clinician attestations) into payer-required fields. Handles normalization (units, scales, date windows), conditional population, and conflict resolution with explicit precedence rules and fallbacks. Produces field-level completeness and confidence flags for validation, and exposes a contract used by the cover summary generator and export pipeline. Designed for low-latency execution at report generation time and scalable across patient cohorts.

Acceptance Criteria
Deterministic Mapping for Selected Payer Profile
- Given identical MoveMate inputs and a specific payer profile, When the mapping engine runs multiple times, Then the mapped field values and metadata are byte-for-byte identical and produce the same SHA-256 checksum. - Given two different payer profiles selected for the same inputs, When mapping executes, Then only fields required by the selected profile are populated and non-required fields are omitted or set to null as per contract. - Given unordered input events (e.g., exercise sessions out of chronological order), When mapping executes, Then output values are invariant to input ordering and date ranges are computed from sorted timestamps.
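Byte-for-byte determinism of this kind is typically verified by hashing a canonical encoding of the output. A sketch, assuming the mapped output is JSON-serializable; sorted keys and fixed separators make the hash independent of the order fields were produced in:

```python
import hashlib
import json

def canonical_checksum(mapped_fields: dict) -> str:
    """SHA-256 over a canonical JSON encoding (sorted keys, compact
    separators), so identical mapped content always hashes identically."""
    payload = json.dumps(mapped_fields, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```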
Units, Scales, and Date-Window Normalization
- Given inputs with mixed units (e.g., pounds and kilograms), When the payer requires kilograms, Then values are converted to kg with precision of one decimal place and metadata includes source unit and conversion factor. - Given pain scores recorded on 0–10 and 0–100 scales, When the payer requires a 0–10 scale, Then scores are normalized using the defined scale mapping and rounding rule (round half up) and the original scale is recorded in metadata. - Given report_date and payer rule 'last 30 days', When mapping time-bounded metrics, Then only events within [report_date − 30 days, report_date] are included and counts/totals reflect this window. - Given an input value lacking unit metadata and no safe inference available, When normalization is attempted, Then the target field is marked Missing and a gap flag is emitted identifying the field_id and reason.
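The pounds-to-kilograms case can be sketched as below; the factor table, metadata shape, and the `None` return for unknown units (mapping to a Missing flag) are illustrative assumptions:

```python
def to_kg(value: float, unit: str):
    """Convert a weight to kilograms at one-decimal precision, returning
    (kg, metadata) with the source unit and conversion factor recorded.
    Unknown units return (None, error-metadata) so the caller can mark
    the target field Missing rather than guess."""
    factors = {"kg": 1.0, "lb": 0.45359237}
    if unit not in factors:
        return None, {"error": "UNIT_UNKNOWN", "source_unit": unit}
    kg = round(value * factors[unit], 1)
    return kg, {"source_unit": unit, "conversion_factor": factors[unit]}
```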
Conditional Field Population per Payer Checklist
- Given payer rule 'populate adherence_justification if adherence_rate < 80%', When a patient’s adherence_rate is 72%, Then adherence_justification is populated using the configured template with dynamic values and completeness=Complete; When adherence_rate is ≥ 80%, Then the field is omitted or set Not Applicable per contract.
- Given safety events within the required window, When mapping safety_summary, Then count and highest_severity are populated; When none exist, Then count=0, highest_severity=None, and confidence=High.
- Given pain screener results of 'No pain' and a payer rule that suppresses secondary pain detail when negative, When mapping, Then secondary pain fields are omitted and marked Not Applicable; When positive, Then required detail fields are populated and completeness=Complete.
Conflict Resolution with Explicit Precedence and Fallbacks
- Given conflicting values across sources (clinician_attestation, sensor_derived, patient_reported), When mapping executes, Then precedence is applied as clinician_attestation > sensor_derived > patient_reported and the winning value is stored.
- Given a resolved conflict, When output is generated, Then losing values and their sources are recorded in metadata.conflict_details and confidence is set according to the winning source’s confidence table.
- Given the highest-precedence source is missing, When mapping executes, Then the engine falls back to the next source, sets completeness=Partial, and emits a gap flag with fallback_used=true; When all sources are missing, Then the field is Missing and export is not blocked if the payer marks the field optional.
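The precedence and fallback rules can be sketched as follows. The source names come from the criteria above; the return shape is a hypothetical simplification of the field contract.

```python
PRECEDENCE = ["clinician_attestation", "sensor_derived", "patient_reported"]


def resolve_conflict(values_by_source: dict) -> dict:
    """Pick the highest-precedence available value, record losing values,
    and mark completeness Partial whenever a fallback source was used."""
    for rank, source in enumerate(PRECEDENCE):
        value = values_by_source.get(source)
        if value is not None:
            losers = {s: v for s, v in values_by_source.items()
                      if s != source and v is not None}
            return {
                "value": value,
                "source": source,
                "completeness": "Partial" if rank > 0 else "Complete",
                "fallback_used": rank > 0,
                "conflict_details": losers,
            }
    # All sources missing: field is Missing; export blocking is decided
    # downstream based on whether the payer marks the field optional.
    return {"value": None, "source": None, "completeness": "Missing",
            "fallback_used": False, "conflict_details": {}}
```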
Field-Level Completeness and Confidence Flags
- For every mapped field, Given a produced value, When emitting output, Then completeness is one of {Complete, Partial, Missing} and confidence is one of {High, Medium, Low} computed by the configured rule table.
- Given a value derived through unit/scale conversion, When flags are computed, Then confidence is downgraded by one level unless the conversion is exact (e.g., unitless counts).
- Given a field populated from a fallback source, When flags are computed, Then completeness=Partial and reasons[] includes 'fallback_source'.
- Given at least one required field is Missing, When the report summary is requested, Then overall report_status='Has Gaps' and gaps[] includes field_id, reason, and remediation_hint.
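A sketch of the flag computation described above, assuming a simple three-level confidence ladder; in the real engine the rule table would be configuration-driven rather than hard-coded.

```python
CONFIDENCE_LEVELS = ["Low", "Medium", "High"]


def downgrade(confidence: str, steps: int = 1) -> str:
    """Drop confidence one level per step, bottoming out at Low."""
    idx = CONFIDENCE_LEVELS.index(confidence)
    return CONFIDENCE_LEVELS[max(0, idx - steps)]


def compute_flags(base_confidence: str, converted: bool, exact: bool,
                  fallback: bool) -> dict:
    """Inexact conversions cost one confidence level; a fallback source
    marks the field Partial and records the reason."""
    confidence = base_confidence
    reasons = []
    if converted and not exact:
        confidence = downgrade(confidence)
        reasons.append("unit_conversion")
    completeness = "Complete"
    if fallback:
        completeness = "Partial"
        reasons.append("fallback_source")
    return {"completeness": completeness, "confidence": confidence,
            "reasons": reasons}
```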
Versioned Contract for Cover Summary and Export Pipeline
- Given a consumer requests mapping via API v1, When the engine responds, Then the payload conforms to the versioned schema including field_id, label, value, unit, sources[], conversion, completeness, confidence, reasons[], timestamp, payer_profile_id, and contract_version.
- Given a consumer requests an unknown field_id, When the engine validates the request, Then it returns HTTP 400 with error_code='FIELD_UNKNOWN' and does not include a partial mapping for that field.
- Given a payer profile contains a deprecated field with a defined replacement, When mapping executes, Then the replacement field is populated and reasons[] includes 'deprecated_mapped_to:<replacement_id>'.
- Given the cover summary generator requests the 'summary_view' projection, When the engine executes, Then only the documented subset of fields is returned and all items validate against the projection schema.
Low-Latency Execution and Cohort Scalability
- Given a single patient report generation under nominal load, When mapping executes, Then p95 server-side latency ≤ 200 ms and p99 ≤ 350 ms measured over 1,000 requests.
- Given 50 concurrent report generations, When mapping executes, Then p95 latency ≤ 300 ms, error_rate < 0.1%, and no request times out at the configured 2 s timeout.
- Given a batch cohort of 500 patients, When processed on the reference worker pool, Then total mapping time ≤ 25 s and throughput ≥ 20 reports/second sustained for the batch.
Gap Detection and Quick-Fix Workflow
"As a physical therapist, I want the system to flag missing requirements and offer one-click fixes so that I can resolve blockers before submitting to payers."
Description

Validation layer that compares mapped data against the selected payer schema to find missing or insufficient items, classifies them by severity (blocker, warning), and proposes actionable remediations. One-click actions include triggering a patient screener, requesting a clinician attestation, scheduling a reassessment, or generating a templated note. Tracks task completion, re-validates automatically, integrates with MoveMate’s nudge system for patient outreach, and surfaces SLA timers and due dates to prevent delays.

Acceptance Criteria
Blocker Gap Detection Against Payer Schema
Given a clinician selects a payer schema and opens a mapped patient report And the schema includes required and optional rules with thresholds When the validation layer runs Then all missing or insufficient items are listed with their rule IDs, rationales, and affected fields And each item is classified as "blocker" if it prevents submission or "warning" otherwise per schema rules And at least one remediation action is suggested for each item And the results render within 2 seconds for reports under 1,000 records And the summary displays total blockers and warnings
Patient Screener Quick‑Fix with Nudge and Tracking
Given a gap indicates a missing pain screener for the last 7 days When the user clicks "Trigger Screener" Then the patient receives the screener via the preferred channel within 60 seconds And a task is created with a due date per payer SLA And a nudge cadence is scheduled per MoveMate defaults When the patient submits the screener Then the task auto-completes, the data links to the report, and the gap re-validates And if the screener meets the threshold, the gap status becomes Resolved
Clinician Attestation Capture and Resolution
Given a gap requires clinician attestation of home-exercise instruction When the user clicks "Request Attestation" Then a templated attestation prefilled with patient and visit details opens for the assigned clinician And the clinician can e-sign and submit within the app Then the attestation is stored with timestamp and user ID, and an audit trail entry is created And the gap re-validates and resolves if attestation satisfies the rule
Reassessment Scheduling to Satisfy Time‑Based Requirement
Given a gap indicates reassessment due within 3 days per payer SLA When the user clicks "Schedule Reassessment" Then the scheduler opens with the patient and payer window prefilled and suggests earliest compliant times And on confirmation, an appointment is created and patient notification is sent And the gap status changes to "Pending Reassessment" with the appointment date as due date When the reassessment data is captured Then re-validation runs and the gap resolves if criteria are met
Templated Note Generation for Narrative Gap
Given a gap requires a progress note narrative containing goals, response, and plan When the user clicks "Generate Templated Note" Then a note template opens prefilled with latest adherence, form-quality metrics, and safety events And required fields are marked and cannot be submitted empty When the clinician finalizes and saves the note Then the note is attached to the report and the gap re-validates and resolves if all required sections are present
Auto Re‑Validation and Gap Status Update
Given any quick-fix task completes or mapped data changes in fields referenced by open gaps When the system receives the event Then the validation layer re-runs for the affected payer schema within 30 seconds And closed gaps move to the Resolved section with a resolution reason And unresolved gaps remain open with an updated last-validated timestamp and validator version
SLA Timers, Due Dates, and Escalations
Given gaps are present for a report with a payer SLA When the validation results display Then each gap shows a due date, days remaining, and color-coded urgency (green >5 days, amber ≤5 days, red overdue) And blocker gaps include a countdown timer visible in the report header When a due date enters amber or red Then an alert is sent to the assigned clinician, and a manager escalation email is sent on overdue And the dashboard counts of at-risk items update within 5 minutes
Cover Summary Generator
"As a clinician, I want a clear cover summary auto-generated per payer so that reviewers can quickly approve without asking for clarifications."
Description

Automated creation of a concise, payer-aligned cover summary that enumerates each checklist item with pass/fail status, supporting evidence references, date ranges, goals, and objective progress markers. Produces branded PDF and web views, with configurable templates per payer and clinic. Ensures readability constraints (page limits, clear sectioning), includes optional clinician signatures, and embeds deep links or QR codes to underlying evidence where permitted.

Acceptance Criteria
Payer-Aligned Checklist Enumeration
Given a patient episode with mapped data for the selected payer template When the Cover Summary is generated Then 100% of payer checklist items are listed in payer-defined order And each item displays a computed Pass or Fail status And each item includes supporting evidence reference ID(s) and timestamp(s) And each item shows the applicable date range, goals, and objective progress markers And an overall compliance tally (passed/total and percentage) is displayed And the summary includes generation timestamp and payer template version
PDF/Web View Parity with Branding and Page Limits
Given clinic branding is configured and a payer template with a configured page limit When generating both Web and PDF views of the Cover Summary Then both views contain identical content, titles, and section ordering And the PDF includes clinic logo, clinic name, NPI, and footer information And the Web view uses print-friendly styles and the same branding elements And the PDF does not exceed the configured page limit And when content would exceed the limit, auto-condense is applied to collapse non-critical sections while preserving checklist results And if content still cannot fit, a non-blocking warning identifies the overflowing sections by name
Evidence Links/QR Embedding with Permissions
Given payer and clinic policies permit evidence linking and patient consent is recorded When generating the Web view Then each evidence reference renders as a secure deep link with time-limited access And when generating the PDF, a QR code is embedded for each evidence reference that resolves to the same secure URL And when linking is disallowed or consent is missing, references render as non-link evidence IDs with a standard notice And all links/QR codes pass a permission check and are redacted when the check fails
Optional Clinician Signature Capture and Placement
Given the clinician toggles Include Signature for the Cover Summary When the summary is generated Then a signature block displays clinician name, credentials, license/NPI, and signature (typed/drawn/uploaded) with date/time And when co-signature is required, multiple signature blocks appear with Signed or Pending status And when any required signature is Pending, the Web view provides a Request Signature action and the PDF is watermarked Draft per template configuration
Gap Flagging with Quick Fix Loop
Given required data for any checklist item is missing or stale beyond template-defined freshness When the summary is generated Then the affected item shows Fail with a specific reason and the missing fields listed And a Fix action is presented that routes the user to the exact data entry screen for that item And after the missing data is entered and the summary is regenerated, the item status updates according to the new data And an audit log records the change including user, timestamp, and fields updated
Template Configuration Per Payer and Clinic
Given an admin creates or edits a Cover Summary template for a specific payer and clinic When the template is saved and set as default for that payer-clinic combination Then summaries for that payer-clinic use that template by default And patient-case-level template overrides are applied when present And template validation prevents saving when required sections or page-limit settings are invalid And each generated summary records the template ID and version used
Accessibility and Readability Constraints
Given Web and PDF outputs must meet accessibility and readability targets When the summary is generated Then body text is at least 11pt (PDF) and 16px (Web) with minimum contrast ratio 4.5:1 And headings follow a consistent hierarchy with tagged structure in PDF And tables include header rows and accessible labels; QR codes include captions And narrative text scores at or below 10th-grade reading level by Flesch-Kincaid And reading and structure checks pass without critical violations
Evidence Traceability and Audit Trail
"As a compliance officer, I want every claim field to link to verifiable source evidence so that audits are fast and defensible."
Description

Field-level provenance that links every generated value to its source events, measurements, and attestations with timestamps, user attribution, and immutable change logs. Provides an exportable audit packet for payer reviews, enforces access controls, and adheres to retention policies. Implements tamper-evident hashing for generated documents and records generation context (schema version, mapping ruleset) to ensure reproducibility.

Acceptance Criteria
Field-Level Provenance Linkage
Given a generated report with computed fields When a reviewer requests provenance for any field Then the system returns a provenance record that includes: field_id, source_event_id(s), measurement_id(s), attestation_id(s), user_id, user_role, UTC ISO 8601 timestamps, original_source_value(s), transformation_applied, mapping_rule_id, and correlation_ids And provenance is available for 100% of generated fields And 95th-percentile response time for provenance retrieval is <= 2 seconds And missing provenance returns an explicit 404 with field_id and is logged as severity=high
Immutable Change Log and Versioning
Given active production data and configuration When any mapping rule, schema, or generation parameter changes Then an append-only change log entry is created with who, what, when, before/after digests, and justification And prior report versions remain immutable and readable And any attempt to alter or delete past log entries is blocked and logged as a security incident And chain-of-hash integrity validation passes for 100% of log sequences
Tamper-Evident Hashing for Generated Documents
Given a newly generated report and associated artifacts When the system finalizes generation Then a SHA-256 hash is computed for each artifact and stored with algorithm, created_at (UTC), and artifact_id And re-hashing the artifact produces the same hash And modifying any byte causes verification to fail with a clear error and audit entry And a public verification endpoint returns hash, algorithm, and verification status within 1 second
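The hash-and-verify cycle above can be sketched with Python's standard library; `hmac.compare_digest` gives a constant-time comparison so verification itself does not leak timing information.

```python
import hashlib
import hmac


def artifact_hash(data: bytes) -> str:
    """SHA-256 digest stored alongside the artifact at generation time."""
    return hashlib.sha256(data).hexdigest()


def verify_artifact(data: bytes, stored_hash: str) -> bool:
    """Re-hash the artifact and compare; any modified byte fails."""
    return hmac.compare_digest(artifact_hash(data), stored_hash)
```

A failed verification would additionally raise a clear error and write an audit entry in the real system; that plumbing is omitted here.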
Exportable Audit Packet for Payer Review
Given a payer audit request for a report When an authorized user triggers export Then the system generates a single compressed package containing: the report, field-level provenance, source evidence references, mapping ruleset version, schema version, generation timestamp, and access-control manifest And a manifest.json lists all files with byte sizes and SHA-256 checksums And the export completes within 60 seconds for reports under 25 MB of source artifacts And the export action is logged with requester_id, timestamp, and export_checksum
Access Controls and Audit of Access
Given role-based access policies for provenance and audit artifacts When a permitted user attempts to view or export Then access is granted and the event is logged with user_id, role, object_id, action, timestamp, and outcome=granted And when a non-permitted user attempts access Then access is denied with 403 and the event is logged with outcome=denied and reason And cross-tenant access is prevented and verified by automated tests covering at least 3 tenant boundaries
Retention Policy and Legal Hold Enforcement
Given a retention policy of N years and optional legal holds When records exceed N years without legal hold Then they are purged or archived according to policy with a purge log entry per object And records on legal hold are preserved and excluded from purge jobs And scheduled purge jobs run daily and produce a signed summary with counts of deleted, archived, skipped And attempts to delete on-hold records are blocked and logged
Reproducible Regeneration Using Stored Context
Given original input data and stored generation context (schema_version, mapping_ruleset_version, app_build_id, locale) When the report is regenerated Then the regenerated artifacts are byte-identical to the original and hashes match And if any input changed, a new version is produced with a link to the prior version and a machine-readable delta And regeneration requests without complete context fail with 422 listing missing fields And 100% of sampled regenerations (n>=50) pass hash equality in automated tests
Submission Package Builder and Export
"As a billing specialist, I want to export submission packages in formats my payers accept so that I can submit without manual reformatting."
Description

Assembly of payer-specific submission packages that bundle the cover summary, mapped fields, and required attachments. Supports export formats including PDF bundles, payer-portal-friendly JSON/XML where available, and healthcare document standards for attachment submission (e.g., X12 275 where applicable), with secure email/fax fallback and delivery confirmation tracking. Provides file naming conventions, metadata manifests, and integration hooks for EHR/FHIR DocumentReference posting.

Acceptance Criteria
Payer-Specific Package Assembly Completeness Check
Given a patient episode with payer profile X selected and mapped fields and attachment references are available And required attachments for payer X exist in the patient's document library or are marked missing When the user clicks Build Submission Package Then the system assembles a package containing a cover summary, all mapped fields, and all required attachments for payer X And the package validation reports 0 Critical gaps and <= 2 Informational warnings And any missing required element is flagged with a direct fix link to its source And the package build completes within 5 seconds for up to 25 attachments totaling <= 50 MB
PDF Bundle Export (PDF/A) with Section Bookmarks
Given a successfully built package with cover summary and attachments When the user selects Export as PDF Then the system generates a single PDF/A-2b compliant file And the cover summary appears first followed by each attachment in package order And bookmarks are created for the cover summary and each attachment using human-readable titles And page numbers appear on every page And the PDF opens in common viewers without repair warnings And export time is <= 10 seconds for a 50 MB package
Payer-Portal JSON/XML Export with Schema Validation
Given payer X supports machine-readable submission via JSON or XML and a built package is available When the user selects Export as JSON/XML for payer X Then the system outputs a file that validates 100% against payer X's published schema (JSON Schema or XSD) And field names and codes match payer X's specification for adherence, form quality, pain screeners, and safety events And all attachments are referenced by stable IDs with SHA-256 checksums and sizes in a manifest section And the export includes submission metadata (patient ID, episode ID, clinician NPI, payer ID, creation timestamp ISO 8601, software version) And the validation report shows total errors = 0
X12 275 Attachment Package Generation
Given payer X requires HIPAA X12 275 attachments linked to a 278/837 transaction And the package contains at least one attachment and a correlation TRN is available When the user selects Export as X12 275 Then the system generates a 005010X210-compliant 275 transaction with PWK segments per attachment and appropriate identifiers where applicable And binary attachments are base64 encoded and referenced consistently between 275 and the manifest And the file passes validation with an industry-standard X12 validator with 0 errors and 0 severe warnings
Secure Delivery with Email/Fax Fallback and Confirmation Tracking
Given delivery channels are configured for payer X (secure email TLS 1.2+ and fax fallback) And a built package is available When the user sends the submission Then the system prioritizes secure email delivery and records SMTP success with message ID when available And if email delivery definitively fails within 5 minutes, the system automatically initiates fax delivery using the payer's verified fax number And the system records delivery outcome per channel (Queued, Sent, Failed, Confirmed) with timestamps and provider trace IDs And the user can view delivery status updates in-app within 1 minute of state change And all PHI is encrypted in transit and at rest, with no unencrypted temporary files persisting after delivery
FHIR DocumentReference Posting to EHR
Given an EHR integration is configured with a FHIR R4 endpoint, OAuth 2.0 credentials, and a built package When the user posts the submission to the EHR Then the system creates a FHIR DocumentReference with status=current, appropriate type code, subject=patient, author=organization/clinician, and category=administrative And the content array includes the PDF bundle (URL or binary), SHA-256 hash, size, and contentType And the DocumentReference is linked to the correct Encounter/Episode via context.encounter And the POST returns HTTP 201 and the resource is retrievable via GET within 10 seconds And on transient failure (5xx/timeout), the system retries up to 3 times with exponential backoff and logs attempts
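A minimal sketch of the DocumentReference payload, with illustrative IDs and type text. One caveat worth noting: FHIR R4 defines `Attachment.hash` as a base64-encoded SHA-1 digest, so the SHA-256 checksum required elsewhere in this spec would travel in the manifest or a FHIR extension rather than in `Attachment.hash`.

```python
import base64
import hashlib


def build_document_reference(patient_id: str, encounter_id: str,
                             org_id: str, pdf_bytes: bytes,
                             pdf_url: str) -> dict:
    """Minimal FHIR R4 DocumentReference for the submission PDF bundle.
    Type/category codings are placeholders; a real profile would use
    coded concepts from an agreed value set."""
    return {
        "resourceType": "DocumentReference",
        "status": "current",
        "type": {"text": "Payer submission package"},
        "category": [{"text": "administrative"}],
        "subject": {"reference": f"Patient/{patient_id}"},
        "author": [{"reference": f"Organization/{org_id}"}],
        "context": {"encounter": [{"reference": f"Encounter/{encounter_id}"}]},
        "content": [{
            "attachment": {
                "contentType": "application/pdf",
                "url": pdf_url,
                "size": len(pdf_bytes),
                # FHIR Attachment.hash: base64-encoded SHA-1 of the data
                "hash": base64.b64encode(
                    hashlib.sha1(pdf_bytes).digest()).decode("ascii"),
            }
        }],
    }
```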
File Naming and Metadata Manifest Generation
Given org, payer, patient, and episode context are available and a package is built When any export is produced Then the primary file name follows {payerCode}_{patientLastFirst}_{DOB-YYYYMMDD}_{episodeId}_{exportType}_{YYYYMMDDTHHmmssZ}_{shortUID}.{ext} using only A-Z, 0-9, _ and - And a JSON manifest is generated containing: packageId (UUIDv4), fileName, checksums (SHA-256) for each file, sizes in bytes, createdAt (ISO 8601), createdBy (user ID), payerCode, patientId, episodeId, exportType And all attachment filenames are sanitized to the same character set and are unique And checksums in the manifest match the actual files
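The naming convention and sanitization rule above can be sketched as follows. The argument names are illustrative, and the DOB is assumed to arrive already formatted as YYYYMMDD.

```python
import re
import uuid
from datetime import datetime, timezone

# Only A-Z, a-z, 0-9, underscore, and hyphen survive sanitization.
DISALLOWED = re.compile(r"[^A-Za-z0-9_-]")


def sanitize(part: str) -> str:
    """Replace any character outside the allowed set with an underscore."""
    return DISALLOWED.sub("_", part)


def build_filename(payer_code, patient_last_first, dob_yyyymmdd,
                   episode_id, export_type, ext, now=None, short_uid=None):
    """Assemble the primary export file name per the documented pattern."""
    now = now or datetime.now(timezone.utc)
    short_uid = short_uid or uuid.uuid4().hex[:8]
    stamp = now.strftime("%Y%m%dT%H%M%SZ")
    parts = [payer_code, patient_last_first, dob_yyyymmdd,
             episode_id, export_type, stamp, short_uid]
    return "_".join(sanitize(p) for p in parts) + "." + ext
```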
Readiness Dashboard and Notifications
"As a clinic lead, I want a dashboard that shows which cases are ready to submit and which need attention so that we can prioritize work and hit deadlines."
Description

Clinician-facing dashboard widget that aggregates payer-readiness status per patient and case, shows blockers, due dates, and trend indicators, and enables bulk actions to launch quick-fix tasks. Offers filters by payer and clinic location, and sends automated notifications to clinicians and patients when items become due or blocked. Embeds directly into existing MoveMate dashboards to minimize context switching and speed throughput.

Acceptance Criteria
Aggregate Payer Readiness per Patient and Case
Given a patient with one or more active cases and an assigned payer, when the dashboard loads, then the widget displays a readiness status per case with an overall percentage and a status label (Ready, At Risk, Blocked). Given readiness is computed from Criteria Mapper outputs, when the percentage is calculated, then it equals satisfied required items divided by total required items for that payer checklist. Given new adherence, form-quality, pain screener, or safety event data arrives, when up to 5 minutes have passed, then readiness metrics refresh automatically or upon manual refresh. Given data for a required field is missing or older than 24 hours, then the case is marked At Risk with reason "Data stale" and the affected fields are identified. Given no payer mapping exists for a case, then the case displays "Payer mapping required" and is excluded from Ready status counts.
Blockers, Due Dates, and Trend Indicators
Given a case has unmet payer-required items, when the widget renders, then blockers are listed with labels, owner (clinician/patient/system), and next action. Given each blocker has a due date, when displayed, then due dates are color-coded: overdue (red), due within 48h (amber), >48h (neutral). Given daily readiness snapshots exist, when the widget displays trend, then a 7-day trend arrow (up/down/flat) and percentage change are shown. Given a blocker is resolved, when the resolution is saved, then it disappears from the blockers list and readiness updates within 5 minutes.
Filters by Payer and Clinic Location
Given multi-select filters for payer and clinic location, when one or more values are selected, then only matching cases are shown and the result count updates accordingly. Given no filters are selected, when the widget loads, then all cases the user is permitted to view are shown. Given a user changes filter selections, when the page is reloaded within 24 hours, then the selections persist for that user session. Given the user types in the payer filter, when searching a dataset up to 1,000 payers, then matching payers appear via typeahead within 200 ms.
Bulk Quick-Fix Task Launch
Given the user selects multiple cases, when "Launch Quick-Fix" is triggered, then tasks are created per case for each blocker with correct assignee inferred from blocker owner and with due dates matching payer requirements. Given at least one task fails to create, when the operation completes, then an error summary identifies failed cases and reasons while successful tasks remain created. Given tasks are successfully created, when confirmation is shown, then it includes the count created and links to task details for navigation. Given a selected case has no blockers, when bulk action runs, then it is skipped and labeled "No blockers" in the confirmation.
Automated Notifications to Clinicians and Patients
Given a blocker is created or becomes due within the next 24 hours, when notification rules run, then the assigned owner receives a notification (in-app and email) within 10 minutes, respecting clinic quiet hours (no sends 9pm–7am local). Given a blocker becomes overdue, when daily notification time occurs, then a reminder is sent at 9am local each day until resolved or snoozed. Given multiple blockers for the same owner and case occur on the same day, when notifications are dispatched, then they are bundled into a single digest with distinct items. Given a user has opted out of email, when notifications send, then only in-app notifications are delivered and all attempts are logged with timestamp, recipient, channel, and delivery outcome.
Embedded Widget Performance and Accessibility
Given the widget is embedded in the existing MoveMate clinician dashboard, when the dashboard loads on mid-tier hardware and a typical clinic network, then widget time-to-interactive is under 2.0 seconds. Given viewport widths from 320px to 1440px, when the widget is rendered, then layout remains usable without horizontal scrolling and meets WCAG 2.1 AA contrast and keyboard navigation requirements. Given supported browsers (latest stable Chrome, Safari, Edge), when the widget is used, then all core functions operate without critical defects. Given the feature flag is disabled, when the dashboard loads, then the widget does not render and adds no more than 50 ms to page load time.

Evidence Clips

Curates lightweight, timestamped video snippets around key improvements and safety events, with on‑frame annotations for form corrections. Embeds thumbnails in the PDF and attaches originals to the FHIR bundle with secure deep links, delivering convincing proof without bulky uploads or privacy risk.

Requirements

Auto-Clip Generation & Event Triggering
"As a physical therapist, I want automatic short clips captured around key events so that I can quickly see objective improvements and risks without scrubbing full sessions."
Description

Continuously buffer and automatically generate 5–15 second video snippets around detected milestones (e.g., improved range of motion vs. baseline), safety events (e.g., loss of balance), and computer‑vision form error flags during MoveMate exercise sessions. Implement pre/post‑roll capture using a rolling buffer, tag each clip with session ID, exercise ID, rep number, timestamps, event type, and confidence score, and deduplicate overlapping events. Provide configurable trigger rules per protocol (threshold deltas, minimum confidence, cooldowns, max clips per session) and guardrails to avoid over‑capturing. Perform detection on‑device where available for privacy, with fallback to server‑side scoring of pose streams when consented. Persist structured metadata for downstream annotation, review, FHIR packaging, and reporting.

Acceptance Criteria
Rolling Buffer Pre/Post-Roll Clip Capture
Given an active session and a triggerable event at t0 with preRollSec=3, postRollSec=7, clipMinSec=5, clipMaxSec=15, When the system generates a clip, Then clip.startTime = max(session.startTime, t0 - 3s), clip.endTime = min(session.endTime, t0 + 7s), and clip.duration is between 5s and 15s inclusive. Given the rolling buffer holds at least preRollSec of frames, When an event fires, Then the clip includes pre-roll frames up to preRollSec unless trimmed by session start. Given t0 occurs within postRollSec of session end, When a clip is generated, Then the clip endTime equals session.endTime and the duration remains within [clipMinSec, clipMaxSec].
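The clamping arithmetic above can be expressed directly; times are seconds from an arbitrary epoch, and the defaults mirror the example parameters in the criteria.

```python
def clip_window(t0, session_start, session_end,
                pre_roll=3.0, post_roll=7.0,
                clip_min=5.0, clip_max=15.0):
    """Clamp the pre/post-roll window to the session bounds and check the
    resulting duration against the configured min/max. Returns None when
    the clip would be suppressed for an out-of-bounds duration."""
    start = max(session_start, t0 - pre_roll)
    end = min(session_end, t0 + post_roll)
    duration = end - start
    if duration < clip_min or duration > clip_max:
        return None  # suppressed: duration outside [clip_min, clip_max]
    return {"start": start, "end": end, "duration": duration}
```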
Configurable Trigger Rules and Thresholds
Given a protocol with minConfidence=0.80, romDeltaThreshold=10 deg, cooldownSec=5, maxClipsPerSession=6, When an improvement candidate has confidence<0.80 or romDelta<10 deg, Then no clip is created and the candidate is logged as suppressed with reason=threshold_not_met. Given a candidate meets all thresholds and no cooldown is active for its eventType, When processed, Then a clip is created and a cooldown for that eventType is started for 5s. Given a session has produced 6 clips and maxClipsPerSession=6, When additional candidates arrive, Then no further clips are created and each is logged as suppressed with reason=max_reached.
Clip Metadata Tagging and Persistence
Given a clip is generated, When metadata is persisted, Then it includes non-empty sessionId, exerciseId, clipId, eventType in {improvement, safety, form_error}, repNumber (integer or null), confidence in [0,1], eventTimestamp, clipStart, clipEnd in ISO 8601 UTC. Given a clip is generated, When retrieved by sessionId via the Clips API, Then the clip record is returned and all persisted fields match the stored values. Given repNumber is not resolved at trigger time, When metadata is persisted, Then repNumber is null and no placeholder value is used.
Event Deduplication and Prioritization
Given two or more event candidates produce overlapping pre/post windows, When processing triggers, Then only a single clip is created for the overlapping interval. Given overlapping candidates have different confidences and eventTypes, When selecting the primary event for the clip metadata, Then the candidate with highest confidence is chosen; if tied, prefer safety over form_error over improvement; if still tied, prefer the earliest eventTimestamp. Given deduplication is applied, When persistence occurs, Then only one media asset is stored and suppressed candidates are logged with reason=deduplicated.
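The selection rule (highest confidence, then safety > form_error > improvement, then earliest timestamp) maps naturally onto a single composite sort key; the field names are illustrative.

```python
# Lower number = higher priority when confidences tie.
TYPE_PRIORITY = {"safety": 0, "form_error": 1, "improvement": 2}


def primary_event(candidates):
    """Choose the single primary event for an overlapping clip window:
    highest confidence first, then event-type priority, then earliest
    event timestamp as the final tie-breaker."""
    return min(
        candidates,
        key=lambda c: (-c["confidence"],
                       TYPE_PRIORITY[c["event_type"]],
                       c["timestamp"]),
    )
```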
On-Device Detection with Consent-Based Server Fallback
Given the device supports on-device detection and consentForServerProcessing=false, When a session runs, Then all detection and triggering occur on-device and no pose or video data is transmitted off-device. Given the device lacks on-device detection and consentForServerProcessing=true with network connectivity, When a session runs, Then pose streams (not raw video) are transmitted to the server for scoring and clips are triggered from server scores. Given consentForServerProcessing=false and no on-device capability, When a session runs, Then no off-device processing occurs and no clips are created, and a notice is recorded for clinician review. Given a clip is generated, When metadata is persisted, Then detectionMode is set to 'on_device' or 'server' corresponding to the processing path used.
Over-Capture Guardrails and Session Limits
Given a protocol with cooldownSec=5 and maxClipsPerSession=6, When 20 form_error candidates occur within 10s, Then no more than 6 clips are created and no two clips for form_error start within 5s of each other.
Given a single safety event yields multiple above-threshold candidates within 1s, When processed, Then exactly one clip is created for that safety event.
Given guardrails suppress candidates, When session metrics are computed, Then counts of created and suppressed events by reason (threshold_not_met, cooldown_active, max_reached, deduplicated) are available for reporting.
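The cooldown and session-cap guardrails amount to a small stateful throttle; a hypothetical sketch (class and method names are ours) that also returns a suppression reason for the per-reason metrics required above:

```python
class ClipGuardrail:
    """Per-session throttle: a per-eventType cooldown plus a session-wide cap.

    try_create returns (created, reason) so suppressed candidates can be
    tallied by reason for session metrics.
    """

    def __init__(self, cooldown_sec: float, max_clips_per_session: int):
        self.cooldown_sec = cooldown_sec
        self.max_clips = max_clips_per_session
        self.created = 0
        self.last_start = {}  # eventType -> start time (seconds) of last clip

    def try_create(self, event_type: str, t: float):
        if self.created >= self.max_clips:
            return False, "max_reached"
        last = self.last_start.get(event_type)
        if last is not None and t - last < self.cooldown_sec:
            return False, "cooldown_active"
        self.created += 1
        self.last_start[event_type] = t
        return True, "created"
```

With cooldownSec=5, 20 form_error candidates at 0.5 s spacing yield clips only at t=0 and t=5.0, comfortably under the 6-clip cap.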
On‑Frame Annotations & Metric Overlays
"As a clinician, I want on-frame annotations that highlight the exact form issue and metrics so that I can explain corrections clearly and justify plan changes."
Description

Render time‑synced, non‑destructive overlays on clips to highlight form corrections and improvements, including skeleton lines, joint angle measurements, arrows, and text callouts (e.g., "Knee valgus 8° → 3°"). Store overlays as a separate JSON track to enable re‑rendering for different outputs (mobile, web, PDF thumbnails) and preserve originals. Provide branded, accessible styles (color‑blind safe palette, readable fonts), auto‑position labels to avoid occlusions, and include auto‑generated captions referencing rep number and metric deltas. Support clinician edits during review (add/remove callouts, tweak frames) without re‑encoding the source video.

Acceptance Criteria
Time‑Synced Non‑Destructive Overlay Rendering
Given a recorded exercise clip and an associated overlay track When the clip is played back in the viewer Then every overlay element appears/disappears within ±33 ms of its target timestamp at 30 fps And the original video file’s checksum (SHA‑256) remains unchanged before and after rendering And the user can toggle overlays on/off without affecting playback smoothness (≥30 fps on test device) And export of the clip with burn‑in overlays produces a separate derived file while preserving the original source asset
Versioned Overlay JSON Track for Re‑Rendering Across Outputs
Given an overlay JSON saved with schema version v1 When rendering for mobile (1080×1920), web (1280×720), and PDF thumbnail (width 320 px) Then all overlay positions/paths scale correctly with absolute drift ≤ max(2 px, 1% of dimension) And the visual content (elements, text, colors) is consistent across outputs And the renderer can re‑generate outputs solely from the JSON plus the original video (no re‑encode of source) And invalid/missing JSON fields are reported with a structured error code and do not crash the renderer
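Scaling overlay coordinates from the source frame to each output, and checking the drift budget of max(2 px, 1% of the dimension), can be sketched as follows (function names are illustrative):

```python
def scale_point(x, y, src_size, dst_size):
    """Map a source-frame overlay coordinate into a target output.

    Overlay JSON stores coordinates against the source frame; each renderer
    (mobile, web, PDF thumbnail) rescales them so nothing is baked into the
    video and the original asset never needs re-encoding.
    """
    sw, sh = src_size
    dw, dh = dst_size
    return x * dw / sw, y * dh / sh

def within_drift(expected, actual, dim):
    """Acceptance tolerance: absolute drift <= max(2 px, 1% of the dimension)."""
    return abs(expected - actual) <= max(2.0, 0.01 * dim)
```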
Annotation Types Coverage and Metric Accuracy
Given a test clip with known pose landmarks and angle ground truth When overlays are generated Then skeleton lines for all tracked joints are rendered with ≥95% joint visibility across frames And joint angle readouts match ground truth with mean absolute error ≤ 2.0° and max error ≤ 5.0° And directional arrows and text callouts anchor to the intended joints/segments within ≤5 px And text callouts support formatted deltas (e.g., “Knee valgus 8° → 3°”) with correct units and rounding (1 decimal place)
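The joint-angle readouts are typically the angle at a middle landmark between two segments; a minimal 2D sketch (pose landmarks as (x, y) tuples; the production pipeline may work in 3D):

```python
import math

def joint_angle_deg(a, b, c):
    """Angle at joint b, in degrees, formed by segments b->a and b->c.

    For example, hip-knee-ankle landmarks give knee flexion.
    """
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1]
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))
```

Reported values would then be rounded to one decimal place to match the callout formatting rule above.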
Accessible Branded Styles Compliance
Given the default branded overlay theme When overlays are rendered on light and dark video backgrounds Then text and key lines meet WCAG AA contrast (≥4.5:1 for normal text, ≥3:1 for large text ≥18 pt/14 pt bold) And the color palette passes color‑vision deficiency simulations (Deuteranopia/Protanopia/Tritanopia) with categorical distinguishability (ΔE ≥ 20) And font sizes are ≥14 sp on mobile and ≥12 pt on web/PDF with line height ≥1.2 And non‑text annotations have textual equivalents in captions or alt text for PDF thumbnails
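The WCAG AA checks can be automated with the standard relative-luminance formula; a sketch for sRGB colors given as 0–255 tuples:

```python
def _channel(c):
    # sRGB channel (0-255) to linear-light value, per the WCAG 2.x definition
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio (L1 + 0.05) / (L2 + 0.05), lighter over darker."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

A theme color passes for normal text when `contrast_ratio(fg, bg) >= 4.5` and for large text when it is at least 3.0; black on white yields the maximum ratio of 21:1.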
Auto‑Positioned Labels Avoid Occlusion
Given frames containing subject skeleton and face bounding box When label auto‑placement runs Then labels do not overlap keypoints, face box, or measurement markers and maintain ≥8 px padding And collision resolution achieves zero label‑to‑label overlap in ≥99% of frames on the test set And for frames where no non‑occluding space exists, labels render with leader lines without covering key anatomy And average auto‑placement compute time ≤10 ms per frame on target device
Auto‑Generated Captions with Rep and Metric Deltas
Given a set of detected reps with per‑rep metrics When captions are generated Then each rep produces a caption in the format “Rep N: <metric> <start> → <end> (Δ<delta>)” And caption cue start/end times align with rep boundaries within ±100 ms And captions export to WebVTT and SRT and embed as alt text for PDF thumbnails And captions are locale‑ready with placeholders supporting at least en‑US and es‑ES
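The caption format "Rep N: &lt;metric&gt; &lt;start&gt; → &lt;end&gt; (Δ&lt;delta&gt;)" with one-decimal rounding can be sketched as follows (the signed delta and default degree unit are our assumptions):

```python
def rep_caption(rep_number, metric, start, end, unit="°"):
    """Render 'Rep N: <metric> <start> → <end> (Δ<delta>)' with 1-decimal rounding."""
    delta = round(end - start, 1)
    return (f"Rep {rep_number}: {metric} {start:.1f}{unit} → {end:.1f}{unit} "
            f"(Δ{delta:+.1f}{unit})")
```

For localization, the template string itself would come from a locale bundle (en-US, es-ES) with the same placeholders.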
Clinician Editing Without Re‑Encoding
Given a clinician opens a clip with overlays in review mode When they add, remove, or adjust overlay elements or keyframes Then the system saves a new overlay JSON version with a timestamped revision id And the source video file hash remains unchanged and no transcode job is triggered And updated renders reflect edits within ≤2 s for a 30 s clip And undo/redo of the last 20 edit actions is supported within the same session
Privacy Redaction & Consent Controls
"As a clinician responsible for privacy, I want automatic redaction and consent controls so that evidence can be shared and stored securely without exposing PHI."
Description

Apply privacy‑preserving processing to all candidate clips: face detection and blurring for patient and bystanders, background blur or crop to region of interest, and automatic audio removal unless explicitly retained. Gate clip generation and sharing behind patient consent captured and stored as a FHIR Consent resource with purposes (care, payment, operations) and retention settings. Enforce a policy engine to disable clips for restricted contexts (e.g., minors, location rules) and to restrict server‑side processing when not consented. Keep candidate clips on‑device and encrypted until clinician approval; encrypt at rest and in transit; minimize retention of raw buffers; and log access for audit.

Acceptance Criteria
Face and Bystander Blur Applied to All Clips
Given a candidate clip is generated on-device for an exercise When frames are processed Then all detected faces (patient and bystanders) are blurred in every frame with a blur radius >= 16px covering the face bounding box plus >= 10px margin.
Given a new face appears mid-clip When the frame containing the new face is processed Then that face is blurred in that frame and all subsequent frames.
Given face detection confidence for a region falls below 0.6 When processing that frame Then conservative blurring is applied to the suspected region to prevent exposure.
Given the final clip is validated When a secondary face detector (IoU >= 0.2 to redaction boxes) scans all frames Then zero unblurred faces are detected.
Given clip metadata is generated When inspected Then it records redaction=true and faceRedactedCount >= 1 if any faces were present.
Background Obfuscation to Region of Interest
Given an exercise with a computed region of interest (ROI) When generating a clip Then areas outside the ROI are either blurred with sigma >= 12 or cropped out while keeping the entire ROI visible in all frames.
Given OCR analysis runs on non‑ROI pixels When evaluating the output clip Then no text outside ROI is readable (OCR confidence <= 0.6 across all frames).
Given background blur is enabled by default When a clinician changes the setting to crop Then face/bystander blurring remains enforced and cannot be disabled.
Given identifiable background details are present When the clip is processed Then personally identifying details outside the ROI are not legible in the output.
Default Audio Removal with Explicit Retention Control
Given a candidate clip without explicit patient consent for audio retention When the clip is generated Then the output file contains no audio stream and the container audio track count equals 0.
Given explicit consent for audio retention exists for the current purpose When the clip is generated Then the original audio is preserved and clip metadata includes audioRetained=true and a consentId reference.
Given a clinician attempts to enable audio without appropriate consent When saving the clip Then the action is blocked with an inline error and no audio is saved.
Given consent permits audio for purpose=care only When attempting to share the clip for purpose=payment Then sharing with audio is blocked and requires new consent.
Consent-Gated Clip Generation and Sharing (FHIR Consent)
Given there is no active FHIR Consent resource for the patient covering purposes care|payment|operations with a retention setting When a user attempts to generate or share a clip Then the action is blocked and no clip leaves the device.
Given an active FHIR Consent exists with purposes including the requested purpose and a retention period When generating or sharing a clip Then the action succeeds and the clip metadata stores a reference to Consent.id and purpose.
Given consent is revoked or expires When deep links or shares are used after revocation/expiry Then access is disabled within 5 minutes and links return 401/403.
Given a clip is exported in a FHIR Bundle When inspected Then the bundle includes the referenced Consent resource and the clip attachment references it.
Policy Engine Blocks Restricted Contexts
Given patient.age < 18 per demographic data When attempting to generate or share a clip Then the policy engine denies the action with errorCode=POLICY_MINOR and no clip is created.
Given device location or tenant policy matches a restricted region When attempting server-side processing Then processing is denied (HTTP 403, errorCode=POLICY_LOCATION) and no data is uploaded.
Given no consent for server-side processing When a workflow attempts to invoke server services Then the request is blocked and on-device processing is used or the action is aborted.
Given a policy decision is made When viewing audit logs Then the decision is recorded with ruleId, subject (patientId), actor, action, and outcome.
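The first three policy rules can be sketched as a pure decision function; POLICY_NO_SERVER_CONSENT is a hypothetical error code (the spec names only POLICY_MINOR and POLICY_LOCATION), and the restricted-region set stands in for tenant configuration:

```python
RESTRICTED_REGIONS = {"region_x"}  # placeholder for tenant/location policy config

def evaluate_policy(patient_age, region, server_requested, server_consented):
    """Return (allowed, error_code) for a clip generation/sharing request.

    Minors are denied outright; restricted regions and missing consent block
    server-side processing. error_code is None when the request is allowed.
    """
    if patient_age < 18:
        return False, "POLICY_MINOR"
    if server_requested and region in RESTRICTED_REGIONS:
        return False, "POLICY_LOCATION"
    if server_requested and not server_consented:
        return False, "POLICY_NO_SERVER_CONSENT"  # hypothetical code
    return True, None
```

Each returned tuple would be written to the audit log alongside ruleId, subject, actor, and action as required above.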
On‑Device Processing and Encryption Until Clinician Approval
Given a candidate clip is created When the clinician has not yet approved it Then the clip remains only on-device and no upload/network egress occurs.
Given the clip is stored on-device When inspecting storage Then the file is encrypted at rest using the platform secure keystore and is unreadable without app context.
Given the clinician approves the clip When upload occurs Then transfer uses TLS 1.2+ and succeeds only after approval is recorded.
Given raw camera/sensor buffers are generated during processing When processing completes or is canceled Then raw buffers are zeroized and deleted within 60 seconds.
Given the app crashes before approval When the device restarts Then the encrypted candidate clip remains inaccessible without re-authentication.
Encryption, Retention Minimization, and Audit Logging
Given clips are stored server-side When inspecting storage configuration Then data is encrypted at rest with AES-256 equivalent and keys are managed by a KMS with rotation <= 90 days.
Given any clip or consent data is transmitted When inspected via security tests Then transport uses TLS 1.2+ with HSTS enabled and rejects weak ciphers.
Given raw buffers and intermediate files exist server-side When scheduled retention jobs run Then raw buffers are purged within 24 hours and final assets follow the consent retention period.
Given a deep link is created for sharing When the link expires per policy Then it becomes unusable and returns 401/403 and is logged.
Given any access (create/view/share/download/delete) of a clip occurs When querying audit logs Then an immutable record exists with timestamp, userId, patientId, clipId, consentId, purpose, action, outcome, and ip, retrievable within 2 seconds for the last 30 days.
PDF Thumbnail Embedding & Report Layout
"As a clinician, I want thumbnails embedded in the PDF with captions and links so that I can share concise evidence in reports without creating oversized files."
Description

Generate high‑quality stills from each approved clip (e.g., peak angle frame, moment of safety event) and produce captioned thumbnails with alt text, event icons, and timecodes. Embed thumbnails into MoveMate’s existing PDF outcome reports within a dedicated Evidence Clips section, maintaining report size budgets (e.g., ≤1 MB additional per report) via smart compression and capped thumbnail count with overflow to an appendix. Make thumbnails clickable in digital PDFs via link annotations or QR codes that open secure deep links to the original clip. Provide layout rules for grid placement, caption truncation, page breaks, and localization of labels.

Acceptance Criteria
Peak/Safety Frame Thumbnail Generation
Given an approved clip with a peak-angle annotation at t=1.23s When thumbnails are generated for the report Then a still frame is extracted at t=1.23s ± 40ms and saved at a minimum long-edge of 640 px in sRGB color.
Given an approved clip with a safety-event annotation at t=2.50s When thumbnails are generated for the report Then the still frame corresponds to the safety-event timestamp within ±40ms.
Given multiple peak-angle annotations exist When selecting the representative frame Then the frame with the maximum measured angle value is used.
Given the annotated frame is significantly blurred (variance of Laplacian < 100) When generating the thumbnail Then the nearest sharper frame within ±3 frames is substituted; otherwise the annotated frame is used.
Given a batch of 20 thumbnails When generation runs on the standard worker Then the average generation time per thumbnail is ≤500 ms and no single thumbnail exceeds 1,500 ms.
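The blur gate uses the variance of the Laplacian response; a dependency-free sketch for a grayscale frame given as a list of rows (production code would use OpenCV's `cv2.Laplacian` on the actual frame buffer):

```python
def laplacian_variance(gray):
    """Sharpness score for a grayscale frame (rows of 0-255 ints).

    Convolves interior pixels with the 4-neighbour Laplacian kernel and
    returns the variance of the response; low values indicate blur.
    """
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1]
                   + gray[y][x + 1] - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

A frame scoring below the spec's threshold of 100 would trigger the ±3-frame search for a sharper substitute.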
Captions, Icons, Alt Text, and Timecodes
Given a generated thumbnail When composing its caption Then the caption includes the localized clip title, event label, and timecode in HH:MM:SS format and is truncated to ≤120 characters with an ellipsis if longer.
Given a generated thumbnail When setting accessibility fields Then non-empty alt text is embedded containing the localized event label and timecode.
Given a thumbnail for a safety event When rendering the overlay icon Then a warning icon of 24 px (at 150–300 dpi effective) appears in the top-left corner with a minimum contrast ratio of 3:1 against the background.
Given a thumbnail with on-frame text annotations When rendering the caption and overlays Then all text elements meet a contrast ratio of ≥4.5:1 and do not occlude key joints or markers by keeping a 12 px padding from detected landmarks.
Evidence Clips Section Embedding
Given at least one approved clip exists When generating the PDF Then a dedicated section titled with the localized label for "Evidence Clips" is added and included in the document outline/bookmarks.
Given no approved clips exist When generating the PDF Then the Evidence Clips section is omitted entirely without leaving blank space.
Given thumbnails are embedded in the section When viewing the PDF Then each thumbnail appears with its caption and alt text, and the section maintains 12 pt outer margins and 8 pt inner gutters without overlapping other report elements.
Report Size Budget and Compression
Given a baseline PDF generated without the Evidence Clips section When the Evidence Clips section is added Then the total file size increase is ≤1.0 MB (1,048,576 bytes).
Given thumbnails exceed the initial size budget at default quality When adaptive compression is applied Then all thumbnails are recompressed to meet the ≤1.0 MB budget while maintaining a minimum SSIM of 0.92 per thumbnail.
Given the ≤1.0 MB budget cannot be met even at the minimum allowed quality When finalizing the PDF Then the generator reduces included thumbnails in descending priority order (safety events before improvements) until the budget is met and records the reduction in generation logs.
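The budget fallback — drop improvements before safety events until the section fits — can be sketched as follows (the field names and the drop order within a priority tier are our assumptions; the spec fixes only the safety-before-improvement rule):

```python
PRIORITY = {"safety": 0, "improvement": 1}  # lower number = kept longer

def fit_to_budget(thumbnails, budget_bytes):
    """Drop lowest-priority thumbnails until the total size fits the budget.

    thumbnails are dicts with 'eventType' and 'bytes'; the sort is stable,
    so ties within a priority tier keep their original order.
    Returns (kept, dropped).
    """
    kept = sorted(thumbnails, key=lambda t: PRIORITY.get(t["eventType"], 9))
    dropped = []
    while kept and sum(t["bytes"] for t in kept) > budget_bytes:
        dropped.append(kept.pop())  # lowest-priority item is last after the sort
    return kept, dropped
```

The `dropped` list is what the generator would write to its reduction log.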
Thumbnail Overflow to Appendix
Given MAX_THUMBNAILS_PER_REPORT = 12 (configurable) When there are more than 12 approved thumbnails Then the first 12 appear in the Evidence Clips section and all remaining thumbnails are placed in an automatically generated appendix titled with the localized label for "Evidence Clips Appendix".
Given thumbnails overflow to the appendix When rendering the main section Then a notice line "See Appendix for X more clips" appears, with X equal to the overflow count.
Given thumbnails are placed in the appendix When viewing the appendix pages Then thumbnails use the same layout, captions, icons, and links as the main section and page numbering continues the document sequence.
Clickable Thumbnails and QR Codes to Secure Deep Links
Given a digital PDF viewer that supports link annotations When a user clicks a thumbnail Then a secure deep link (HTTPS) opens the MoveMate viewer at the exact clip and timestamp within ±0.2 s.
Given a deep link token older than 7 days from report generation When the link is requested Then the service responds with HTTP 403 and a user-friendly expiration message.
Given a printed or non-interactive PDF scenario When a user scans the QR code rendered beneath a thumbnail Then the same deep link opens successfully; the QR code is at least 20 mm square and decodes reliably at 150 dpi print in ≥9/10 scans under normal indoor lighting.
Layout Grid, Page Breaks, and Localization
Given the PDF page size is US Letter or A4 in portrait When laying out thumbnails Then a 2-column grid with 8 pt gutters is used; for page widths < 500 pt, a 1-column layout is used.
Given a row of thumbnails would be split by a page break When paginating Then the entire row moves to the next page and captions remain attached to their thumbnails without orphaning.
Given long captions When rendering within the grid Then captions are limited to 2 lines at 10 pt with ellipsis and the full text remains available via the PDF alt text.
Given the application locale is set to en-US or es-ES When generating labels for section titles, event labels, and notices Then the corresponding localized strings are used consistently throughout the Evidence Clips section and appendix; if a translation is missing, en-US is used as a fallback.
Secure Deep Links & FHIR Packaging
"As a health IT admin, I want clips attached to the FHIR bundle with secure expiring deep links so that downstream systems can access evidence compliantly."
Description

Attach approved clips to the patient’s FHIR Bundle using Media resources that reference Binary payloads, and include DocumentReference entries for the PDF report. Issue patient‑scoped deep links with signed, expiring tokens (short TTL, refresh on authenticated access) and support revocation and re‑issuance. Store originals in encrypted media storage with object‑level ACLs; include event metadata in FHIR extensions (rep number, metric deltas, confidence). Enforce access control via clinician/patient roles, log all retrievals for audit, and validate that external EHRs can ingest bundles and resolve links within permitted windows.

Acceptance Criteria
FHIR Bundle: Media/Binary and PDF DocumentReference Integrity
Given an approved evidence clip and PDF report for a patient When the system builds the patient FHIR R4 Bundle Then the Bundle includes one Media resource per approved clip with status=completed, subject referencing the correct Patient, and content.attachment.url or data populated And each Media.content.attachment.url resolves to a Binary resource in the same Bundle or an allowed external Binary url And each Binary has the correct contentType for the clip and either inline data or a resolvable secure url And the Bundle includes one DocumentReference for the PDF report with content.attachment.contentType=application/pdf and subject referencing the Patient And all inter-resource references resolve with zero FHIR validator errors and no missing references
Deep Links: Signed Tokens with Short TTL and Refresh on Authenticated Access
Given a deep link is generated for a patient-scoped clip or PDF When the link is issued Then the token is cryptographically signed and scoped to the patient and the target resource id And the token time-to-live is less than or equal to 15 minutes from issuance And requests made after expiry receive 401 or 403 without leaking resource existence And an authenticated user accessing a valid link receives a refreshed link with a new expiry while preserving the original authorization scope And each access event is recorded with timestamp, user id (if present), patient id, resource id, IP, and user agent
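A signed, patient- and resource-scoped token with a capped TTL can be sketched with HMAC; this is an illustration only (a real deployment would use KMS-managed keys and a standard format such as JWT rather than this ad hoc scheme):

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-signing-key"  # illustration only; production keys live in a KMS

def issue_token(patient_id, resource_id, ttl_sec=900, now=None):
    """Mint a signed token scoped to one patient and one resource (TTL <= 15 min)."""
    now = time.time() if now is None else now
    claims = {"pat": patient_id, "res": resource_id, "exp": now + min(ttl_sec, 900)}
    payload = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SECRET, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()

def verify_token(token, patient_id, resource_id, now=None):
    """Accept only an untampered, unexpired token with matching scope."""
    now = time.time() if now is None else now
    payload, _, sig = token.encode().partition(b".")
    expected = base64.urlsafe_b64encode(hmac.new(SECRET, payload, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return (claims["pat"] == patient_id and claims["res"] == resource_id
            and claims["exp"] > now)
```

On authenticated access, the service would call `issue_token` again to produce the refreshed link with a new expiry, preserving the original scope.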
Deep Link Revocation and Re‑issuance
Given an active deep link token exists for a resource When a clinician or patient revokes access Then the revoked token becomes unusable within 60 seconds across caches and CDNs And subsequent requests using the revoked token receive 403 And re-issuing access creates a new token with a unique identifier and later expiry And the audit log records the revocation action and any post-revocation access attempts
Encrypted Storage and Object‑Level ACLs for Originals
Given original video clips are written to media storage When the object is stored Then server-side encryption at rest is enabled with managed keys (e.g., KMS) using AES-256 or stronger And object-level ACLs restrict read access to the application service principal only And unauthenticated or public requests to the object endpoint return 403 And bucket or container listing is disabled for anonymous principals And TLS 1.2 or higher is enforced for all in-transit retrievals
Media Event Metadata via FHIR Extensions
Given a Media resource represents an evidence clip with event metadata When the Media is added to the Bundle Then the Media contains extensions conveying rep number, metric deltas, and confidence And rep number is represented as an integer extension with valueInteger >= 1 And each metric delta is represented as an extension with a coded metric identifier and valueQuantity (with unit and system) or valueDecimal And confidence is represented as valueDecimal in the inclusive range [0.0, 1.0] And the extension URLs are canonical, non-conflicting, and pass FHIR validation
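The extension payload can be sketched as a builder that enforces the stated value constraints; the extension URLs and the nested metric/delta structure are illustrative placeholders, not canonical MoveMate URIs:

```python
# Placeholder base for canonical extension URLs (not a published profile).
EXT_BASE = "https://example.org/fhir/StructureDefinition"

def media_event_extensions(rep_number, confidence, deltas):
    """Build the Media.extension array for rep number, confidence, metric deltas.

    deltas maps a coded metric id to (value, unit); UCUM is assumed as the
    unit system for valueQuantity.
    """
    if not (isinstance(rep_number, int) and rep_number >= 1):
        raise ValueError("rep number must be an integer >= 1")
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0.0, 1.0]")
    ext = [
        {"url": f"{EXT_BASE}/rep-number", "valueInteger": rep_number},
        {"url": f"{EXT_BASE}/confidence", "valueDecimal": confidence},
    ]
    for code, (value, unit) in deltas.items():
        ext.append({
            "url": f"{EXT_BASE}/metric-delta",
            "extension": [
                {"url": "metric", "valueCode": code},
                {"url": "delta", "valueQuantity": {
                    "value": value, "unit": unit,
                    "system": "http://unitsofmeasure.org"}},
            ],
        })
    return ext
```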
Role‑Based Access Control and Permitted Window Enforcement
Given a user attempts to access a clip or PDF via deep link or application UI When the user is authenticated as a patient Then access is allowed only to resources where subject matches the patient’s identifier and within the permitted time window And when the user is authenticated as a clinician, access is allowed only for patients on the clinician’s assigned panel And unassigned clinicians and unauthenticated users receive 403 or 401 respectively And access outside the permitted window is denied with 403 even if a token is presented And all retrievals are logged with role, decision, and authorization context
External EHR Ingestion and Link Resolution
Given the patient FHIR Bundle is sent to an external FHIR R4 endpoint When the endpoint validates and ingests the Bundle Then the endpoint returns a 2xx status and persists Media, Binary, and DocumentReference entries And the external system can resolve deep links within the token window and receives 200 And the same links return 401 or 403 after expiry or revocation And the Bundle validates with zero errors against FHIR R4 and any targeted implementation profiles And end-to-end tests pass against at least two distinct FHIR R4 test endpoints
Clinician Review & Approval Workflow
"As a clinician, I want a review queue to approve and curate clips so that only clinically relevant, accurate evidence is shared with patients and payers."
Description

Provide a review queue listing auto‑generated candidate clips with key metadata (event type, metrics, confidence) and preview playback with overlays. Enable approve/reject, edit captions, toggle overlays, reorder for report placement, and tag as "Safety" or "Improvement" to drive report sections and patient nudges. Support bulk actions, keyboard shortcuts, and version history for changes. On approval, trigger PDF thumbnail generation and FHIR packaging; on rejection, purge media per policy. Record reviewer identity and timestamps for audit and analytics.

Acceptance Criteria
Review Queue Shows Candidate Clips with Key Metadata
Given a clinician with permission to review Evidence Clips is logged in When they open the Review Queue Then the queue lists auto-generated candidate clips from the clinician’s patients captured in the last 30 days by default And each row displays: thumbnail, patient name, exercise name, capture timestamp (local), event type, primary metrics, model confidence (0–100%), and clip duration And the list loads at least 50 items within 2 seconds on a typical broadband connection And the list supports filtering by patient, exercise, event type, and status (New/Approved/Rejected) and sorting by capture timestamp and confidence And selecting a row highlights it and enables preview
Clip Preview with Overlays and Caption Editing
Given a candidate clip is selected When the clinician clicks Preview Then the video plays in-line with frame-accurate scrubbing and playback controls And on-frame annotations/overlays are visible by default and can be toggled on/off And the overlay toggle state persists per clip for the current session And the caption field is editable, enforces a 300-character limit, and prevents saving empty captions And clicking Save Caption persists the update within 1 second and records a version entry for the caption change
Approve Action Generates Artifacts and Updates Status
Given a selected clip in New status When the clinician clicks Approve or presses A Then the clip status changes to Approved and the item is marked accordingly And a PDF thumbnail image is generated and queued for inclusion in the next report And the original clip is packaged into a FHIR DocumentReference with a secure, time-bound deep link that requires authenticated access and expires within 24 hours unless refreshed by an authorized user And the background job reports completion within 2 minutes or surfaces an actionable error message with retry And the approval action, reviewer identity, and timestamp are recorded in the audit log
Reject Action Purges Media and Excludes from Reports
Given a selected clip in New status When the clinician clicks Reject or presses R and confirms Then the clip status changes to Rejected and it is removed from any pending report payloads And no FHIR resources or PDF thumbnails are generated for the clip And all associated media and derived assets are purged from transient storage per policy within 15 minutes, with purge success recorded And the rejection action, reviewer identity, and timestamp are recorded in the audit log
Tagging and Reordering Drive Report Sections and Placement
Given an approved clip is visible in the review queue When the clinician tags it as Safety or Improvement Then the selected tag is saved and displayed as a badge on the item And the tag determines the section placement in the generated PDF report (Safety section vs Improvement section) And when the clinician reorders approved clips via drag-and-drop, the new order is persisted and used in the PDF thumbnail layout And approving a tagged clip triggers the corresponding patient nudge template within 15 minutes
Bulk Actions and Keyboard Shortcuts
Given multiple New clips are visible When the clinician multi-selects up to 100 clips Then they can bulk Approve, Reject, and Tag (Safety/Improvement) the selection And bulk actions display progress and report partial failures with per-item error messages And the following shortcuts are available and discoverable via a help overlay: A Approve, R Reject, E Edit caption, O Toggle overlays, J Next, K Previous, Cmd/Ctrl+Click multi-select And shortcuts work without overriding essential browser defaults and are announced to assistive technologies
Version History and Audit Logging
Given any change to a clip’s caption, tag, approval status, or order When the clinician opens Version History for that clip Then a chronological list shows each change with before/after values, reviewer identity, and UTC timestamp And history entries are immutable and cannot be edited or deleted by end users And the audit log is queryable for analytics by user, action type, and date range And admins can export the history to CSV
Storage, Performance, and Sync Reliability
"As a mobile user, I want uploads to be fast, small, and resilient so that clips sync reliably even on poor networks and don’t drain my device."
Description

Transcode clips to efficient codecs (H.264/HEVC) with targets such as 720p@24fps and 1–2 Mbps, enforcing per‑clip size limits (e.g., ≤3 MB) and per‑session quotas to control storage costs. Provide background upload with retry/backoff, offline queueing, and deduplication using perceptual hashing. Apply lifecycle policies: retain raw buffers ≤30 days, keep approved compressed clips per consent retention, and purge unapproved candidates per policy. Monitor device resource use (CPU, battery, thermals) to avoid degrading exercise sessions, and expose health metrics and alerts in the admin dashboard.

Acceptance Criteria
Transcoding Targets and Per-Clip Size Limit
Given a raw Evidence Clip is captured When transcoding completes Then the output codec is H.264 or HEVC, resolution ≤ 1280x720, frame rate between 23 and 25 fps, and average bitrate between 1 and 2 Mbps And the file size is ≤ 3 MB.
Given the transcoded output exceeds 3 MB When post-processing runs Then the clip is re-encoded or trimmed to ≤ 3 MB, else it is marked rejected with reason "oversize" and not uploaded.
Given a clip passes validation When metadata is written Then it records codec, resolution, fps, bitrate, file size, and validation_status = "pass".
Per-Session Storage Quota Enforcement
Given a session has a configured quota (size in MB and/or count of clips) When saving a new transcoded clip would exceed the quota Then the save is blocked, a "quota_exceeded" event is logged, and the user is notified non-intrusively And no upload is attempted for the blocked clip.
Given the session quota is not exceeded When clips are saved Then total storage used and clip count remain ≤ the configured limits for the session.
Background Upload with Retry, Backoff, and Offline Queue
Given network is unavailable or unstable When an upload attempt fails with a retriable error (timeouts or 5xx) Then the clip enters an offline queue with exponential backoff (min 1 minute, max 30 minutes) and the queue persists across app restarts.
Given connectivity is restored and policy allows (Wi-Fi or cellular per settings) When the next backoff window opens Then queued clips resume upload and reach "uploaded" state without user action.
Given an unrecoverable error (4xx other than 429) When retry policy evaluates Then the clip is marked "failed" with error_code and no further retries occur.
Given the app is backgrounded When background execution limits allow Then uploads continue; otherwise the queue resumes automatically on next foreground.
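The retry policy — exponential backoff from a 1-minute floor to a 30-minute cap, retrying only timeouts, 429, and 5xx — can be sketched as:

```python
def backoff_delay(attempt, base_sec=60, cap_sec=1800):
    """Delay in seconds before the Nth retry (attempt >= 1).

    Doubles from the 1-minute floor and is clamped at the 30-minute ceiling.
    """
    return min(cap_sec, base_sec * 2 ** (attempt - 1))

def should_retry(status_code):
    """Timeouts (None), 429, and 5xx are retriable; other 4xx are terminal."""
    if status_code is None or status_code == 429:
        return True
    return 500 <= status_code <= 599
```

A terminal code marks the clip "failed" with its error_code; a retriable one re-enqueues it with the next `backoff_delay`.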
Perceptual-Hash Deduplication
Given two clips produce a perceptual hash similarity ≥ 95% When the second is saved Then it is marked "duplicate_of=<clip_id>" and is neither stored twice nor uploaded again.
Given two clips have similarity < 95% When saved Then both are stored and uploaded independently.
Given a duplicate is detected after an upload has begun When dedup check runs Then the duplicate upload is canceled and only the original completes.
Given a dedup decision is made Then the hash and similarity score are recorded in metadata for audit.
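The ≥95% similarity rule reduces to counting matching bits between two fixed-width perceptual hashes; a sketch assuming 64-bit hashes represented as integers (e.g., a pHash or dHash of a representative frame):

```python
def hash_similarity(h1: int, h2: int, bits: int = 64) -> float:
    """Fraction of matching bits between two fixed-width perceptual hashes."""
    differing = bin((h1 ^ h2) & ((1 << bits) - 1)).count("1")
    return (bits - differing) / bits

def is_duplicate(h1, h2, threshold=0.95):
    """Apply the spec's >= 95% similarity rule for dedup decisions."""
    return hash_similarity(h1, h2) >= threshold
```

On a 64-bit hash, the 95% threshold means at most 3 differing bits; 4 differing bits (93.75%) already counts as a distinct clip.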
Lifecycle Retention and Purge Policies
Given raw capture buffers exist When 30 days have elapsed since capture Then raw buffers are purged automatically unless flagged for active review, and the purge is logged with counts.
Given a compressed clip is approved and a consent retention policy exists When the retention end date is reached or consent is withdrawn Then the clip is purged within 24 hours and the action is auditable.
Given unapproved candidate clips exist When the configured candidate retention window elapses Then they are purged and not included in exports or FHIR bundles.
Given a purge cycle runs Then storage utilization reflects reclaimed space and no dangling references remain in indices or bundles.
Resource-Aware Capture and Throttling
Given an exercise session is active When device CPU > 80% or thermal state is serious-or-higher or battery < 15% on battery power Then the app reduces encoder complexity and defers non-critical uploads to keep dropped frames ≤ 5% over baseline Given mitigation cannot keep dropped frames ≤ 5% over baseline for 10 seconds When thresholds persist Then the app pauses new clip generation, logs "resource_throttle", and continues rep counting without crashing Given resources recover below thresholds for 30 seconds When monitoring detects recovery Then clip generation and background uploads resume automatically Given resource monitoring runs during sessions Then its own overhead remains ≤ 2% CPU on average
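The pause/resume behavior above is a small hysteresis state machine: 10 seconds of sustained overage pauses clip generation, 30 seconds clear resumes it. A sketch with illustrative names (the real monitor would also honor the ≤ 2% CPU overhead budget, which a sketch cannot demonstrate):

```python
class ThrottleMonitor:
    """Pause clip generation after 10 s over threshold; resume after 30 s clear."""
    PAUSE_AFTER_S = 10
    RESUME_AFTER_S = 30

    def __init__(self) -> None:
        self.paused = False
        self._over_since = None
        self._ok_since = None

    @staticmethod
    def over_threshold(cpu_pct, thermal_serious, battery_pct, on_battery,
                       dropped_frame_excess_pct):
        # Triggers named in the criteria: CPU > 80%, serious-or-higher thermal
        # state, battery < 15% on battery power, dropped frames > 5% over baseline.
        return (cpu_pct > 80 or thermal_serious
                or (on_battery and battery_pct < 15)
                or dropped_frame_excess_pct > 5)

    def sample(self, t: float, over: bool) -> bool:
        """Feed one monitoring sample at time t (seconds); returns paused state."""
        if over:
            self._ok_since = None
            if self._over_since is None:
                self._over_since = t
            if not self.paused and t - self._over_since >= self.PAUSE_AFTER_S:
                self.paused = True
        else:
            self._over_since = None
            if self._ok_since is None:
                self._ok_since = t
            if self.paused and t - self._ok_since >= self.RESUME_AFTER_S:
                self.paused = False
        return self.paused
```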
Admin Dashboard Health Metrics and Alerts
Given an admin views the dashboard When metrics load Then they see for the last 24 hours: transcoding success rate, median transcoding latency, upload success rate, average clip size, purge counts, storage used vs quota, devices with resource throttling events, and current upload backlog Given any metric breaches its threshold (upload success rate < 95%; average clip size > 3 MB; backlog > 100 clips for > 30 minutes; devices with throttling > 5% of sessions) When alerting evaluates Then an alert appears in the dashboard and is sent via configured channels (email/webhook) within 5 minutes Given a metric or alert is displayed When the user drills down Then filtering by clinic, clinician, patient, and session is available and timestamps are shown in UTC with last-updated time
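The alert thresholds above can be checked with a straightforward rule evaluation; metric and alert names below are illustrative, not a defined schema:

```python
def evaluate_alerts(m: dict) -> list:
    """Apply the dashboard thresholds stated in the acceptance criteria."""
    alerts = []
    if m["upload_success_rate"] < 0.95:
        alerts.append("upload_success_rate_low")
    if m["avg_clip_size_mb"] > 3:
        alerts.append("avg_clip_size_high")
    if m["backlog_clips"] > 100 and m["backlog_age_min"] > 30:
        alerts.append("backlog_stalled")
    if m["throttled_session_pct"] > 5:
        alerts.append("throttling_widespread")
    return alerts
```

Delivery within 5 minutes via email/webhook would sit downstream of this evaluation.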

Appeal Builder

Turns denials or ‘need more info’ requests into a guided, ready‑to‑send appeal pack in minutes. Auto‑cites the payer’s policy language, contrasts the patient’s metrics against thresholds and cohort norms, and stitches in exception notes, raising overturn rates while cutting manual write‑up time.

Requirements

Payer Policy Auto-Citation Engine
"As a clinic admin, I want the system to auto-cite the correct payer policy sections so that I can produce compliant, accurate appeals without manually searching policy documents."
Description

Automatically identifies and cites the payer’s relevant policy language based on denial reason, plan, and effective dates. Ingests policies via API, web scraping, and manual uploads, extracts coverage criteria and thresholds, and maps them to standardized denial codes. Generates precise citations with section/paragraph references and captures the policy version used. Integrates with MoveMate’s patient profile to tailor citations to diagnosis and CPT/HCPCS codes. Provides confidence scoring, human review mode, and fallbacks when policy retrieval fails. Ensures all citations are stored with immutable references for audit and reuse.

Acceptance Criteria
Auto-selects Correct Payer Policy by Plan and Effective Date
Given a denial containing payer, plan identifier, denial reason, and dates of service And multiple policy versions exist for the payer and plan When the engine searches for applicable policy language Then it selects the policy version whose effective dates cover the dates of service And prefers plan-specific over line-of-business or national policies when multiple match And records payer name, plan ID, policy ID, and effective-from/to in the citation metadata
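The selection logic above (effective-date coverage plus a plan > line-of-business > national preference) can be sketched as a filter-then-rank step. Field and scope names below are illustrative:

```python
from datetime import date

SCOPE_RANK = {"plan": 0, "line_of_business": 1, "national": 2}

def select_policy(policies, plan_id, service_date):
    """Return the most specific policy version effective on the date of service,
    or None when no version covers it."""
    candidates = [
        p for p in policies
        if p["effective_from"] <= service_date <= p["effective_to"]
        and (p["scope"] != "plan" or p["plan_id"] == plan_id)
    ]
    return min(candidates, key=lambda p: SCOPE_RANK[p["scope"]], default=None)
```

The selected record's payer, plan, policy ID, and effective window would then be copied into the citation metadata as the criterion requires.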
Policy Ingestion and Versioning via API, Scrape, and Manual Upload
Given a policy is provided via API, discovered via web page, or uploaded as PDF/Doc When the ingestion job runs Then the policy content is captured with source type, source URL/file ID, retrieval timestamp, and effective dates And a normalized plain-text+structure representation is stored with a computed content hash And a new version is created when the content hash or effective dates change, without overwriting prior versions
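The content-hash versioning rule above could look like this in practice. A minimal sketch; the record shape is assumed, and only the "new version when hash or effective dates change" decision is shown:

```python
import hashlib

def content_hash(normalized_text: str) -> str:
    """Stable hash of the normalized plain-text+structure representation."""
    return hashlib.sha256(normalized_text.encode("utf-8")).hexdigest()

def needs_new_version(existing, incoming) -> bool:
    """Create a new version when the content hash or effective dates change;
    prior versions are never overwritten."""
    return (existing is None
            or existing["hash"] != incoming["hash"]
            or (existing["effective_from"], existing["effective_to"])
               != (incoming["effective_from"], incoming["effective_to"]))
```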
Coverage Criteria Extraction and Mapping to Denial Codes
Given an ingested policy version When the extraction pipeline processes the document Then coverage criteria statements and quantitative thresholds relevant to therapy services are identified And each extracted criterion is mapped to one or more standardized denial codes based on the policy language And each mapping includes a confidence score between 0.0 and 1.0
Patient-Tailored Citation Incorporating Diagnosis and CPT/HCPCS
Given a patient profile containing diagnosis (ICD-10) and CPT/HCPCS codes and recorded metrics And a selected policy criterion mapped to those codes When generating a citation for the appeal Then the citation includes only criteria applicable to the patient’s diagnosis and billed codes And contrasts the patient’s metrics against the policy thresholds with units and dates
Citation Generation with Section/Paragraph, Policy Version, and Immutable Storage
Given selected policy criteria and the policy document structure When composing the citation Then the output includes exact section/paragraph identifiers, policy title, policy number, and effective-from/to dates And the citation is stored immutably with a unique ID, source URL/file ID, content hash, and timestamp And any subsequent changes create a new immutable record linked to the prior version
Confidence Scoring with Human Review Threshold and Edit/Approve Workflow
Given a generated citation with an associated confidence score When the confidence score is at or above the configured auto-approve threshold Then the citation is marked Ready without requiring human review When the confidence score is below the threshold or conflicting policies are detected Then the citation enters Human Review mode, allowing a reviewer to edit, approve, or reject And the system records reviewer ID, action taken, and timestamp
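The routing rule above is a single comparison plus a conflict check. A sketch with an illustrative default threshold (the spec makes the auto-approve threshold configurable):

```python
def route_citation(confidence: float, has_conflicts: bool,
                   auto_approve_at: float = 0.85) -> str:
    """Citations at or above the threshold are marked Ready; below-threshold
    or conflicting citations go to Human Review for edit/approve/reject."""
    if has_conflicts or confidence < auto_approve_at:
        return "human_review"
    return "ready"
```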
Fallback Strategy on Policy Retrieval Failure with Logging and User Prompts
Given policy retrieval from the primary source fails or returns no matching policy When the engine attempts secondary sources (cache, alternate endpoints, prior versions) Then it either retrieves a suitable policy or prompts the user to upload/select a policy manually And the attempt history and error details are logged and attached to the appeal record And auto-citation is disabled until an authoritative policy is selected
Metrics Comparator and Evidence Generator
"As a physical therapist, I want the appeal to automatically contrast my patient’s outcomes against payer thresholds and norms so that the reviewer sees objective evidence for overturning the denial."
Description

Aggregates patient metrics from MoveMate (adherence rate, rep counts, range of motion, pain scores, progression trends) and compares them to payer thresholds and cohort norms. Computes deltas and trend summaries, highlights threshold exceedances, and flags gaps. Produces clear tables and charts with labeled units, dates, and data sources. Supports condition-specific measures and configurable comparison windows. Annotates exhibits with source citations and methodology notes. Outputs structured artifacts usable in the appeal pack and retains a snapshot of the data used at generation time.

Acceptance Criteria
Aggregate Patient Metrics within Window
Given a patient ID and a configured comparison window (start, end) in clinic timezone And the patient has recorded adherence, rep counts, ROM, pain scores, and progression data in MoveMate When the evidence generator runs Then it retrieves all metric events within [start, end] inclusive And excludes events outside the window And computes daily and window-level aggregates per metric (mean, median, min, max, last observed) And normalizes units (ROM in degrees, pain on 0-10 scale, adherence as %) And marks missing metrics explicitly as "No data" without failing the run And completes aggregation in <= 2 seconds for <= 6 months of data on a standard dataset And logs the metric counts retrieved per source
Compare Metrics to Payer Thresholds and Cohort Norms
Given payer thresholds and cohort norms are available for the selected condition When comparisons are computed Then for each applicable metric compute delta_to_threshold and percentile_vs_cohort And label each comparison with reference type (Threshold or Cohort), reference value, unit, and effective date And flag exceedances and shortfalls using the measure's directionality (higher-is-better or lower-is-better) And handle missing references by emitting a "Reference unavailable" flag without blocking other comparisons And all comparisons use the configured window aggregates and the same unit conversions And results are reproducible given identical inputs (stable ordering and numeric rounding to 2 decimals)
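Directionality-aware deltas, as described above, can be computed so that a positive delta always means the favorable direction, with the 2-decimal rounding the criteria require. Names are illustrative:

```python
def delta_to_threshold(value: float, threshold: float,
                       higher_is_better: bool) -> float:
    """Positive delta = patient is on the favorable side of the reference."""
    delta = value - threshold if higher_is_better else threshold - value
    return round(delta, 2)

def comparison_status(delta: float) -> str:
    return "meets_or_exceeds" if delta >= 0 else "shortfall"
```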
Generate Labeled Tables and Charts
Given comparison results and aggregates When exhibits are generated Then the table includes columns: Metric, Patient Value, Unit, Window, Threshold/Cohort Reference, Delta, Status, Data Source And dates are formatted as ISO 8601 and include timezone And charts include axis labels with units, legend, and date range And each exhibit embeds a Data sources line naming MoveMate and payer/cohort sources And PNG images render at >= 150 DPI and width >= 1000 px (SVG output is resolution-independent) And visual elements meet WCAG contrast >= 4.5:1 And exhibit generation completes in <= 3 seconds for a standard payload
Condition-Specific Measures and Window Configuration
Given a condition code is selected When metrics are assembled Then only measures mapped to the condition are included And measure definitions (directionality and units) are applied accordingly And the comparison window can be set to presets (30, 60, 90 days) or a custom date range And window validation prevents start > end and ranges > 365 days by default (configurable) And timezone for window boundaries uses the clinic's timezone
Citations and Methodology Annotations
Given exhibits are generated When annotations are added Then footnotes include: patient data source (MoveMate app, dataset version), payer policy citation (payer name, policy ID, section, last updated), cohort methodology (dataset name, N, timeframe, inclusion criteria), and comparison method summary And each footnote is numbered and referenced from the exhibit elements And all citations include accessible URLs or document identifiers And methodology notes state any imputation rules for missing data And the annotation block includes a generation timestamp (ISO 8601)
Structured Export and Snapshot Retention
Given the evidence generation completes When exporting artifacts Then a JSON artifact conforming to schema version v1 with a content SHA-256 is produced And CSV extract for tables and PNG or SVG for charts are included And artifacts are packaged under a unique snapshotId (UUIDv4) with createdAt timestamp And the exact input parameters (patientId, condition, window, policy versions, cohort version) are stored with the snapshot And the snapshot is immutable and retrievable by snapshotId for >= 24 months And PII is limited to necessary patient identifiers per configuration and redacted in share-safe exports And re-running with identical inputs produces identical JSON content hash
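The reproducibility requirement above (identical inputs produce an identical content hash, while each run still gets a unique snapshot ID) hinges on canonical serialization. A sketch, assuming JSON-serializable inputs and results:

```python
import hashlib
import json
import uuid

def build_snapshot(inputs: dict, results: dict) -> dict:
    """Canonical JSON (sorted keys, fixed separators) makes the SHA-256
    reproducible for identical inputs; snapshotId remains unique per run."""
    body = json.dumps({"schemaVersion": "v1", "inputs": inputs, "results": results},
                      sort_keys=True, separators=(",", ":"))
    return {"snapshotId": str(uuid.uuid4()),
            "contentSha256": hashlib.sha256(body.encode("utf-8")).hexdigest(),
            "body": body}
```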
Gap Detection and Impact Signaling
Given comparisons are computed When data gaps exist (e.g., metric missing > 20% of days in window or reference unavailable) Then gaps are listed per metric with reason codes (NO_PATIENT_DATA, NO_THRESHOLD, NO_COHORT, OUTSIDE_WINDOW) And affected exhibits display a visible Data gap badge and footnote And gap count and impact on confidence are summarized in the output And the generator returns a success status with warnings, not an error
Guided Appeal Wizard
"As a clinician, I want a guided wizard that assembles the appeal for me so that I can submit a complete, accurate package in minutes without missing required elements."
Description

Provides a step-by-step workflow to turn a denial or information request into a complete appeal. Pre-fills member, provider, claim, and encounter data; detects denial reason and selects matching policy sections and templates. Prompts for clinical exceptions, contraindications, and additional documentation; validates completeness and consistency in real time. Offers a live preview of the assembled appeal pack, highlights missing items, and supports branching logic by payer and denial type. Saves drafts, supports collaboration, and writes all outputs back to the patient record.

Acceptance Criteria
Pre-fill Member and Claim Data on Wizard Launch
Given a denial is selected from the patient record with member, provider, claim, and encounter identifiers When the user opens the Guided Appeal Wizard Then the wizard pre-fills member, provider, claim, and encounter fields from configured sources (EHR/PMS/payer 835/837) with all available values And each pre-filled field displays source and retrieval timestamp And fields restricted by role are masked according to access policy And the pre-fill completes within 2 seconds at the 95th percentile And an audit log records field-level provenance for the pre-fill action
Auto-Detect Denial Reason and Select Policy & Template
Given a denial input is provided via letter (PDF/image) and/or codes (CARC/RARC), payer, and date of service When the wizard parses the denial Then it maps to a normalized denial reason with F1 ≥ 0.95 on the curated validation set And selects the payer policy and specific section(s) in effect on the date of service, citing policy title, version/date, and URL/reference And selects the correct appeal template for the payer and denial type And if confidence < 0.80 or multiple candidates exist, the user is prompted to confirm from ranked suggestions And the final mapping decision and confidence are stored in the audit trail
Capture Clinical Exceptions, Contraindications, and Supporting Documents
Given the payer and denial type are determined When the user reaches the Exceptions step Then the wizard presents dynamic questions for clinical exceptions and contraindications derived from the selected policy And allows attachment of supporting documents (PDF/image/note) with required metadata (type, date, source) And runs anti-malware scan on uploads and rejects infected files And prevents progression until all required items are provided or a justification is entered for any omission And flags inconsistencies between selected exceptions and patient metrics (e.g., contraindication conflicts) for user resolution
Real-Time Completeness and Consistency Validation
Given the user is completing fields in the wizard When any field value changes Then validation executes within 200 ms and updates inline status indicators And mandatory items for the selected payer and denial type are highlighted until satisfied And cross-field rules are enforced (e.g., date of service ≤ claim submission date; CPT codes align with diagnosis; provider NPI format valid) And a completeness score is displayed and reaches 100% only when all required elements are satisfied And final submission is blocked until no blocking errors remain and only non-blocking warnings persist And an accessible error summary lists errors with links to fields
Live Preview of Assembled Appeal Pack with Missing-Item Highlights
Given the user has entered data into the wizard When the user opens the Live Preview Then the system renders the assembled appeal pack within 1 second for typical cases (<20 pages) And the preview reflects the latest inputs without manual refresh And missing or incomplete items are highlighted and are clickable to navigate to the corresponding wizard field And the preview contains auto-cited policy excerpts, patient metrics vs payer thresholds and cohort norms, and exception notes And the user can export the preview to PDF and DOCX with identical content and formatting
Branching Logic by Payer and Denial Type
Given payer X and denial type Y are selected or detected When the user progresses through the wizard Then steps, required fields, prompts, and required documents adapt to the payer- and denial-specific configuration And hidden steps do not contribute validation errors And a breadcrumb displays the active path reflecting payer and denial branches And automated tests cover the top 10 payers and 15 common denial reasons with expected step flows And branching configuration changes can be deployed via data/config update without code changes and take effect on next session
Draft Saving, Collaboration, and Patient Record Write-Back
Given the user is working in the wizard When the user pauses or navigates away Then a draft is auto-saved within 5 seconds and on each field change, with version history retained for at least 30 days And the user can explicitly Save Draft and later Resume on any authorized device And collaborators with appropriate roles can co-edit with real-time presence indicators and edit locking to prevent overwrites; conflicting edits prompt for merge/resolve And an activity log records user, action, timestamp, and affected fields When the user finalizes the appeal Then the signed appeal pack and structured metadata (denial reason, policy citations, metrics summary, exceptions, attachments list) are written back to the patient record and linked to the relevant encounter/claim within 3 seconds at the 95th percentile And the patient record displays the appeal artifact(s) and current appeal status
Export and Submission Packager
"As a billing specialist, I want the appeal pack to be exportable and submit-ready in payer-compliant formats so that I can send it immediately without manual reformatting."
Description

Generates a ready-to-send appeal bundle including cover letter, citations appendix, evidence exhibits, and clinician notes. Exports to PDF with clinic branding and payer-specific formatting, plus machine-readable payloads where supported. Integrates with e-fax, secure email, and payer portal APIs for direct submission; otherwise produces a compliant download with submission instructions. Embeds page numbers, barcodes/reference IDs when required, and verifies attachment readability. Stores submitted artifacts and confirmation receipts in the patient record.

Acceptance Criteria
Generate Branded PDF Appeal Bundle
Given a finalized appeal draft with cover letter, citations appendix, evidence exhibits, clinician notes, and clinic branding assets When the user selects "Generate Bundle" Then a single PDF is produced within 10 seconds containing sections in this order: Cover Letter, Citations Appendix, Evidence Exhibits, Clinician Notes And the PDF conforms to PDF/A-2b and passes preflight validation with 0 errors And clinic logo and colors are applied to header and footer per the clinic branding profile And page numbers are displayed as "Page X of Y" on every page And document bookmarks are present for each section and exhibit
Apply Payer-Specific Formatting and Templates
Given a payer profile for P with formatting rules (margins, fonts, section order, filename convention, required cover fields) When the bundle is generated for payer P Then the output obeys P's section order and required cover letter fields are populated with P's labels and IDs (member ID, claim number, denial code) And the exported filename matches P's filename convention exactly And margins, font family, font size, and line spacing match P's profile within ±2% tolerance And if P requires exhibits as separate files, the generator outputs a main PDF plus one PDF per exhibit with filenames per convention and an index page referencing each
Attach Machine-Readable Payloads When Supported
Given payer P supports a machine-readable payload with schema "AppealPayload v1.2" and transport "API" or "Secure Email" When generating the appeal package for P Then a JSON file conforming to "AppealPayload v1.2" is created and validates against the schema with 0 errors And the payload includes patient, provider, claim, denial reason, metrics summary, policy citations, and a document manifest with SHA-256 checksums And the JSON is attached to the API request body or as a .json email attachment per P's transport And if P does not support machine-readable payloads, no JSON is produced
Direct Submission via e-Fax, Secure Email, or Payer Portal API
Given payer P has submission channels configured and credentials are valid When the user clicks "Submit Now" Then the system selects the highest-priority available channel in order: API, Secure Email, e-Fax And for API submissions, a 2xx response with a submission ID is received or up to 3 retries with exponential backoff are attempted before failing over to the next channel And for Secure Email, the message is sent over TLS 1.2+ with SPF/DKIM passing and an SMTP 250 acceptance code is recorded with a message-id And for e-Fax, a confirmation receipt with fax-id and page count equal to the sent PDF pages is recorded And the UI displays "Submitted" with channel, timestamp, and reference IDs
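The channel-priority and failover behavior above can be sketched as a nested retry loop. This is illustrative only: `send` is a hypothetical callable, and the exponential backoff between API retries is elided:

```python
def submit_with_failover(send, channels=("api", "secure_email", "efax"),
                         api_retries=3):
    """Try channels in priority order; the API channel gets up to 3 attempts
    before failing over, per the acceptance criteria."""
    errors = []
    for channel in channels:
        attempts = api_retries if channel == "api" else 1
        for _ in range(attempts):
            try:
                return {"channel": channel, **send(channel)}
            except Exception as exc:  # record the failure and keep failing over
                errors.append((channel, str(exc)))
    raise RuntimeError(f"all channels failed: {errors}")
```

The returned receipt (channel, timestamp, reference IDs) is what the UI would surface as "Submitted".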
Fallback Download with Compliant Submission Instructions
Given payer P does not support direct submission or the user selects "Download for Manual Submission" When generating the offline package Then a ZIP file is produced containing the bundle PDF and any required separate exhibit files And a 1-page instruction sheet includes P's mailing address or portal URL, step-by-step submission instructions, and the relevant reference IDs And all filenames and folder structure match P's naming conventions And the download link is available for 7 days, records download events, and auto-expires thereafter
Embed Barcodes and Reference IDs as Required
Given payer P requires barcodes or reference IDs on submissions When generating the bundle for P Then a machine-readable barcode (QR or Code 128 per P's profile) is embedded on the cover letter and first page of each required section And decoding the barcode yields the expected appeal/claim identifiers and checksum And the human-readable reference IDs are printed beneath the barcode And page numbers are included as "Page X of Y" where required by P And if P disallows barcodes, none are rendered
Validate Readability and Archive Submission Artifacts and Receipts
Given the appeal package and attachments are ready for submission When the system performs preflight validation and archives the submission Then each PDF renders without errors and passes OCR with at least 95% text extraction accuracy on text pages And embedded images meet or exceed 200 DPI and total file size per document is within the channel limit from P's profile And if limits are exceeded, the system splits or optimizes the documents automatically while maintaining OCR accuracy ≥95% And after submission, the final PDFs, machine-readable payloads (if any), submission metadata, and confirmation receipts are stored in the patient's record with immutable version IDs and timestamps And artifacts are permission-protected, retrievable within 3 seconds, and displayed on the patient's timeline with channel, timestamps, and reference IDs
Template and Tone Configuration
"As a clinic manager, I want configurable appeal templates per payer and denial type so that our communications stay consistent, on-brand, and effective."
Description

Provides editable templates per payer, plan, and denial type with tokenized fields for auto-fill. Supports clinic branding, adjustable tone (clinical/legal), and localization. Enables A/B variants and shared template libraries across clinics. Includes guardrails to prevent removal of mandatory policy language and required disclosures. Version-controls all template changes and allows quick rollback. Surfaces template recommendations based on past overturn success.

Acceptance Criteria
Auto-Filled Tokenized Fields per Payer/Plan/Denial Type
Given a payer, plan, and denial code are selected and a matching template with tokenized fields exists, When the user clicks Generate Appeal, Then 100% of tokens are auto-filled from patient record, session metrics, clinician profile, and payer policy database, And no unresolved tokens remain in the preview. Given any token cannot be resolved, When Generate Appeal is requested, Then the system blocks sending, highlights each unresolved token in-line, and provides suggested values or required fields to complete. Given multiple templates match a payer–plan–denial combination, When Generate Appeal is requested, Then the highest-priority template per admin rules is applied and the selection is logged with template_id and resolution path. Given policy language tokens (e.g., {policy.section}, {threshold.value}), When Generate Appeal is requested, Then the system inserts the current effective policy excerpt and threshold values for the specific plan with citation (policy_id, section, effective_date).
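Token resolution with block-on-unresolved, as described above, is a substitution pass that reports what it could not fill. A sketch using the `{policy.section}`-style token syntax shown in the criteria; unresolved tokens are left in place so the editor can highlight them:

```python
import re

TOKEN = re.compile(r"\{([\w.]+)\}")

def fill_tokens(template: str, values: dict):
    """Return (rendered_text, unresolved_tokens). A non-empty unresolved list
    means generation must be blocked and the tokens highlighted in-line."""
    unresolved = []
    def substitute(match):
        key = match.group(1)
        if key in values:
            return str(values[key])
        unresolved.append(key)
        return match.group(0)
    return TOKEN.sub(substitute, template), unresolved
```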
Clinic Branding and Adjustable Tone Rendering (Clinical/Legal)
Given a clinic branding profile (logo, header/footer, colors) is configured, When a preview or export (PDF/DOCX) is generated, Then the branding is applied consistently on all pages, images render at ≥150 DPI, and color contrasts meet WCAG AA. Given the tone toggle is set to Clinical, When the appeal body is generated, Then clinical phrasing is used and legal citations are summarized, And average sentence length ≤ 25 words. Given the tone toggle is set to Legal, When the appeal body is generated, Then legal phrasing and full citations (policy_id, section) are included, And medical jargon appears in ≤ 10% of sentences. Given no tone is explicitly selected, When generating, Then Clinical is the default.
Localization of Templates (Language, Date/Number Formats)
Given the user selects locale L (e.g., en-US, es-US, fr-CA), When an appeal is generated, Then static strings and headings are localized to L, And dates, times, numbers, and currencies use L’s formats (e.g., mm/dd/yyyy vs dd/mm/yyyy; decimal comma vs point). Given localized templates exist for L, When generating, Then the system uses the L variant; otherwise it falls back to the default locale and shows a non-blocking “Using default locale” notice. Given payer policy excerpts are only available in English, When generating an appeal in a non-English locale, Then the policy excerpt is preserved in English with a local-language preface explaining the original-language citation. Given a right-to-left locale is selected (if supported), When generating, Then layout direction, punctuation, and list ordering correctly mirror for RTL.
A/B Variant Selection with Success Metrics and Recommendation Prompts
Given two active variants (A and B) exist for the same payer–plan–denial, When an appeal is generated, Then the system assigns a variant based on the configured traffic split (default 50/50) and records assignment with case_id, variant_id, and timestamp. Given overturn outcomes are ingested, When sufficient sample size per variant is reached (min N ≥ 30 cases or 90 days, whichever occurs first), Then the dashboard displays overturn rates with 95% confidence/credible intervals per variant. Given variant B shows statistically significant improvement over A (posterior probability of superiority ≥ 0.95 or p < 0.05 per configured method), When a user starts a new appeal for that combo, Then the system recommends B by default and displays a “Recommended” badge with a short rationale. Given a variant is paused, When generating, Then it is excluded from assignment and the traffic split is automatically normalized across remaining active variants.
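The "posterior probability of superiority ≥ 0.95" check above can be estimated with a Beta-Bernoulli Monte-Carlo comparison of overturn rates. A sketch assuming uniform Beta(1,1) priors (the spec also allows a frequentist p < 0.05 test instead):

```python
import random

def prob_b_beats_a(a_wins, a_n, b_wins, b_n, draws=20000, seed=7):
    """Monte-Carlo estimate of P(overturn_rate_B > overturn_rate_A)
    under independent Beta(1,1) priors on each variant's rate."""
    rng = random.Random(seed)
    better = 0
    for _ in range(draws):
        pa = rng.betavariate(1 + a_wins, 1 + a_n - a_wins)
        pb = rng.betavariate(1 + b_wins, 1 + b_n - b_wins)
        better += pb > pa
    return better / draws
```

A result ≥ 0.95 would trigger the "Recommended" badge for variant B once the minimum sample size (N ≥ 30 per variant or 90 days) is met.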
Shared Template Library Across Clinics with Role-Based Permissions
Given a template is marked Shareable with permissions (Use, Suggest Changes, View Only), When a user from another clinic searches the library, Then the template appears with metadata (owner clinic, version, last updated) and only permitted actions are available. Given a user with Use permission selects a shared template, When generating an appeal, Then the template renders with the consuming clinic’s branding and data tokens, and the source template remains unchanged. Given a user with Suggest Changes permission proposes an edit, When submitting, Then a forked draft is created under their clinic with a link to the source template, and the source owner is notified for review. Given PII is detected in content proposed for sharing, When publishing to the shared library, Then publication is blocked until PII scan passes or the content is redacted, and the reason is shown to the user.
Guardrails Prevent Removal of Mandatory Policy Language/Disclosures
Given a template contains locked mandatory blocks (policy language, required disclosures), When a user attempts to delete or alter them in the editor, Then the action is blocked and an inline message explains the requirement with citation. Given a template is saved, When validation runs, Then the save fails if any required block is missing or altered, and the error list enumerates each violation with block_id and policy reference. Given a final appeal is generated, When exporting or sending, Then a preflight check confirms presence and integrity hash of required blocks; if the check fails, export/send is blocked and a specific error is displayed. Given a policy update changes mandatory text, When the policy snapshot updates, Then the template syncs the locked block to the new text and records the update in version history with policy_id and effective_date.
Version Control: Change History, Diff, and One-Click Rollback
Given a template is edited and saved, When changes are committed, Then a new version is created with version_id, author, timestamp, change summary, and a diff is available versus the prior version. Given a user views history, When selecting any two versions, Then a side-by-side diff highlights additions, deletions, and token-level changes with line numbers. Given the user clicks Rollback on a prior version, When confirmed, Then the template reverts within 2 seconds, becomes the active Published version, and the rollback event is logged with actor, timestamp, and reason. Given Draft and Published states exist, When a draft is present, Then generating appeals uses the latest Published version unless the user explicitly selects “Use Draft” and has the required permission.
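The version diff above could build on a standard line diff; the side-by-side rendering and token-level highlighting are UI work layered on top. A minimal sketch:

```python
import difflib

def template_diff(old: str, new: str) -> list:
    """Unified diff between two saved template versions."""
    return list(difflib.unified_diff(
        old.splitlines(), new.splitlines(),
        fromfile="prior_version", tofile="current_version", lineterm=""))
```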
Audit Trail and Compliance Logging
"As a compliance officer, I want a detailed audit trail of appeal generation and submission so that we can demonstrate regulatory compliance and defend our decisions during audits."
Description

Captures a complete audit trail of appeal creation and edits, including user actions, timestamps, policy versions cited, and data snapshots. Applies HIPAA-compliant access controls, encryption at rest/in transit, and role-based permissions. Generates immutable hashes for final packs, supports retention policies, and enables exportable audit logs for payer or internal review. Records consent provenance and PHI access events and integrates with existing MoveMate compliance dashboards.

Acceptance Criteria
Capture Appeal Creation and Edit Events
Given a permitted user creates a new appeal draft When they save the draft Then an audit event is written with fields: event_id (UUIDv4), tenant_id, user_id, user_role, action=create_draft, appeal_id, patient_id, timestamp_utc (ISO-8601), request_id, policy_version_ids (array, may be empty), snapshot_before=null, snapshot_after_ref (encrypted payload pointer), snapshot_after_hash (SHA-256), and write_result=acknowledged And the event store is append-only with no update/delete operations exposed Given a permitted user edits an existing appeal (e.g., attachment added, clinical metric changed) When they save changes Then an audit event records action=edit with field_diffs, updated snapshot_after_ref and snapshot_after_hash, and correlates to prior version via previous_event_id And idempotency is enforced by request_id to prevent duplicate events Given system clock is synchronized When an audit event is recorded Then timestamp drift from NTP is ≤ 2 seconds
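The append-only, correlated event store described above can be made tamper-evident by chaining each record to its predecessor's hash, so a later edit or deletion breaks the chain. A sketch; field names follow the criteria, and the hash chain is our assumption about how `previous_event_id` correlation might be hardened:

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def append_audit_event(log: list, event: dict) -> dict:
    """Append a record carrying the prior record's hash; the hash covers
    everything in the record except the hash field itself."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "previous_event_hash": log[-1]["event_hash"] if log else None,
        **event,
    }
    record["event_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")).hexdigest()
    log.append(record)
    return record
```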
Immutable Hashing of Final Appeal Packs
Given an appeal is finalized When the final appeal pack (PDF + attachments) is generated Then a SHA-256 hash over the canonicalized package bytes is computed, stored in the audit log, and embedded in the pack metadata And the hash is written to WORM/append-only storage and cannot be altered Given the same final pack is re-generated without content changes When the hash is recomputed Then the hash matches the stored value; otherwise a mismatch is raised as a high-severity alert and the download is blocked pending review
Role-Based Access Controls for Audit Logs
Given role-based permissions are configured When a Clinician or Author requests audit logs Then only events they initiated are visible and any data snapshots are redacted Given a Compliance Admin requests audit logs When access is granted Then full event details, including PHI snapshots where permitted, are visible Given a Security Analyst requests audit logs When access is granted Then only metadata (no PHI payloads) is visible Given any user attempts unauthorized access to audit logs When the request is evaluated Then access is denied with HTTP 403 and an access_denied audit event is recorded including user_id, role, target_resource, timestamp, and reason
Encryption In Transit and At Rest for Audit Data
Given audit data is stored at rest Then it is encrypted using AES-256 with managed KMS keys rotated at least every 90 days and access controlled by least privilege IAM policies Given audit data is transmitted between services or to clients When transmission occurs Then TLS 1.2+ is enforced with strong cipher suites; plaintext or downgraded connections are rejected and logged Given backups and exports of audit data are created When they are written Then they are encrypted at rest with unique keys and governed by the same access controls
Consent Provenance and PHI Access Event Recording
Given a user views or downloads patient PHI within Appeal Builder When access occurs Then an audit event is recorded with consent_record_id, consent_scope, legal_basis, purpose_of_use (from controlled list), access_reason, patient_id, user_id, and timestamp_utc Given no valid consent exists or consent is expired When a user attempts PHI access Then access is blocked, an access_denied event is logged, and the user receives a message indicating missing/expired consent Given an emergency override is required When a Compliance Admin initiates a break-glass access with justification and second approver confirmation Then access is temporarily granted, the override window is time-bounded, and an override audit event records both approvers, justification, start/end times, and all PHI resources accessed
Retention Policy Enforcement and Purge
Given a tenant-level retention policy is configured (≥ 6 years unless legal hold) When audit events exceed the retention period and no legal hold applies Then a scheduled purge permanently deletes the encrypted payloads and metadata and writes an immutable deletion receipt containing event_ids, content_hashes, purged_at, and actor=system Given a legal hold is in place for a patient, appeal, or time range When the purge job runs Then held events are excluded from deletion and the hold is visible in audit metadata Given a purge completes Then the compliance dashboard shows purged_count, failure_count, and last_run time; purged data is non-recoverable
Exportable Audit Log Package and Dashboard Integration
Given a Compliance Admin applies filters (date range, patient_id, appeal_id, action types) When they request an export Then a CSV and JSONL export is generated within 2 minutes for up to 500,000 events, including fields: event_id, tenant_id, user_id, role, action, timestamp_utc, object_ids, policy_version_ids, consent_record_id, snapshot_hash, and previous_event_id And the export includes a signed manifest with SHA-256 checksums per file and a detached signature for integrity verification Given the MoveMate compliance dashboard is open When new audit events are produced Then the dashboard reflects metrics and latest events within 60 seconds and supports drill-down to raw event detail subject to RBAC
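The signed-manifest requirement can be illustrated with a minimal manifest builder; the detached signature step is omitted here, and the JSON layout is an assumption, not a published MoveMate schema:

```python
import hashlib
import json

def build_export_manifest(files: dict) -> str:
    """files maps filename -> bytes; returns a JSON manifest with a
    per-file SHA-256 checksum, sorted by name for reproducibility."""
    manifest = {
        "files": [
            {"name": name,
             "sha256": hashlib.sha256(data).hexdigest(),
             "size_bytes": len(data)}
            for name, data in sorted(files.items())
        ]
    }
    return json.dumps(manifest, indent=2)
```

A verifier re-hashes each exported file and compares against the manifest entries before trusting the export.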

FHIR Preflight

Validates every bundle before submission—codes, attachments, and structure—against payer endpoints for PAS/DocumentReference compatibility. Catches schema errors, missing identifiers, and oversized media early, so packets land right the first time with fewer rejections or portal re‑uploads.

Requirements

Real-time FHIR Bundle Schema Validation
"As a clinic admin, I want automatic FHIR schema checking before submission so that I catch structural errors early and avoid payer rejections."
Description

Validates PAS and DocumentReference bundles against HL7 FHIR R4 structure definitions and payer-specific profiles before transmission. Checks resource types, required elements, cardinality, references, and data types, mapping errors to exact JSONPath locations. Runs within MoveMate’s submission pipeline and as an API for batch jobs, preventing sends until critical issues are resolved and reducing payer rejections.

Acceptance Criteria
Core R4 Structure Validation for PAS and DocumentReference
Given a FHIR R4 Bundle containing PAS and/or DocumentReference resources that conform to base R4 StructureDefinitions When the bundle is validated Then the validator returns an OperationOutcome with zero issues of severity "error" or "fatal" And the bundle is marked valid for core structure Given a bundle that contains a resource with elements not allowed by its R4 StructureDefinition or an invalid Bundle.type for the use case When the bundle is validated Then the validator returns an OperationOutcome with at least one issue having code "invalid", severity "error", and issue.expression containing the JSONPath of the offending element
Required Elements and Cardinality Enforcement
Given a bundle where any required (min=1) element (e.g., Patient.identifier, DocumentReference.content) is missing When the bundle is validated Then the validator reports an OperationOutcome issue with severity "error", code "required", and a JSONPath pointing to the missing element location Given a bundle where an element exceeds its max cardinality (e.g., Claim.patient, max cardinality 1, provided twice) When the bundle is validated Then the validator reports an OperationOutcome issue with severity "error", code "structure", and a JSONPath pointing to the repeated element Given a bundle where all required elements are present and within cardinality When the bundle is validated Then no "error" or "fatal" issues are returned for required/cardinality checks
Intra-Bundle Reference Integrity
Given a bundle where any Reference.reference points to a resource not present in the bundle and not resolvable by fullUrl When validated Then the validator reports an OperationOutcome issue with severity "error", code "not-found", and the JSONPath of the unresolved reference Given a bundle with duplicate fullUrl values or duplicate resource ids that cause ambiguous references When validated Then the validator reports an OperationOutcome issue with severity "error", code "duplicate", and JSONPaths for each conflicting occurrence Given a bundle with all references resolvable and unique fullUrl values When validated Then no reference integrity issues of severity "error" or "fatal" are returned
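The reference-integrity rules above amount to two passes: collect resolvable targets (fullUrl values plus `ResourceType/id` pairs), then walk every `Reference.reference` in the bundle. This is a simplified sketch — it ignores absolute and contained references, which a real validator must also handle:

```python
def check_reference_integrity(bundle: dict) -> list:
    """Flag unresolved references and duplicate fullUrl values (sketch)."""
    issues = []
    targets, seen_urls = set(), set()
    for e in bundle.get("entry", []):
        url = e.get("fullUrl")
        if url:
            if url in seen_urls:
                issues.append({"severity": "error", "code": "duplicate",
                               "expression": f"$.entry[?(@.fullUrl=='{url}')]"})
            seen_urls.add(url)
            targets.add(url)
        res = e.get("resource", {})
        if res.get("resourceType") and res.get("id"):
            targets.add(f"{res['resourceType']}/{res['id']}")

    def walk(node, path):
        # Visit every dict/list; any Reference.reference must resolve in-bundle.
        if isinstance(node, dict):
            ref = node.get("reference")
            if isinstance(ref, str) and ref not in targets:
                issues.append({"severity": "error", "code": "not-found",
                               "expression": path + ".reference"})
            for k, v in node.items():
                walk(v, f"{path}.{k}")
        elif isinstance(node, list):
            for i, v in enumerate(node):
                walk(v, f"{path}[{i}]")

    walk(bundle, "$")
    return issues
```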
Data Types and Terminology Validation
Given elements with constrained data types (e.g., dateTime, Identifier, Coding) that do not match FHIR R4 formats (e.g., invalid date, missing system for required Coding) When validated Then the validator reports OperationOutcome issues with appropriate codes (e.g., "invalid", "value") and JSONPaths to the invalid values Given elements bound to required or extensible ValueSets in base or PAS/DocumentReference contexts and codes fall outside the ValueSet When validated Then the validator reports an issue of severity "error" for required bindings and at least a "warning" for extensible bindings, including the ValueSet URL in diagnostics Given all data types and code bindings are valid When validated Then no data-type or terminology issues of severity "error" or "fatal" are returned
Error Reporting with JSONPath and Severity Classification
Given any validation failures are detected When the validator returns its OperationOutcome Then each OperationOutcome.issue includes: severity (one of information|warning|error|fatal), code, diagnostics text, and issue.expression containing the exact JSONPath to the offending element or missing path And the response includes aggregate counts of issues by severity Given multiple issues are found When the OperationOutcome is returned Then issues are sorted by severity (fatal/error before warning/information) and stable by JSONPath within severity
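The ordering rule — severity first, stable by JSONPath within severity — is a single sort key. Issue dicts here use the field names from the AC; the helper itself is illustrative:

```python
SEVERITY_ORDER = {"fatal": 0, "error": 1, "warning": 2, "information": 3}

def sort_issues(issues: list) -> list:
    """Order issues fatal/error before warning/information,
    then by JSONPath (issue.expression) within each severity."""
    return sorted(issues, key=lambda i: (SEVERITY_ORDER[i["severity"]],
                                         i.get("expression", "")))
```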
Payer-Specific Profile and Endpoint Preflight
Given a payer endpoint configuration with specific StructureDefinition/profile URLs and optional $validate capability When a bundle is validated Then the bundle is checked against the configured profiles (local validation) and, if enabled and reachable, remotely validated via $validate, combining results into a single OperationOutcome Given the remote $validate call is unreachable or times out within 3 seconds When validation completes Then local validation results are returned with an additional "warning" issue indicating remote validation was skipped due to endpoint unavailability, without blocking submission solely for that reason Given any payer-specific profile yields issues of severity "error" or "fatal" When validation completes Then the bundle is marked invalid for that payer and must not proceed to transmission
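The degrade-gracefully behavior for remote $validate can be sketched with the standard library; the endpoint URL is hypothetical, and a production client would also merge these results with local validation output:

```python
import json
import urllib.error
import urllib.request

def remote_validate(bundle: dict, validate_url: str, timeout_s: float = 3.0) -> dict:
    """POST a bundle to a payer $validate endpoint. On timeout or an
    unreachable endpoint, return a warning OperationOutcome instead of
    blocking submission, per the AC."""
    req = urllib.request.Request(
        validate_url,
        data=json.dumps(bundle).encode(),
        headers={"Content-Type": "application/fhir+json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout_s) as resp:
            return json.loads(resp.read())  # payer-returned OperationOutcome
    except (urllib.error.URLError, TimeoutError):
        return {"resourceType": "OperationOutcome",
                "issue": [{"severity": "warning", "code": "timeout",
                           "diagnostics": "remote $validate skipped: "
                                          "endpoint unavailable"}]}
```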
Submission Pipeline Gate and Batch API Processing
Given the submission pipeline attempts to send a bundle When validation returns any OperationOutcome.issue with severity "error" or "fatal" Then transmission is blocked, the submission is marked "blocked_by_validation", and the UI/API exposes the OperationOutcome for remediation Given the validation API receives a batch of up to 100 bundles with client-provided correlationIds When processed Then the API returns a result per bundle containing the correlationId and an OperationOutcome, without a single failure preventing validation of other bundles Given typical bundles (≤30 resources, total payload ≤2 MB, no remote $validate) When validated via API Then p95 end-to-end validation latency per bundle is ≤700 ms
Terminology and Code Set Validation
"As a physical therapist, I want code validation against current code sets and plan rules so that my prior auth requests are compliant and approved faster."
Description

Confirms that clinical and billing codes used in bundles (ICD-10-CM diagnoses, CPT/HCPCS procedures, SNOMED CT, and LOINC where applicable) are valid on service dates, not deprecated, and allowed by the payer plan when coverage rules are published. Supports expansion of value sets referenced by payer profiles, with local caching and fallbacks when external terminology services are unavailable. Flags mismatches between codes and resource fields, providing corrective hints.

Acceptance Criteria
Valid Codes on Service Dates
Given a FHIR bundle containing coded elements in Condition.code, Procedure.code, Observation.code, ServiceRequest.code, and Claim.item.productOrService with service dates present (e.g., Encounter.period, Procedure.performed[x], Observation.effective[x], Claim.item.serviced[x]) And code systems include ICD-10-CM, CPT, HCPCS, SNOMED CT, and LOINC When Terminology and Code Set Validation executes Then each code’s system URI must be canonical (ICD-10-CM=http://hl7.org/fhir/sid/icd-10-cm, CPT=http://www.ama-assn.org/go/cpt, HCPCS=http://www.cms.gov/Medicare/Coding/HCPCSLevelII, SNOMED CT=http://snomed.info/sct, LOINC=http://loinc.org) And the code must exist in the applicable CodeSystem version effective on the service date (using the element’s version if supplied, otherwise resolving by date) And deprecated/inactive-on-date codes are flagged as error E-CODE-001 And unknown/nonexistent codes are flagged as error E-CODE-002 And each finding includes resourceType/id, FHIRPath location, code, system, version (if any), service date evaluated, and severity And the overall result is Pass for this scenario only if zero errors are produced
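The canonical system URIs listed in the criterion above can be held in a single lookup table; the helper is a sketch of the URI check only, not of date-effective code existence:

```python
# Canonical system URIs exactly as required by the acceptance criterion.
CANONICAL_SYSTEMS = {
    "ICD-10-CM": "http://hl7.org/fhir/sid/icd-10-cm",
    "CPT": "http://www.ama-assn.org/go/cpt",
    "HCPCS": "http://www.cms.gov/Medicare/Coding/HCPCSLevelII",
    "SNOMED CT": "http://snomed.info/sct",
    "LOINC": "http://loinc.org",
}

def is_canonical_system(system_uri: str) -> bool:
    """True when a coding.system matches one of the canonical URIs above;
    a False result would surface as a terminology finding."""
    return system_uri in CANONICAL_SYSTEMS.values()
```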
Payer Plan Coverage Rule Check
Given a bundle with Coverage identifying payer and plan, and payer-published ValueSets for allowed diagnoses/procedures discovered via the payer endpoint When validating Claim.item.productOrService and Procedure.code against the plan’s allowed ValueSets Then codes contained in those elements must be members of the plan’s allowed ValueSets; matching is version-aware And disallowed codes are flagged as error E-COVER-002 including the ValueSet canonical URL and plan identifier And if payer coverage rules are unavailable or not published, record warning W-COVER-001 and mark the plan-allowance assertion as skipped without failing the bundle And the summary reports counts of allowed vs disallowed codes per plan
ValueSet Expansion with Cache and Fallback
Given a payer or profile-referenced ValueSet requires expansion for validation And the remote terminology service is responsive When expansion is requested Then the remote expansion is used and cached locally keyed by ValueSet canonical + version/compose hash with a TTL of 24 hours Given the remote terminology service is unavailable or exceeds a 2,000 ms timeout When expansion is requested Then the most recent cached expansion is used if within TTL and checks proceed with fallbackUsed=true noted in the result And if the cache is older than TTL, proceed with a warning W-TERM-STALE and include cacheAge And if no cache exists, mark impacted checks as indeterminate I-TERM-001 without blocking unrelated validations
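The cache-with-TTL fallback can be modeled with a small class; the key format and return shape are assumptions, and a real implementation would persist entries rather than keep them in memory:

```python
import time

class ExpansionCache:
    """Local cache for ValueSet expansions, keyed by canonical URL + version,
    with a default 24-hour TTL per the acceptance criterion."""

    def __init__(self, ttl_s: float = 24 * 3600):
        self.ttl_s = ttl_s
        self._store = {}  # key -> (expansion, stored_at)

    def put(self, key, expansion):
        self._store[key] = (expansion, time.monotonic())

    def get(self, key):
        """Return (expansion, stale). stale=True means older than TTL
        (the W-TERM-STALE case); a miss returns (None, None), which maps
        to the indeterminate I-TERM-001 outcome."""
        hit = self._store.get(key)
        if hit is None:
            return None, None
        expansion, stored_at = hit
        return expansion, (time.monotonic() - stored_at) > self.ttl_s
```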
Code-System-to-Resource Field Consistency
Given payer/profile bindings that require specific code systems or ValueSets for elements (e.g., Condition.code requires ICD-10-CM per profile; Claim.item.productOrService requires CPT/HCPCS; Observation.code requires LOINC when quantitative lab values are present) When validating coded elements in the bundle Then any element bound with strength=required that contains a code from a non-allowed system is flagged as error E-BIND-REQ with expected systems/ValueSets listed And any element bound with strength=extensible that contains a non-allowed system is flagged as warning W-BIND-EXT And each mismatch includes a corrective hint with the expected system(s) and, where cross-maps are available, up to 3 suggested target codes And findings specify the exact FHIRPath location, offending system/code, expected canonical(s), and profile URL that defined the binding
Actionable Validation Output
Given any terminology validation findings are produced When returning results Then each finding includes fields: issueCode (e.g., E-CODE-001), severity (error|warning|info|indeterminate), message, hint, system, code, version, serviceDateUsed, resourceType/id, FHIRPath, and docsUrl (payer/profile or code system reference) And the run summary includes totals by severity and by codeSystem, and an overall outcome of Pass | PassWithWarnings | Fail And the outcome is Fail if any error exists; PassWithWarnings if warnings exist and no errors; Pass if neither errors nor warnings exist
Resilience and Performance Under Terminology Outage
Given a PAS bundle containing 100–200 resources and up to 500 coded elements When validating with the remote terminology service available Then p95 end-to-end terminology validation completes in ≤ 3,000 ms on the standard staging environment Given the same bundle and the remote terminology service is unavailable When validating with cache fallback Then p95 validation completes in ≤ 2,000 ms when cached expansions exist And no additional errors are introduced solely due to the outage (only warnings/indeterminate related to fallback are permitted)
Payer Capability Discovery and Profile Matching
"As a revenue cycle manager, I want the system to adapt to each payer’s capabilities so that we submit compatible bundles the first time."
Description

Retrieves and caches payer CapabilityStatements, PAS IG conformance details, supported operations, required profiles, extensions, OAuth scopes, and attachment limits to tailor preflight rules per endpoint. Automatically selects the correct profile set for each submission based on payer, plan, and region, and fails fast when an endpoint lacks required capabilities. Refreshes metadata on schedule and on version change.

Acceptance Criteria
CapabilityStatement Retrieval and Caching
Given a configured payer base URL with optional plan and region identifiers When the system requests the FHIR CapabilityStatement and SMART configuration endpoints Then it receives a 200 response within 10 seconds and stores both raw payloads and parsed fields in cache keyed by payer+plan+region And cache entries include ETag/Last-Modified (if provided), fetchedAt, source URLs, and advertised FHIR version/IG versions And on 4xx/5xx or invalid payload, the system records error PRE-DISC-001, marks the endpoint unavailable, and uses last-known-good cache if not older than the configured TTL (default 7 days); otherwise preflight is blocked
Profile Set Selection by Payer/Plan/Region
Given a submission includes payer, plan, and region identifiers When preflight starts Then the system selects the highest compatible PAS profile set advertised for that endpoint And the selection includes required resource profiles, extensions, supported operations, OAuth scopes, and attachment constraints And the selected profile URLs and versions are logged and attached to the preflight report And if no compatible profile set exists, the system fails within 1 second before schema validation with error PRE-MATCH-001 and actionable guidance
PAS Operations and Conformance Validation
Given discovered endpoint capabilities and PAS IG conformance details When building preflight rules for a prior-authorization submission Then the system verifies support for required PAS operations (e.g., Claim/$submit and status retrieval), resource types, and mandated profiles/extensions And it verifies the advertised FHIR version matches the bundle's FHIR version And if any required operation, profile, or version compatibility is missing, preflight blocks submission with error PRE-OPS-001 and cites missing items
Attachment Limits and MIME Enforcement
Given discovered attachment policies (max per-attachment size, total bundle size, max count, and allowed MIME types) When evaluating DocumentReference resources in the bundle Then each attachment's size (base64-decoded) is <= the per-attachment limit, and the sum of attachments is <= the total bundle limit And the number of attachments does not exceed the maximum, and all MIME types are in the allowed list And if any constraint would be violated, preflight either auto-compresses/transcodes (when configured) to within limits or fails with PRE-ATT-001 before submission And a warning PRE-ATT-WARN is added when any attachment exceeds 80% of the per-attachment limit
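The size rules above — per-attachment limit, aggregate limit, count, and the 80% warning threshold — can be sketched directly. Note the AC requires sizes to be measured on the base64-decoded bytes, not the encoded string:

```python
import base64

def check_attachment_limits(attachments, per_limit, total_limit, max_count):
    """attachments: list of base64-encoded strings. Returns (errors, warnings)
    using the PRE-ATT-001 / PRE-ATT-WARN codes from the AC; message wording
    is illustrative."""
    errors, warnings = [], []
    sizes = [len(base64.b64decode(a)) for a in attachments]  # decoded bytes
    if len(sizes) > max_count:
        errors.append("PRE-ATT-001: attachment count exceeds maximum")
    for i, size in enumerate(sizes):
        if size > per_limit:
            errors.append(f"PRE-ATT-001: attachment {i} exceeds per-attachment limit")
        elif size > 0.8 * per_limit:
            warnings.append(f"PRE-ATT-WARN: attachment {i} exceeds 80% of limit")
    if sum(sizes) > total_limit:
        errors.append("PRE-ATT-001: aggregate size exceeds bundle limit")
    return errors, warnings
```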
OAuth Scope Discovery and Validation
Given SMART/OAuth discovery exposes required scopes for PAS operations and resources When requesting an access token for the target payer endpoint Then the requested scopes include all required scopes; tokens and scopes are cached per payer+plan+region with expiry honored and refresh attempted 60 seconds before expiry And if the granted token lacks required scopes, the system re-requests with required scopes when permitted; otherwise preflight fails with PRE-AUTH-002 and blocks submission And scope and token acquisition failures are logged with correlation IDs and do not attempt transmission
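The "granted token lacks required scopes" check is set subtraction over the space-delimited scope string in the token response; the scope names in the test are hypothetical examples, not payer-published values:

```python
def check_scopes(granted: str, required: set) -> set:
    """Return the set of missing scopes. A non-empty result corresponds to
    the PRE-AUTH-002 failure; `granted` is the space-delimited `scope`
    field from the OAuth token response."""
    return required - set(granted.split())
```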
Scheduled and Event-Driven Metadata Refresh
Given a configured refresh schedule (e.g., every 24 hours) and change detection via ETag/Last-Modified or version fields When a refresh cycle runs or a version change is detected during capability fetch Then the system refetches and updates capability caches atomically, re-computes profile mappings, and invalidates stale rules And queued submissions are re-evaluated against updated rules within 5 minutes And on repeated errors, refresh uses exponential backoff and caps to <= 1 request/minute per endpoint until recovery
Attachment Size, Format, and Compression Guardrails
"As a clinician, I want attachment size and format checks with compression options so that supporting documents are accepted without manual rework."
Description

Enforces payer-specific constraints on DocumentReference attachments, including MIME types, file extensions, individual and aggregate size limits, and binary encoding. Validates presence of mandatory metadata (type, category, author, date), verifies readable PDF/A for documents and resolution for images, and offers lossless or configurable compression when payloads exceed limits. Provides pre-send thumbnails and redaction tools for exercise imagery from MoveMate.

Acceptance Criteria
Payer-Specific MIME Type and Extension Validation
Given a FHIR DocumentReference with one or more attachments and a selected payer profile with an allowlist of MIME types and file extensions When FHIR Preflight validation runs Then each attachment.contentType SHALL be in the payer allowlist And each attachment file extension SHALL correspond to the declared MIME type per system mapping And any attachment with a disallowed MIME type or mismatched extension SHALL be flagged as Error and block submission And the error SHALL include the attachment identifier, declared type, detected type (via file signature), and the payer-accepted alternatives
Individual and Aggregate Size Limits with Auto-Compression
Given a bundle containing DocumentReference attachments and payer-specific limits for max attachment size and max aggregate bundle size When FHIR Preflight validation runs Then any attachment exceeding the individual limit SHALL be losslessly compressed where possible (e.g., PNG optimization, PDF object stream compression) And if the compressed result still exceeds the limit, the user SHALL be prompted to apply the configured lossy profile(s) with size estimate and preview And if after configured compression the file still exceeds the limit, the attachment SHALL be flagged as Error and block submission And aggregate bundle size SHALL be reduced via per-attachment compression proposals until under limit or else flagged as Error And post-compression files SHALL retain correct contentType and open successfully; the preflight report SHALL log original vs final size and compression method
Base64 Encoding, URL Fetch, and Type Integrity
Given an attachment provided inline via attachment.data When FHIR Preflight validation runs Then the data SHALL be valid base64 and decode without error And attachment.size SHALL equal the decoded byte length And the decoded bytes’ magic number SHALL match the declared attachment.contentType; mismatches SHALL be flagged as Error Given an attachment provided via attachment.url When FHIR Preflight validation runs Then the URL SHALL be HTTPS, resolvable, and return a Content-Type compatible with the payer allowlist And size SHALL be verified via HEAD or range request before full download; if size is unknown or exceeds limits, the attachment SHALL be flagged as Error And attachment SHALL not include both data and url simultaneously
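The magic-number check compares the decoded bytes' leading signature against the declared `contentType`. The table below covers only the three types named in this feature; a real allowlist would be payer-configured:

```python
# Leading file signatures for the formats this feature validates.
MAGIC_NUMBERS = {
    "application/pdf": b"%PDF",
    "image/png": b"\x89PNG\r\n\x1a\n",
    "image/jpeg": b"\xff\xd8\xff",
}

def content_type_matches(decoded: bytes, declared: str) -> bool:
    """True when the decoded bytes start with the signature for the
    declared contentType; a False result is the mismatch Error in the AC."""
    sig = MAGIC_NUMBERS.get(declared)
    return sig is not None and decoded.startswith(sig)
```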
Mandatory DocumentReference Metadata Validation
Given a DocumentReference prepared for submission When FHIR Preflight validation runs Then DocumentReference.type (CodeableConcept), category, author, and date fields SHALL be present and valid per payer profile And author.reference SHALL resolve to a Practitioner or Organization present in the bundle And date SHALL be an RFC 3339 timestamp not in the future And missing or invalid required metadata SHALL be flagged as Error and block submission And if title is missing, it SHALL be auto-populated from the source filename without exceeding payer title length constraints
PDF/A Conformance and Readability Check
Given an attachment with contentType application/pdf When FHIR Preflight validation runs Then the PDF SHALL open without password, not be corrupted, and conform to payer-required PDF/A level (e.g., PDF/A-1b or higher) And prohibited features (embedded files, JavaScript, multimedia, encryption) SHALL be absent or removed during compression; if removal is not possible, flag as Error And the validator SHALL report page count and dimensions; if OCR is required by payer and text is not extractable, flag as Warning or Error per profile
Image Resolution, EXIF Sanitization, and Orientation
Given an image attachment with contentType image/jpeg or image/png When FHIR Preflight validation runs Then the image SHALL meet payer-configured minimum dimensions (e.g., >= 1024x768 px) or minimum DPI (e.g., >= 150 DPI) And orientation SHALL be normalized so the rendered image is upright And EXIF containing GPS or device identifiers SHALL be stripped prior to submission And if dimensions exceed payer maximums, downscaling SHALL preserve readability while meeting size and format limits; failures SHALL be flagged as Error or Warning per profile
Pre-Send Thumbnails and Redaction Application
Given a user reviews attachments in the preflight UI When they open an image or PDF preview Then a thumbnail/preview SHALL render within 1 second for images up to 5 MB and within 2 seconds for PDFs up to 20 pages And the user SHALL be able to apply rectangle blackout/blur and face-blur redactions and see the result immediately And upon save, redactions SHALL be applied to the outbound attachment and thumbnails; unredacted versions SHALL not be sent And the system SHALL log redaction actions with timestamp and user ID And generated thumbnails SHALL not be included in the bundle unless explicitly permitted by payer and SHALL respect size limits
Identifier and Credentials Verification
"As a billing coordinator, I want identifier and credential verification so that submissions aren’t rejected for missing or invalid IDs."
Description

Verifies presence and format of required identifiers and credentials prior to submission, including patient member ID, payer plan ID, organization and practitioner NPIs, taxonomy codes, referral numbers, and rendering versus supervising roles. Confirms OAuth client configuration and scopes for the target endpoint and checks that identifiers are correctly placed across resources and references to avoid cross-link failures.

Acceptance Criteria
Required Identifier Presence and Format
Given a FHIR R4 bundle prepared for PAS/DocumentReference submission to a configured payer endpoint When Preflight validates required identifiers Then it shall fail with explicit errors for each missing or malformed item: patient member ID (Coverage.subscriberId or Coverage.identifier with payer system), payer plan ID (Coverage.class where class.type.coding.code = plan), organization NPI (Organization.identifier with system http://hl7.org/fhir/sid/us-npi and a valid 10-digit value), practitioner NPI (Practitioner.identifier with system http://hl7.org/fhir/sid/us-npi), and referral number when the endpoint requires it And it shall pass when all required identifiers are present, use correct identifier.system URIs, and satisfy endpoint format rules
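A "valid 10-digit value" for an NPI is usually taken to mean more than length: CMS defines a Luhn check digit computed over the NPI prefixed with the card-issuer constant 80840. A sketch of that rule (assuming the standard CMS check-digit scheme):

```python
def is_valid_npi(npi: str) -> bool:
    """True for a 10-digit NPI whose Luhn checksum passes with the
    80840 health-industry prefix, per the CMS check-digit scheme."""
    if len(npi) != 10 or not npi.isdigit():
        return False
    digits = [int(c) for c in "80840" + npi]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```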
Rendering vs Supervising Provider Role Validation
Given the bundle contains provider roles and care team entries When Preflight inspects Claim.careTeam and PractitionerRole resources Then exactly one rendering provider shall be designated and referenced in Claim.careTeam with an endpoint-accepted rendering role code And an optional supervising provider may be present with an endpoint-accepted supervising role code and must not reference the same Practitioner as the rendering provider And each provider reference shall resolve to a Practitioner with a valid NPI and to a PractitionerRole linked to the correct Organization
Provider Taxonomy Code Verification
Given practitioner and/or organization taxonomy codes are provided or required by the endpoint When Preflight validates taxonomy Then each taxonomy code shall exist in the configured NUCC taxonomy set and be active on the submission date And taxonomy shall be placed on PractitionerRole.specialty and/or Organization.type per endpoint mapping with system http://nucc.org/provider-taxonomy And failures shall identify the invalid code and the JSONPath to the offending element
OAuth Client Configuration and Scopes Verification
Given a target payer endpoint with SMART on FHIR or OAuth 2.0 configuration When Preflight validates OAuth client credentials and scopes Then the configuration shall include client_id, token endpoint, auth method, and grant type appropriate for the endpoint And access token acquisition (live or dry-run validation) shall succeed or an existing token shall be unexpired and have an audience matching the endpoint And the configured or returned scopes shall include all endpoint-required scopes for PAS and DocumentReference operations; otherwise Preflight shall block submission and report insufficient_scope with the missing scopes
Identifier Placement and Cross-Resource Reference Integrity
Given the bundle includes Patient, Coverage, Organization, Practitioner, PractitionerRole, Claim, and any DocumentReference When Preflight validates identifier placement and references Then member ID shall be present in Coverage.subscriberId or Coverage.identifier with the payer system And payer plan ID shall be present in Coverage.class where class.type.coding.code = plan And Claim.patient shall reference Patient, Claim.insurance.coverage shall reference Coverage, PractitionerRole.practitioner shall reference Practitioner, PractitionerRole.organization and DocumentReference.custodian shall reference Organization And all references shall resolve within the bundle and all identifier.system values shall be canonical and non-empty
Endpoint-Specific Identifier Rules from Capability Statement
Given the payer endpoint exposes a FHIR CapabilityStatement or a configured endpoint rule set When Preflight applies endpoint-specific rules Then it shall enforce required identifiers, paths, and profile bindings declared for PAS and DocumentReference for that endpoint And it shall reject identifiers placed in disallowed paths for that endpoint And it shall record the endpoint rule version used in the preflight report
Preflight Report, Severity, and Submission Gate
"As an operations lead, I want a clear preflight report and a submission gate so that our team can remediate issues quickly and maintain auditability."
Description

Generates a consolidated, human-readable and machine-readable preflight report with error and warning severities, FHIR issue codes, impacted resource paths, and remediation steps. Surfaces results in the MoveMate clinician/admin dashboard, exposes a REST endpoint and webhook for automation, and blocks submission on critical errors while allowing override with rationale for warnings. Stores reports in the patient’s audit log for compliance.

Acceptance Criteria
Dashboard Preflight Report Rendering
Given a validated bundle with mixed issues When the clinician opens the patient’s Preflight panel Then the report renders within 2 seconds and shows total counts by severity (fatal, error, warning, information). Then each issue shows severity, FHIR issue code, impacted resource path(s)/expression, and a remediation step in plain language. Then issues are sortable by severity, resource type, and issue code, and filterable to “blocking only”. Then any oversized attachments display file name and size alongside max allowed size for the payer. Then a “Download JSON” action provides the machine-readable report.
REST Endpoint: Machine-Readable Preflight Report
Given a valid preflightId and OAuth scope preflight.read When GET /preflight/{id} is called Then respond 200 application/fhir+json with a FHIR OperationOutcome conformant body containing issue.severity, issue.code, issue.expression or location, and issue.diagnostics including remediation text. Then include a top-level summary object with totals by severity and overallStatus ∈ {blocking, warning-only, clear}. When preflightId does not exist Then respond 404 with OperationOutcome.issue.code = not-found. When auth is missing or insufficient Then respond 401/403 respectively.
Webhook Delivery on Preflight Completion
Given a clinic has configured a webhook URL When a preflight completes Then POST a JSON payload containing preflightId, patientId, bundleId, payerId, overallStatus, severityTotals, completedAt, and a URL to the report. Then deliveries occur within 10 seconds of completion. Then on non-2xx response, retry up to 5 times with exponential backoff starting at 30 seconds; stop after first 2xx. Then the payload conforms to a published JSON schema; invalid payloads are rejected by the receiver with a 4xx and retried as per policy.
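The retry schedule above ("up to 5 times, exponential backoff starting at 30 seconds") can be computed ahead of time; doubling per attempt is an assumption, since the AC fixes only the first delay and the attempt count:

```python
def retry_delays(first_delay_s: float = 30.0, max_attempts: int = 5) -> list:
    """Exponential (doubling) backoff schedule for webhook redelivery:
    30s, 60s, 120s, ... Delivery stops after the first 2xx response."""
    return [first_delay_s * (2 ** i) for i in range(max_attempts)]
```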
Submission Gate on Blocking Issues
Given one or more OperationOutcome issues with severity ∈ {fatal, error} exist for the current bundle When a user attempts PAS/DocumentReference submission Then the UI Submit action is disabled and displays “Resolve blocking issues before submission”. When submission is attempted via API Then respond 409 with an OperationOutcome including the blocking issues. When only warning/information issues remain Then the UI Submit action is enabled with an “Override with rationale” flow.
Warning Override With Rationale Capture
Given only warning/information issues remain When the user selects Override with rationale Then the system requires a free-text rationale (minimum 10 characters) and records userId, timestamp, and list of warning issue identifiers. Then submission proceeds and the rationale and preflightId are attached to the PAS/DocumentReference transaction metadata and patient audit log entry. When rationale is missing or too short Then block override and display validation error; API requests without overrideRationale are rejected with 400.
Audit Log Persistence and Retrieval
Given a preflight run completes When results are finalized Then the full machine-readable report and a rendered snapshot are stored in the patient’s immutable audit log with preflightId, bundle hash, payer endpoint, outcome summary, actor, and createdAt. Then audit entries are read-only and retrievable via UI and GET /patients/{id}/audit filtered by type=preflight. Then downloading the stored report returns the original JSON exactly (checksum match).
Validation Coverage: Codes, Structure, Attachments
Given a PAS/DocumentReference bundle with any of: invalid code system/code, missing required identifier, schema violations, or attachments exceeding payer max size When preflight runs Then issues are emitted respectively with severities fatal/error, FHIR codes invalid/required/structure/too-long (or equivalent), and expressions pointing to the exact element. Then each issue includes a remediation step that names the correct code system or required field, or states the allowed max size. Then the impacted resourceId and element path are included for every issue.

Auth Tracker

Tracks each authorization from submission to closure with IDs, timestamps, and SLA countdowns. Notifies coordinators when due dates approach or evidence is requested, and logs a clean, shareable timeline that makes status checks fast and case closures tidy.

Requirements

Authorization Intake & Metadata Capture
"As an authorization coordinator, I want to record all relevant details for a new payer authorization in one place so that I can track it accurately and avoid rework later."
Description

Capture and validate all core authorization identifiers and context upon creation or import, including payer, plan, member ID, patient, episode of care, CPT/HCPCS codes, ICD-10 diagnoses, requested units/visits, effective dates, servicing provider, clinic location, submission channel (portal/fax/phone/API), and payer authorization number when available. Support creation from template, manual entry, or CSV/EHR import. Link the authorization to the patient’s MoveMate treatment plan and exercise program so utilization and adherence can be surfaced alongside status. Auto-generate a unique internal ID, persist initial timestamps (created, submitted), and allow attaching supporting documentation at intake. Enforce required fields, format validation, and a data model that supports multiple authorizations per patient, renewals/extensions, and cross-linking to visits.

Acceptance Criteria
Manual Intake: Required Fields & Validation
Given I am creating a new authorization via Manual Entry When I attempt to save with any required field missing (payer, plan, member ID, patient, episode of care, CPT/HCPCS codes, ICD-10 diagnoses, requested units/visits, effective start date, effective end date, servicing provider, clinic location, submission channel) Then the record is not saved, each missing field is outlined with an inline error, and an error banner lists all missing fields And the submission channel value must be one of {Portal, Fax, Phone, API, Manual} And requested units/visits must be a positive integer (> 0) And the effective end date must be the same as or later than the effective start date And CPT codes must be 5-digit numeric; HCPCS codes must be 1 letter followed by 4 digits And each ICD-10 code must match the ICD-10-CM 2025 format rules And the payer authorization number is optional at intake; when provided, it must be non-empty and ≤ 64 characters
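The code-format rules above can be expressed as regular expressions. A sketch: the CPT and HCPCS patterns follow the criterion directly; the ICD-10-CM pattern is a simplified approximation (letter, digit, alphanumeric, optional dot plus up to four more characters), not the full 2025 rule set.

```python
import re

CPT_RE = re.compile(r"^\d{5}$")            # 5-digit numeric, per the criterion
HCPCS_RE = re.compile(r"^[A-Z]\d{4}$")     # 1 letter + 4 digits, per the criterion
ICD10_RE = re.compile(r"^[A-TV-Z]\d[0-9A-Z](\.[0-9A-Z]{1,4})?$")  # approximation

def validate_codes(cpt=(), hcpcs=(), icd10=()):
    """Return a list of format errors for the supplied procedure/diagnosis codes."""
    errors = []
    errors += [f"Invalid CPT code: {c}" for c in cpt if not CPT_RE.match(c)]
    errors += [f"Invalid HCPCS code: {c}" for c in hcpcs if not HCPCS_RE.match(c)]
    errors += [f"Invalid ICD-10 code: {c}" for c in icd10 if not ICD10_RE.match(c)]
    return errors
```

For example, `validate_codes(cpt=["97110"], hcpcs=["G0283"], icd10=["M54.5"])` passes cleanly, while a four-digit CPT or lowercase HCPCS code produces an inline-error entry.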
CSV/EHR Import: Mapping, Validation, and Partial Success
Given I import authorizations via CSV or EHR export When I map incoming fields to internal schema (payer, plan, member ID, patient, episode of care, CPT/HCPCS, ICD-10, requested units/visits, effective dates, servicing provider, clinic location, submission channel, payer authorization number) Then the system validates required fields and formats per the same rules as manual entry And if a required column is not mapped, the import is blocked with a specific message naming the unmapped columns And if some rows fail validation, valid rows are imported and invalid rows are rejected with a downloadable error report including row number and reason(s) And the import summary displays total rows processed, succeeded, and failed And date fields accept ISO 8601 (YYYY-MM-DD) or MM/DD/YYYY and are normalized to UTC And duplicates detected by composite key {payer, plan, member ID, patient, episode of care, payer authorization number (if present)} are skipped with a duplicate notice
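The duplicate-detection step can be sketched as a composite-key check. The snake_case field names below are hypothetical stand-ins for the mapped schema fields named in the criterion.

```python
def dedupe_rows(rows):
    """Partition import rows into (imported, skipped) by the composite key
    {payer, plan, member ID, patient, episode of care, payer auth number}."""
    key_fields = ("payer", "plan", "member_id", "patient",
                  "episode_of_care", "payer_authorization_number")
    seen, imported, skipped = set(), [], []
    for row in rows:
        key = tuple(row.get(f) for f in key_fields)  # None when field absent
        (skipped if key in seen else imported).append(row)
        seen.add(key)
    return imported, skipped

sample = {"payer": "A", "plan": "X", "member_id": "1", "patient": "p1",
          "episode_of_care": "e1", "payer_authorization_number": "PA-9"}
imported, skipped = dedupe_rows([sample, dict(sample)])
```

In the import summary, `len(imported)` and `len(skipped)` would feed the succeeded and duplicate counts respectively.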
Unique Internal ID and Initial Timestamps
Given an authorization is created by any method (manual, template, CSV/EHR import) When the record is saved Then the system assigns a globally unique, immutable internal authorization ID And created_at is persisted in UTC with millisecond precision And if a submitted date is provided at intake, submitted_at is persisted to that value in UTC; otherwise submitted_at remains null And uniqueness constraints prevent any duplicate internal authorization ID
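A minimal sketch of the ID and timestamp assignment, assuming a UUIDv4 satisfies the global-uniqueness requirement (`new_authorization_record` is a hypothetical helper name):

```python
import uuid
from datetime import datetime, timezone

def new_authorization_record(submitted_at=None):
    """Assign an immutable internal ID and a UTC created_at with ms precision."""
    now = datetime.now(timezone.utc)
    return {
        "internal_id": str(uuid.uuid4()),          # globally unique, immutable
        "created_at": now.isoformat(timespec="milliseconds"),
        "submitted_at": submitted_at,              # stays null unless provided
    }
```

Two consecutive calls yield distinct `internal_id` values, and `submitted_at` remains `None` when no submitted date is supplied at intake.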
Link to Patient Treatment Plan and Exercise Program
Given a patient and episode of care are selected during intake When I save the authorization Then the authorization is linked to the patient record, the active treatment plan for that episode, and the current exercise program And the authorization detail view displays Utilization-to-Date (visits used vs. requested) and Exercise Adherence summaries (e.g., 7-day and 30-day completion rates) sourced from the linked records And the patient linkage is immutable after save; changing the episode updates the linkage while preserving historical references
Supporting Documentation Attachment at Intake
Given I am on the authorization intake screen When I attach supporting documents Then the system accepts multiple files of types PDF, TIFF/TIF, JPG/JPEG, PNG And each file must be ≤ 25 MB and the total upload size ≤ 100 MB And upload progress is shown per file; failed uploads display a specific error message And files are scanned for viruses; any failed scan is rejected and not stored And on success, each attachment is listed with filename, type, size, uploaded_by, and uploaded_at, with inline preview for PDFs/images where supported
Multiple Authorizations, Renewals/Extensions, and Visit Cross-Linking
Given a patient may require more than one authorization When I create additional authorizations with overlapping effective dates for the same episode Then the system allows creation and displays a non-blocking overlap warning And when creating a renewal/extension, I can select a prior authorization to carry forward payer, plan, codes, provider, and clinic, and the new record stores a renewal_of reference And visit records can be linked to exactly one authorization at a time; linking a visit increments that authorization’s utilization counter And when utilization would exceed requested units/visits, the system prevents additional visit linking unless an override reason is recorded
Template-Based Creation with Pre-Filled Metadata
Given I create an authorization from a saved template When the template is applied Then payer, plan, CPT/HCPCS, ICD-10, servicing provider, clinic location, and submission channel are pre-filled from the template And all pre-filled values are validated using the same rules as manual entry And any required field not provided by the template must be completed before save And the saved authorization records template_id and template_version used
Lifecycle State Machine & Timestamps
"As a clinic admin, I want consistent statuses and automatic timestamps so that our team can see exactly where each authorization stands without guesswork."
Description

Implement a normalized authorization lifecycle with explicit states and valid transitions (e.g., Draft, Submitted, Acknowledged, Pending Review, Info Requested, Resubmitted, Approved, Partially Approved, Denied, Appealed, Expired, Closed). Auto-stamp each transition with timestamp, actor, and source (manual, API, webhook) and preserve an immutable event log with before/after values. Expose current state, previous state, days in state, and overall case age to list/detail views and APIs. Support pausing/resuming SLA timers during Info Requested and automatically updating derived metrics when transitions occur.

Acceptance Criteria
Enforce Valid Lifecycle Transitions
Given an authorization in Draft When a transition to Submitted is requested via POST /authorizations/{id}/transitions Then the state changes to Submitted, previous_state is Draft, days_in_state resets to 0, and a transition event is appended Given an authorization in Approved When a transition to Submitted is requested Then the request is rejected with HTTP 409 Conflict, no event is created, and the state remains Approved Given the allowed transition map per lifecycle definition When any transition outside the allowed map is attempted Then it is rejected with HTTP 409 Conflict, no event is created, and a validation error is returned
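The transition map itself is not enumerated in the criteria; the edges below are an assumed reading of the states listed in the description, shown only to illustrate the 409-on-invalid-transition behavior.

```python
# Hypothetical allowed-transition map inferred from the lifecycle description.
ALLOWED = {
    "Draft": {"Submitted"},
    "Submitted": {"Acknowledged", "Pending Review"},
    "Acknowledged": {"Pending Review"},
    "Pending Review": {"Info Requested", "Approved", "Partially Approved", "Denied"},
    "Info Requested": {"Resubmitted", "Pending Review"},
    "Resubmitted": {"Pending Review"},
    "Denied": {"Appealed", "Closed"},
    "Appealed": {"Approved", "Denied"},
    "Approved": {"Closed", "Expired"},
    "Partially Approved": {"Closed", "Expired"},
    "Expired": {"Closed"},
    "Closed": set(),
}

def transition(current, target):
    """Return the new state, or raise (mapped to HTTP 409) for invalid edges."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"409 Conflict: {current} -> {target} is not allowed")
    return target
```

Under this map, `transition("Draft", "Submitted")` succeeds, while `transition("Approved", "Submitted")` raises, matching the rejected case in the criterion.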
Transition Metadata Auto-Stamping (Timestamp, Actor, Source)
Given a transition from Submitted to Acknowledged is received via webhook When the event is processed Then the event record includes timestamp in UTC ISO-8601 (ends with 'Z'), actor_id is populated, source="webhook", before_state="Submitted", after_state="Acknowledged" Given a transition is performed by an authenticated user through the UI When the transition completes Then source="manual" and actor_id equals the user's id Given a transition is performed via the public API using a service token When the transition completes Then source="api" and actor_id equals the calling client/service id Given multiple transitions occur for the same authorization When events are inspected Then their timestamps are monotonically non-decreasing
Immutable Event Log with Before/After Values
Given transition events exist for an authorization When a client attempts to update or delete an event via any API or admin interface Then the operation is rejected (HTTP 403 or 405), no changes are made, and a security audit entry is recorded Given transition events exist When GET /authorizations/{id}/events is called Then each entry is append-only, ordered by created_at ascending, and contains: id, timestamp, actor_id, source, before_values.state, after_values.state, and keys for any other fields whose values changed Given a new valid transition occurs When the event log is fetched Then a new entry appears at the end with correct before/after values and no prior entries were altered
State Attributes Exposed via API and UI
Given an authorization with known transitions at times t0 (creation) and t1 (last transition) When GET /authorizations and GET /authorizations/{id} are called Then each response includes current_state, previous_state, days_in_state, and case_age_days And current_state equals the after_state of the latest event And previous_state equals the before_state of the latest event And days_in_state equals floor((now - t1)/86400) And case_age_days equals floor((now - t0)/86400) Given the authorization appears in the web list and detail views When the screens are rendered Then the displayed current state, previous state, days in state, and case age match the API values exactly
SLA Timer Pause/Resume During Info Requested
Given an authorization in Pending Review with an SLA due_at in the future When it transitions to Info Requested Then the SLA countdown pauses, paused_duration begins accumulating, and due_at is extended by the paused duration Given the authorization remains in Info Requested for 12 hours When it transitions to Resubmitted (or back to Pending Review) Then the SLA countdown resumes, due_at is extended by 12 hours, and days_in_state resets to 0 Given the system runs SLA breach detection When an authorization is in Info Requested Then it is excluded from breach alerts while paused
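The pause/resume arithmetic reduces to extending `due_at` by the paused interval. A sketch:

```python
from datetime import datetime, timedelta, timezone

def resume_due_at(due_at, paused_at, resumed_at):
    """Extend due_at by the time spent paused in Info Requested, per the rule above."""
    return due_at + (resumed_at - paused_at)
```

For the 12-hour example in the criterion: pausing at 09:00 and resuming at 21:00 the same day pushes `due_at` out by exactly 12 hours.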
Derived Metrics Recalculation on Transition
Given any valid state transition is accepted When the transition event is committed Then current_state and previous_state are updated, days_in_state resets to 0, SLA remaining time is recalculated, and APIs reflect the new values within 5 seconds Given two transitions occur in quick succession (< 60 seconds) When derived fields are read Then case_age_days remains non-decreasing, days_in_state never becomes negative, and values are consistent with the latest event Given a transition fails validation When derived fields are read Then no derived metric changes are observed
Automatic Expiration via SLA Job
Given an authorization is in a non-terminal state and due_at has passed When the scheduled SLA job runs Then a transition to Expired is created automatically with source="system", an event is appended with timestamp, actor_id set to system, and current_state becomes Expired Given a user attempts to set Expired manually while due_at has not passed When the transition request is submitted Then it is rejected with HTTP 409 Conflict and no event is created
Payer SLA Templates & Rules
"As an operations manager, I want to define SLAs per payer so that countdowns and escalations reflect the rules we’re actually held to."
Description

Provide configurable SLA templates per payer/plan and request type (initial, extension), including calendar vs business day calculations, holiday calendars, state-specific rules, and anchor events (e.g., countdown from Submission or Acknowledgment). Support rule versioning with effective dates, audit history, and case-level overrides with reason codes. Preload a curated library of common PT payer SLAs to accelerate setup, and expose admin UI to edit and publish rules without code changes.

Acceptance Criteria
Payer/Plan Template: Calendar vs Business Days with Holiday Calendars
Given an admin creates an SLA template for Payer A, Plan X and Request Type "Initial" with duration 14 business days and assigns Holiday Calendar "US-Federal" When the template is saved and applied to an authorization submitted on 2025-07-01 Then the due date is calculated as 14 business days after 2025-07-01, excluding weekends and dates in "US-Federal", and the due date appears on the case Given the template's day type is changed to 14 calendar days When re-applied to the same case Then the due date updates to 14 calendar days after 2025-07-01 Given separate durations exist for Request Type "Extension" (7 business days) and "Initial" (14 business days) under the same payer/plan When applied to an Extension case submitted on 2025-07-01 Then the due date is 7 business days after 2025-07-01 Given the computed business-day due date falls on a holiday in "US-Federal" When calculated Then the due date rolls to the next business day
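The business-day calculation can be sketched as a counting loop that skips weekends and holiday-calendar dates; because the loop only ever lands on business days, the roll-forward rule is satisfied implicitly. The example below assumes the US-Federal calendar marks 2025-07-04 as a holiday.

```python
from datetime import date, timedelta

def business_due_date(submitted, business_days, holidays=frozenset()):
    """Add N business days to a submission date, excluding weekends and holidays."""
    d, remaining = submitted, business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5 and d not in holidays:  # Mon-Fri, not a holiday
            remaining -= 1
    return d

us_federal = {date(2025, 7, 4)}  # assumed subset of the US-Federal calendar
initial_due = business_due_date(date(2025, 7, 1), 14, us_federal)   # Initial: 14 bd
extension_due = business_due_date(date(2025, 7, 1), 7, us_federal)  # Extension: 7 bd
```

With Independence Day excluded, 14 business days from 2025-07-01 lands on 2025-07-22, and the 7-business-day Extension duration lands on 2025-07-11.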
Anchor Event Selection: Submission vs Acknowledgment
Given the template anchor is "Submission" When a case has a Submission timestamp Then the SLA countdown starts from the Submission timestamp and the remaining time reflects the configured duration Given the anchor is changed to "Acknowledgment" When an Acknowledgment timestamp is recorded after submission Then the countdown starts from the Acknowledgment timestamp and an audit entry records user, timestamp, and old/new anchor Given no Acknowledgment timestamp exists and anchor is "Acknowledgment" When viewing the case Then the SLA panel shows "Pending anchor" and no countdown is displayed
State-Specific Rule Application
Given a state-specific rule exists for state "CA" and a default rule for all other states When a case with state context "CA" is created Then the CA rule is applied and the case displays the applied state Given a case with state context "NV" is created When rules are evaluated Then the default rule is applied Given both a state-specific and default rule could match When selecting the rule Then the system deterministically prefers the state-specific rule and shows the selection rationale
Rule Versioning with Effective Dates and Historical Evaluation
Given rule v1 effective 2025-01-01 to 2025-09-30 and rule v2 effective 2025-10-01 onward for Payer A/Plan X When a case is submitted on 2025-09-28 Then v1 is applied When a case is submitted on 2025-10-05 Then v2 is applied Given a case that used v1 exists When v2 is published Then the case retains v1 and the case shows applied version, effective range, and publish timestamp Given an auditor opens rule history When viewing the audit Then create/edit/publish entries show user, timestamp, and field-level changes
Case-Level SLA Override with Reason Codes and Audit
Given an active case with template-calculated due date When a coordinator overrides the due date or assigns a different template Then the system requires selecting a reason code and entering a note and records user, timestamp, old value, and new value in the audit Given an override is active When viewing the case Then the SLA badge displays "Overridden" with the reason code and who/when Given an override exists When the coordinator clicks "Revert to template" Then the original template-calculated values are restored and the revert is audited
Preloaded Library: Import, Edit, and Publish
Given a curated library of common PT payer SLAs is available When an admin searches "Aetna PT Initial" and imports it Then a Draft template is created with default parameters (duration, day type, anchor, holidays, state variants, request types) and source "Library" Given the draft is edited When the admin changes duration to 10 business days and saves Then validation runs and no errors are shown Given the draft passes validation When the admin publishes it Then the template becomes available for assignment and the audit shows source "Library" and editor details
Admin UI: Validate, Publish, and No-Code Changes
Given an admin creates or edits a draft rule When they click Validate Then publishing is blocked if required fields are missing (payer/plan, request type, day type, duration, anchor, holiday calendar) and field-level errors are displayed Given a rule publishes successfully When it is assigned to a payer/plan Then new cases created after publish immediately use the rule without code changes or restarts Given two rules overlap for the same payer/plan/request type and state with overlapping effective dates When publishing either rule Then the system prevents publish and explains the conflict window
SLA Countdown & Breach Alerts
"As a coordinator, I want visible timers and alerts for upcoming SLA deadlines so that I can prioritize work and prevent breaches."
Description

Compute real-time SLA countdowns based on configured rules and lifecycle timestamps, with color-coded timers (green/amber/red) displayed on lists and detail pages. Adjust for business-day calendars, clinic/payer time zones, and paused intervals during Info Requested. Emit pre-breach and breach events for notification pipelines and reporting. Surface breach reasons and elapsed time post-deadline, and provide quick actions to prioritize at-risk cases.

Acceptance Criteria
Real-time Countdown and Color Coding on Lists and Details
Given an authorization with an SLA rule of 5 business days from Submission timestamp in the configured timezone When the current time is within the SLA window Then the remaining time is computed in business time and updates at least every 60 seconds And the timer is green when remaining time > 24 hours, amber when remaining time is more than 4 hours and up to 24 hours, and red when remaining time is 4 hours or less (default thresholds) And the same value and color are displayed consistently on both List and Detail pages And when the SLA rule configuration is changed, the countdown and color reflect the new rule within 60 seconds
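The default thresholds reduce to a three-way comparison; the exact-4-hour boundary is treated as red in this sketch.

```python
def sla_color(hours_remaining):
    """Map remaining business hours to the default timer color.
    Green > 24 h, amber 4-24 h, red at 4 h or less."""
    if hours_remaining > 24:
        return "green"
    if hours_remaining > 4:
        return "amber"
    return "red"
```

This keeps list and detail views consistent by construction: both render from the same computed remaining time.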
Business-Day Calendar and Time Zone Adjustments
Given the business-day calendar defines working hours and holidays and the SLA timezone is set to the payer's timezone When an authorization is submitted on a Friday 16:30 local time with a 16 business-hour SLA Then the countdown excludes non-business hours and the weekend, continuing on the next business day, and computes a due time of Monday 16:30 local time And when the SLA timezone configuration is switched to the clinic timezone, the due time recomputes accordingly within 60 seconds And dates marked as holidays in the selected calendar are excluded from SLA consumption
Pause Countdown During Info Requested
Given an authorization moves to Info Requested at timestamp T1 and returns to in-progress at timestamp T2 When viewing the SLA timer Then time between T1 and T2 is excluded from the SLA countdown (no decrement during pause) And the UI shows a Paused state with a neutral indicator during Info Requested And multiple Info Requested intervals are cumulatively excluded And pause/resume timestamps are recorded in the audit timeline
Pre-breach and Breach Events Emission
Given a pre-breach threshold of 8 business hours before due When remaining SLA reaches exactly 8 business hours Then a PreBreach event is emitted exactly once with payload fields: eventId, authorizationId, caseId, payerId, clinicId, slaRuleId, dueAt, secondsRemaining, severity And when the due time is crossed Then a Breach event is emitted exactly once with payload fields: eventId, authorizationId, caseId, payerId, clinicId, slaRuleId, dueAt, secondsOverdue, breachReason, severity And duplicate events are not emitted on refresh or repeated evaluations (idempotency by eventId) And events are published to the notifications topic and available to the reporting store within 2 minutes
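Exactly-once emission by `eventId` can be sketched with an idempotency set; a durable store keyed by `eventId` would replace the in-memory set in practice.

```python
_emitted = set()  # in production: a durable idempotency store keyed by eventId

def emit_once(event_id, publish):
    """Publish a PreBreach/Breach event exactly once per eventId.
    Returns True if published, False if suppressed as a duplicate."""
    if event_id in _emitted:
        return False
    _emitted.add(event_id)
    publish(event_id)
    return True
```

Repeated evaluations of the same authorization then call `emit_once` with the same `eventId` and are suppressed, as the criterion requires.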
Post-deadline Breach Reason and Overdue Display
Given an authorization is past its SLA due time When viewing the List or Detail page Then the timer displays Overdue with elapsed time since due in HH:MM format And a breachReason code and human-readable label are displayed from the configured reason mapping And the displayed breachReason matches the breachReason in the emitted Breach event payload And selecting the breachReason reveals timeline segments that contributed to the breach
Quick Actions for At-Risk Cases
Given a user with Coordinator role is viewing the authorization list When an item is amber or red Then quick actions Assign to Me, Escalate, and Add Note are visible and enabled And quick actions are not shown for green items And invoking an action logs an activity entry, updates priority for Escalate, and confirms within 1 second And bulk selection allows Assign to Me and Escalate on multiple amber/red items with an activity entry per item
Evidence Request Handling & Due Dates
"As a physical therapist, I want to attach the exact documents a payer requested and pull in adherence data automatically so that I can respond quickly and completely."
Description

Log payer requests for additional information with request date, due date, requestor details, and a checklist of requested items (e.g., progress note, HEP adherence, imaging). Enable secure upload/attachment with item-level tagging, versioning, virus scanning, and file-type/size validation. Pull MoveMate exercise adherence summaries and PT notes directly into the case to satisfy requests quickly. Track completion status per item, capture resubmission timestamp, and update the lifecycle accordingly.

Acceptance Criteria
Create and Validate Evidence Request Record
- Given an authorization case is open, When a coordinator logs a new evidence request by entering request date/time, due date/time, requestor name, organization, and contact method, Then the system requires all fields and prevents save until completed. - Given a request date/time is provided, When a due date/time is entered, Then the due date must be later than the request date and not in the past relative to the clinic timezone. - Then all stored timestamps are persisted in UTC and displayed in the clinic timezone with offset. - Then an SLA countdown to the due date is displayed (days/hours), with color states: >72h green, 24–72h amber, <24h red, and overdue red with an "Overdue" label. - Then a timeline entry "Evidence request logged" is created with a unique ID containing request date, due date, requestor details, and author.
Manage Requested Items Checklist
- Given an evidence request is active, When the coordinator adds checklist items from predefined options (Progress Note, HEP Adherence Summary, Imaging, Other) or custom labels, Then each item is created with fields: label, required (yes/no), status (Not Started default), and payer instructions (optional). - Then permitted status transitions are: Not Started -> In Progress -> Submitted; Submitted -> Accepted or Rejected; Rejected -> In Progress or Submitted. - Then the request-level roll-up displays percent of required items in Accepted status and counts by status. - Then the system prevents marking the request as complete until all required items are in Accepted status. - Then every status change records user, old/new status, and timestamp in the audit log.
Secure Upload with Validation and Virus Scanning
- Given a checklist item is selected, When a user uploads a file, Then only files with extensions and MIME types in [pdf, doc, docx, jpg, jpeg, png, tif, tiff, txt] and size <= 25 MB are accepted; otherwise the upload is blocked with an error message. - Then each file is scanned for malware before it becomes available; If the scan fails or detects a threat, the file is discarded, a descriptive error is shown, and an audit event is logged. - Then successful uploads are stored with item-level tags, SHA-256 checksum, uploader, and UTC timestamp, and a preview is generated for PDFs and images. - Then all transfers use TLS 1.2+ and files are encrypted at rest; access requires authorization to the case.
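The extension whitelist, size cap, and checksum can be sketched as below; MIME sniffing, TLS, and the malware scan are out of scope for this illustration, and `validate_upload` is a hypothetical helper name.

```python
import hashlib

ALLOWED_EXTENSIONS = {"pdf", "doc", "docx", "jpg", "jpeg", "png", "tif", "tiff", "txt"}
MAX_BYTES = 25 * 1024 * 1024  # 25 MB per file, per the criterion

def validate_upload(filename, data):
    """Reject disallowed types and oversized files; return the SHA-256 checksum
    stored alongside the attachment metadata on success."""
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"File type .{ext} is not accepted")
    if len(data) > MAX_BYTES:
        raise ValueError("File exceeds the 25 MB limit")
    return hashlib.sha256(data).hexdigest()
```

The returned checksum also supports the audit-log requirement that a downloaded report or attachment match the stored original exactly.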
Item-Level Versioning and Audit Trail
- Given an item has an existing attachment, When a new file is uploaded for the same item, Then the system creates a new version (vN), sets it as current, and preserves all prior versions read-only. - Then version metadata includes version number, filename, uploader, change note (optional), and UTC timestamp, and is visible in a version history panel. - Then a user with appropriate permissions can promote a prior version to current; the action is recorded in the audit log. - Then delete actions are soft deletes requiring a reason, are logged with user and timestamp, and do not remove audit history.
Attach MoveMate Adherence Summary and PT Notes
- Given a checklist contains "HEP Adherence Summary" and/or "Progress Note", When the coordinator selects "Attach from MoveMate", Then the system fetches the latest adherence summary (default last 30 days, adjustable) and the latest signed PT note and generates PDFs within 60 seconds. - Then the generated documents are auto-tagged to their respective items, marked as Submitted, and include source metadata (patient, date range, generated-at UTC) in the timeline. - If the fetch fails or returns no data, Then no item status is advanced, an actionable error is shown with a retry option, and the failure is logged.
Resubmission, Lifecycle Update, and Due Date Alerts
- Given all required checklist items are in Submitted or Accepted status, When the coordinator clicks "Submit to Payer", Then the system records a resubmission timestamp (UTC), creates an "Evidence resubmitted" timeline entry, and updates the authorization lifecycle state to "Under Review". - Then the SLA countdown stops and displays "Submitted on {timestamp}" and the due date badge changes to "Awaiting payer decision". - Then notifications are sent to assigned coordinators when: a request is logged (immediately), 72 hours before due date, 24 hours before due date, and at due time; if overdue without resubmission, the case is flagged Overdue and appears in the overdue queue and daily digest. - Then after resubmission, items are locked from edits except when a payer marks an item Rejected; reopening an item requires a reason and logs an audit event.
Coordinator Notifications & Escalations
"As a lead coordinator, I want timely, targeted notifications and escalations so that nothing falls through the cracks when SLAs are at risk."
Description

Deliver configurable notifications for key events (pre-breach, breach, info requested, approval/denial, pending reassignment) via in-app and email channels. Support per-user preferences, batching/digests, quiet hours, and assignment-based routing (owner, backup, watchers). Implement escalation rules to notify team leads or admins after thresholds are exceeded and reassign ownership when out-of-office is active. Provide message templates with merge fields and deep links to the authorization.

Acceptance Criteria
Pre-Breach SLA Alerts Routed by Assignment and Preferences
Given an authorization has an SLA dueAt timestamp and a pre-breach threshold of 24 hours is configured And the authorization has an owner, backup, and watchers assigned And each recipient has channel preferences configured (in-app and/or email) When the current time reaches dueAt minus 24 hours Then the system emits exactly one pre-breach notification event for this authorization and threshold within 5 minutes And delivers notifications only via the channels each recipient has enabled And the notification content is rendered from the Pre-Breach template with populated merge fields: {auth_id}, {patient_name}, {payer}, {due_at}, {sla_hours_remaining}, {owner_name}, and includes a deep link to the authorization detail And the event is recorded on the authorization timeline with an event ID and timestamp And duplicate notifications for the same threshold are suppressed for 7 days
SLA Breach Escalation to Team Lead/Admin
Given an authorization remains open past its SLA dueAt And an escalation policy is configured to notify the team lead at 1 hour past breach and admins at 24 hours past breach When the current time is 1 hour past dueAt Then the team lead receives a breach escalation notification via their enabled channels within 5 minutes, rendered with the Breach template including {auth_id}, {patient_name}, {payer}, {due_at}, {breach_age_hours}, and a deep link And the escalation is logged on the authorization timeline with an event ID and timestamp When the current time is 24 hours past dueAt and the authorization is still not closed Then the admin group receives an escalation notification via their enabled channels within 5 minutes And each escalation threshold triggers at most once per authorization
Evidence/Info Requested Notification with Deep Link
Given the payer status for an authorization changes to Info Requested and includes a request_id and due_date for evidence When the update is received by the system Then the owner, backup, and watchers receive notifications via their enabled channels within 2 minutes And the notification is rendered from the Info Requested template with populated merge fields: {auth_id}, {patient_name}, {payer}, {request_id}, {evidence_due_date}, {owner_name}, and includes a deep link directly to the document upload section for that authorization And the notification event is logged on the authorization timeline with an event ID and timestamp And repeated identical Info Requested events (same request_id) do not generate duplicate notifications
Approval/Denial Decision Notification
Given the payer decision for an authorization updates to Approved or Denied and includes a decision_id and decision_timestamp When the update is received by the system Then the owner, backup, and watchers receive notifications via their enabled channels within 2 minutes And the notification is rendered from the Decision template with populated merge fields: {auth_id}, {patient_name}, {payer}, {decision}, {decision_id}, {decision_timestamp}, {effective_dates_or_denial_reason}, and includes a deep link to the authorization summary And a timeline entry is recorded with event ID, timestamp, and decision And the same decision_id does not trigger duplicate notifications
Quiet Hours and Digest Delivery
Given a user has Quiet Hours configured (e.g., 20:00–07:00 local time) and a Daily Digest time of 07:15 And notifications targeting that user are generated during Quiet Hours When the current time is within the Quiet Hours window Then real-time notifications to that user are suppressed across all channels And the suppressed notifications are queued for digest When the Digest time is reached Then the user receives a single digest notification summarizing queued items grouped by authorization with counts, each item including a deep link And the digest is delivered only via the user’s enabled digest channels (email and/or in-app) And if no items are queued, no digest is sent And notifications generated outside Quiet Hours are delivered immediately per channel preferences
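The subtle part of the Quiet Hours rule is that the example window (20:00–07:00) spans midnight. A minimal sketch of the membership check, with default times taken from the example above:

```python
from datetime import time

def in_quiet_hours(local: time, start: time = time(20, 0), end: time = time(7, 0)) -> bool:
    """True if `local` falls inside the quiet-hours window, including
    windows that span midnight (e.g., 20:00-07:00)."""
    if start <= end:
        # Simple same-day window, e.g., 13:00-17:00.
        return start <= local < end
    # Window wraps past midnight: inside if after start OR before end.
    return local >= start or local < end
```

The end of the window is treated as exclusive, so a notification generated at exactly 07:00 is delivered in real time rather than queued, which matches the 07:15 digest time falling outside the window.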
Pending Reassignment Notification
Given an authorization’s status is set to Pending Reassignment with a specified reason and target team or pool When the status changes to Pending Reassignment Then the current owner, backup, watchers, and team lead are notified via their enabled channels within 5 minutes And the notification is rendered from the Reassignment template with populated merge fields: {auth_id}, {patient_name}, {payer}, {reason}, {target_team}, {reassign_deadline}, and includes a deep link to the reassignment screen And a timeline event is recorded for Pending Reassignment with event ID and timestamp When the authorization is reassigned to a new owner Then both the prior and new owners receive confirmation notifications and the timeline reflects the ownership change And duplicate notifications for the same Pending Reassignment event are suppressed
Out-of-Office Auto-Reassignment and Routing
Given the current owner of an authorization has an active Out-of-Office window with a delegate specified and auto-reassign enabled When any new authorization event occurs or an SLA threshold triggers during the owner’s Out-of-Office period Then ownership is reassigned to the delegate immediately and recorded on the authorization timeline with actor System and reason Out-of-Office And all notifications for the triggering event are routed to the new owner, backup, and watchers based on their channel preferences And the original owner does not receive real-time notifications during the Out-of-Office period and is added as a watcher if not already And ownership remains with the delegate after Out-of-Office ends until changed by a user
Shareable Timeline & Export
"As a billing specialist, I want a shareable authorization timeline so that I can provide payers and auditors a clear record without assembling it manually."
Description

Render a clean, chronological timeline of all authorization events, state changes, and communications with timestamps, actors, and sources. Allow export to PDF and creation of a secure, expiring share link with permission-scoped redaction of PHI as needed. Include a concise header with key identifiers, current status, SLA outcome, and an attachments index to speed payer status checks and internal audits. Ensure outputs are print-friendly and accessible.

Acceptance Criteria
Chronological Timeline Rendering
Given an authorization with events from submission to closure across multiple sources When the timeline is loaded Then events are ordered by event.timestamp ascending and secondarily by event.createdAt then event.id for ties And each event displays timestamp with timezone, actor name and role, source (system/user/integration), event type, and a concise description And the timeline shows total event count and Last Updated time And for an authorization with up to 500 events, initial render completes within 2 seconds on a typical clinic network (≥10 Mbps, ≤200 ms RTT) And loading the next 100 events via pagination or infinite scroll completes within 1 second
Header Summary Completeness & SLA Outcome
Given an authorization record is opened Then the header displays Authorization ID, Payer, Patient Identifier (per permission scope), Current Status, Submitted At, Decision/Closed At (if any), SLA threshold, and SLA outcome labeled Within SLA or Breached with breach timestamp And the SLA outcome is derived by comparing (ClosedAt or now) − SubmittedAt against the configured SLA (business days/hours) and matches the dashboard value And the identical header appears in PDF export and in the share view
Attachments Index with Deep Links
Given the authorization has attachments Then an attachments index lists each file with name, type, size, uploaded by, uploaded at (with timezone), and the associated timeline event And selecting an index row navigates to the corresponding event in the timeline And the PDF export includes the attachments index with page numbers where referenced events appear And attachments hidden by permission scope are omitted from the index in both export and share views
PDF Export: Content Parity, Accessibility, and Print-Friendliness
Given a user exports the authorization to PDF When the export completes Then the PDF contains the header, full timeline, and attachments index in the same order and with the same content as the on-screen view (subject to permission-scoped redaction) And the PDF is tagged and passes PDF/UA-1 and WCAG 2.1 AA reading-order checks, with selectable text and proper headings And page headers/footers include Authorization ID and page X of Y; margins ≥ 0.5 in; no event rows are split across pages; all timestamps include timezone And file size is ≤ 10 MB for up to 500 timeline events and 50 attachment rows (excluding embedded attachment binaries)
Secure Expiring Share Link
Given a coordinator generates a share link with a selected permission scope and expiration between 24 hours and 30 days When the link is created Then a non-guessable, signed URL bound to the authorization and scope is generated And accessing the link after expiration or after manual revocation returns HTTP 403 without revealing resource existence And all accesses and revocations are logged with timestamp, redacted IP, and user agent And the share view renders the header, timeline, and attachments index with redactions applied per scope
Permission-Scoped PHI Redaction
Given scope Payer Review is selected for a share link or export Then patient full name is reduced to initials, DOB to year, member/MRN masked to last 4, contact details (phone, email, address) removed, internal clinician notes removed, and attachments tagged Internal excluded Given scope Internal Audit is selected Then no PHI redaction is applied and all content is included except items flagged Do Not Share And a Preview Shared View is available before generation, showing the exact output And automated checks block generation if unredacted PHI patterns remain in a Payer Review output
Accessibility and Print-Friendliness of Share View
Given a recipient opens a share link on desktop or mobile Then the page supports keyboard-only navigation, visible focus indicators, ARIA landmarks, semantic headings, and 4.5:1 color contrast (WCAG 2.1 AA) And invoking the browser's Print action yields a clean printout with proper page breaks, intact rows, and the same header; interactive controls are not printed

Variance Lens

Presents adherence and form‑quality against diagnosis‑matched benchmarks with simple green/amber callouts. Adds plain‑language insights (e.g., “top‑quartile adherence by week 4”) so reviewers grasp progress at a glance—accelerating approvals and reducing clarification calls.

Requirements

Diagnosis-Matched Benchmarking Engine
"As a clinician reviewer, I want patient metrics compared to diagnosis-appropriate cohorts so that I can judge progress fairly and avoid misleading comparisons."
Description

Implements a rules- and data-driven engine that maps patient diagnosis, condition codes, protocol stage, and demographics to appropriate adherence and form-quality benchmarks. Supports clinician-configurable cohort filters (age range, severity, surgical vs. conservative care) and time-window normalization (e.g., week-by-week). Sources baseline benchmarks from curated datasets with clinic-level overrides. Exposes a versioned service to serve benchmarks to dashboards, alerts, and reports, ensuring consistent calculations across the product.

Acceptance Criteria
Diagnosis-to-Benchmark Mapping
Given a patient diagnosisCode='M75.12', protocolStage='Phase II', demographics={age:54, sex:'F'}, carePath='conservative' When the engine is requested with these parameters Then it returns adherenceBenchmarks and formQualityBenchmarks matched to the exact diagnosis and stage And response.metadata.appliedMapping.level='exact_diagnosis_stage' And response.metadata.appliedMapping.keys includes ['diagnosisCode','protocolStage'] And all benchmark metrics include fields ['metricId','unit','quantiles'] with quantiles containing p25, p50, p75
Cohort Filters and Clinic Overrides
Given filters {ageRange:[40,60], severity:'moderate', carePath:'conservative'} and clinicId='CLIN123' and a matching clinic-level override exists When the engine is requested with these parameters Then response.metadata.sourceDataset='clinic_override' And response.metadata.sourceDatasetId is not empty And response.metadata.cohort.filters matches the requested filters Given the same request but no clinic-level override exists When the engine is requested Then response.metadata.sourceDataset='baseline_curated' And response.metadata.sourceDatasetId is not empty
Time-Window Normalization by Protocol Week
Given protocolStart='2025-01-03T10:00:00-05:00' (tz='America/New_York') and weekIndex=4 When the engine computes week normalization Then response.metadata.window.weekIndex=4 And response.metadata.window.startUtc='2025-01-24T15:00:00Z' And response.metadata.window.endUtc='2025-01-31T15:00:00Z' And normalization uses fixed 7-day windows anchored to protocolStart (days 0–6 = week 1)
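The window arithmetic above can be reproduced directly: convert the protocol start to UTC, then offset by fixed 7-day blocks (days 0–6 are week 1, so week N starts (N − 1) × 7 days after the anchor). A minimal sketch; the function name is illustrative:

```python
from datetime import datetime, timedelta, timezone

def week_window_utc(protocol_start: datetime, week_index: int) -> tuple[datetime, datetime]:
    """Fixed 7-day windows anchored to protocolStart, expressed in UTC.
    Week 1 covers days 0-6, so week N begins (N - 1) * 7 days after the
    anchor instant."""
    start = protocol_start.astimezone(timezone.utc) + timedelta(days=7 * (week_index - 1))
    return start, start + timedelta(days=7)
```

Anchoring to the UTC instant (rather than local calendar days) is what makes 2025-01-03T10:00:00-05:00 plus 21 days land exactly on 2025-01-24T15:00:00Z, matching the expected values in the criterion.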
Versioned API Contract
Given request header 'Accept-Version: v1' When calling GET /benchmarks Then response.status=200 And response.body.apiVersion='v1' And response.body contains keys ['benchmarks','metadata'] And response body validates against JSON schema 'benchmark.v1.json' Given request header 'Accept-Version: v999' When calling GET /benchmarks Then response.status=400 And response.body.error.code='unsupported_version'
Quantile Outputs and Threshold Recommendations
Given cohort parameters for diagnosisCode='M75.12' and weekIndex=4 When requesting benchmarks for metrics ['adherence_rate','form_quality_score'] Then each metric includes quantiles.p25, quantiles.p50, quantiles.p75 as decimals within [0,1] And response.recommendations.thresholds.green >= quantiles.p75 And response.recommendations.thresholds.amber >= quantiles.p50 and < quantiles.p75 And response.metadata.sampleSize >= 30 And response.metadata.benchmarkDateRange includes startUtc and endUtc in ISO 8601
Sparse Cohort Fallback Ladder
Given requested filters that produce response.metadata.cohort.sampleSize=12 (<30) When the engine generates benchmarks Then response.metadata.fallback.applied=true And response.metadata.fallback.path=['diagnosis_stage_filters','diagnosis_stage','condition_category_stage','global_stage'] And the returned cohort after fallback has sampleSize >= 30 or response.metadata.fallback.finalLevel='global_stage' And response.metadata.warnings contains {code:'SPARSE_COHORT', minSamples:30} And quantiles are omitted when sampleSize < 15 (only central tendency is returned)
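The fallback ladder walks from the most specific cohort to the most general until the minimum sample size is met, terminating at global_stage regardless. A sketch under the assumption that cohort sizes per level are available up front; the input shape is hypothetical:

```python
MIN_SAMPLES = 30
FALLBACK_PATH = [
    "diagnosis_stage_filters",
    "diagnosis_stage",
    "condition_category_stage",
    "global_stage",
]

def resolve_cohort(sample_sizes: dict[str, int]) -> tuple[str, bool]:
    """Walk the ladder until a cohort meets MIN_SAMPLES; stop at
    global_stage regardless. Returns (level, fallback_applied).
    `sample_sizes` maps each level to its cohort size (assumed input)."""
    for i, level in enumerate(FALLBACK_PATH):
        if sample_sizes.get(level, 0) >= MIN_SAMPLES or level == FALLBACK_PATH[-1]:
            return level, i > 0
    return FALLBACK_PATH[-1], True  # unreachable; defensive default
```

fallback_applied maps to response.metadata.fallback.applied: it is true whenever the ladder advanced past the first level, even if the final level still falls short of 30 samples.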
Performance and Reliability SLOs
Given a production-like dataset (>=100k patients) and typical query parameters When load-tested at 50 RPS sustained for 10 minutes Then p95 latency <= 300 ms and p99 latency <= 800 ms And 5xx error rate <= 0.1% And availability over a rolling 30 days >= 99.9% And rate limits enforce 100 RPS per clientId with HTTP 429 and a valid Retry-After header when exceeded
Variance Scoring & Traffic-Light Callouts
"As a physical therapist, I want clear traffic-light indicators of how a patient compares to peers so that I can triage attention quickly."
Description

Calculates variance between actual adherence/form-quality metrics and selected benchmarks, applying statistically sound thresholds, minimum sample sizes, and smoothing rules. Renders clear green/amber (and red for severe underperformance) indicators with color-blind-safe patterns and tooltips showing numeric deltas and confidence ranges. Provides APIs and UI components to surface callouts across patient cards and the Variance Lens view with near real-time updates as new data arrives.

Acceptance Criteria
Variance Computation Against Benchmark
Given a patient with adherence (%) and form-quality (%) metrics and a selected diagnosis-matched benchmark And adherence sample_size_sessions >= 3 and form_quality_sample_size_reps >= 30 When the system computes variance Then it calculates delta_pp = actual - benchmark for each metric And it computes a 95% confidence interval for each delta and exposes ci_low and ci_high And it applies exponential smoothing (alpha = 0.3) to actuals before variance And it persists and exposes fields: metric, delta_pp, ci_low, ci_high, sample_size, smoothed_actual, benchmark_id, computed_at
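The smoothing and delta steps above can be sketched as follows. The criterion fixes alpha = 0.3 but not the recursion's exact form, so the standard exponential-smoothing update used here is an assumption; the confidence-interval computation is omitted as it depends on the chosen variance estimator.

```python
ALPHA = 0.3  # smoothing factor from the criteria

def smooth(values: list[float], alpha: float = ALPHA) -> float:
    """Exponential smoothing over session-ordered actuals; returns the
    final smoothed value fed into the variance delta. Assumes the
    standard update s = alpha * v + (1 - alpha) * s."""
    s = values[0]
    for v in values[1:]:
        s = alpha * v + (1 - alpha) * s
    return s

def delta_pp(smoothed_actual: float, benchmark: float) -> float:
    """delta_pp = actual - benchmark, in percentage points."""
    return smoothed_actual - benchmark
```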
Traffic-Light Classification and Tooltip Content
Given computed variance results with delta_pp, ci_low, and ci_high When classifying callouts Then set green if ci_low >= 0 And set amber if ci_low < 0 and ci_high > 0 and delta_pp >= -5 And set red if ci_high <= 0 and delta_pp <= -5 And render color-blind-safe patterns: green=solid, amber=diagonal stripe, red=dotted And ensure non-text contrast for indicators and pattern boundaries >= 3:1 against background And on hover or keyboard focus, show a tooltip with actual value, benchmark value, delta_pp (pp), 95% CI, sample_size, benchmark label, and last_updated timestamp
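The classification rules above translate directly into a decision function. Note the rules leave some combinations unspecified (e.g., a CI straddling zero with delta_pp below −5); defaulting those to amber is an assumption of this sketch, not part of the criteria:

```python
def classify(delta_pp: float, ci_low: float, ci_high: float) -> str:
    """Map a delta (percentage points) and its 95% CI to a traffic light
    per the rules above. Unspecified combinations default to amber --
    an assumption, since the criteria do not cover them."""
    if ci_low >= 0:
        return "green"
    if ci_low < 0 and ci_high > 0 and delta_pp >= -5:
        return "amber"
    if ci_high <= 0 and delta_pp <= -5:
        return "red"
    return "amber"  # fallback for combinations the rules leave open
```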
Variance Callouts API Availability and Freshness
Given a valid auth token and patient_id When the client requests the variance callouts API for metrics adherence and form_quality Then the API responds 200 with JSON fields: metric, traffic_light, delta_pp, ci_low, ci_high, sample_size, benchmark_id, benchmark_label, classification_reason, last_updated And responds 401 for missing or invalid auth and 404 for unknown patient_id And 95th percentile latency <= 300 ms under 50 requests per second And the API reflects new ingested session data within 30 seconds of arrival
UI Consistency Across Patient Card and Variance Lens
Given a patient with computed variance callouts available via the API When viewing the patient card and the Variance Lens view Then both surfaces display identical traffic_light states for each metric And displayed numeric delta_pp and CI values match within 0.1 percentage points And both surfaces provide the same tooltip content including benchmark_label and last_updated And indicators are reachable via keyboard (Tab) and tooltips open on Enter/Space and close on Escape
Benchmark Selection, Mapping, and Fallback
Given a diagnosis with an associated benchmark mapping When a reviewer changes the selected benchmark for a patient Then the system recomputes variance and updates classifications in the UI within 2 seconds And the API and tooltips reflect the new benchmark_id and benchmark_label And if no diagnosis-specific benchmark exists, the system uses the clinic default benchmark and labels it as default And the selection persists for the patient context and is included in the audit log with user, timestamp, and prior benchmark
Hysteresis and Flicker Control for Callouts
Given variance values that oscillate near a classification boundary When successive recomputations occur Then the traffic_light state changes only if the new classification persists for two consecutive recomputations at least 2 minutes apart And severe underperformance (red) updates immediately without hysteresis And each state change is logged with from_state, to_state, reason, and timestamp
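The flicker-control rule above amounts to a small state machine: a non-red transition must be observed twice, at least two minutes apart, before it takes effect, while red applies immediately. A sketch of that idea under assumed names; a repeated sighting inside the two-minute dwell restarts the clock here, which is one reasonable reading of "two consecutive recomputations at least 2 minutes apart":

```python
from datetime import datetime, timedelta

DWELL = timedelta(minutes=2)

class CalloutState:
    """Two-sighting hysteresis for traffic-light callouts; red bypasses it."""

    def __init__(self, state: str = "green"):
        self.state = state
        self._pending = None  # (candidate_state, first_seen_at) or None

    def update(self, new: str, at: datetime) -> str:
        if new == "red":                      # severe: apply immediately
            self.state, self._pending = "red", None
        elif new == self.state:
            self._pending = None              # agreement clears any candidate
        elif self._pending and self._pending[0] == new and at - self._pending[1] >= DWELL:
            self.state, self._pending = new, None  # persisted >= 2 min apart
        else:
            self._pending = (new, at)         # record (or restart) candidate
        return self.state
```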
Insufficient Data Handling and Messaging
Given adherence sample_size_sessions < 3 or form_quality_sample_size_reps < 30 When computing variance and classification Then the system returns traffic_light = grey with a neutral hashed pattern And the tooltip and API include missing_counts (sessions and/or reps) and a message explaining the minimum required And the UI auto-updates to a classified state within 30 seconds after thresholds are met without requiring a page refresh
Plain-Language Insight Generation
"As an insurance reviewer, I want plain-language summaries of adherence versus benchmarks so that I can make faster approval decisions with fewer clarification calls."
Description

Generates concise, human-readable insights from variance and trend data (e.g., “Top-quartile adherence by week 4; form quality improving 12% week-over-week”). Uses templated natural-language generation with clinical guardrails, localization support, and jargon avoidance. Integrates insights into the Variance Lens panel, notifications, and reports with links to supporting charts for verification.

Acceptance Criteria
Insight Accuracy vs Diagnosis Benchmarks
Given the patient’s diagnosis is mapped to a benchmark cohort and weekly adherence/form-quality data exist for the selected window When the system generates plain-language insights Then any quartile claim (e.g., "top quartile") corresponds to percentile >= 75 for that cohort and window And any "bottom quartile" claim corresponds to percentile < 25 And any change metric (e.g., "improving 12% week-over-week") matches the computed trend over the last 4 weeks, rounded to nearest whole percent, absolute error <= 1% And all numeric values match the values displayed in the supporting chart for identical filters and timeframe And insights refresh within 5 seconds of underlying data updates
Plain-Language and Jargon Avoidance
Given the insight text is rendered in the Variance Lens panel and reports When evaluated for readability and terminology Then Flesch–Kincaid Grade Level <= 8.0 And the text contains no clinical jargon or prohibited terms (e.g., ICD codes, "contraindicated", "proprioception"), no unexplained acronyms (except %), and no internal codes And sentence length <= 160 characters in panel; <= 200 characters in reports And no unresolved template tokens remain (e.g., {metric}, {value}) And punctuation and capitalization follow the approved style guide
Localization and Locale Formatting
Given the user locale is set to en-US or es-ES When the system generates an insight Then the text appears in the selected language using the approved translation template And numbers, dates, and percent signs are formatted per locale (en-US: 12.5%; 10/07/2025; es-ES: 12,5 %; 07/10/2025) And rounding behavior is consistent across locales (nearest whole percent unless otherwise specified) And if a translation key is missing, the system falls back to English and logs a warn-level telemetry event with the missing key ID And localized insights are semantically equivalent to the source per localization QA checklist
Variance Lens Panel Integration and Chart Links
Given a reviewer opens a patient’s Variance Lens panel When insights are displayed Then the latest 1–2 insights render above the benchmark callouts and align with each metric’s green/amber state And each insight includes a "View chart" link that opens the correct chart prefiltered by patient, diagnosis, metric, and referenced time window And the chart link opens with a single click and the chart loads within 2 seconds at p95 And if the referenced metric is unavailable, the link is disabled and an explanatory tooltip is shown And link target correctness can be validated via URL/query parameters matching the insight’s tokens
Notifications and Reports Embedding
Given an insight crosses a defined notification threshold When a push/email notification is sent Then the notification body contains the same insight text truncated at a word boundary to 120 characters with an ellipsis And the notification includes a deep link to the patient’s Variance Lens panel And in PDF/HTML reports, up to 3 most recent insights appear under the Variance Lens section with a footnote link to the supporting chart And embedded insights in notifications/reports match the panel text byte-for-byte aside from permitted truncation
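Word-boundary truncation to 120 characters can be sketched as below. Whether the ellipsis counts toward the 120-character limit is not stated in the criterion; this sketch assumes it does:

```python
def truncate_words(text: str, limit: int = 120) -> str:
    """Truncate at a word boundary so the result, including the ellipsis,
    fits within `limit` characters. Short text passes through unchanged."""
    if len(text) <= limit:
        return text
    cut = text[: limit - 1]          # reserve one character for the ellipsis
    if " " in cut:
        cut = cut[: cut.rfind(" ")]  # back up to the last whole word
    return cut.rstrip() + "\u2026"
```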
Data Sufficiency and Confidence Fallbacks
Given fewer than 2 complete weeks or fewer than 3 sessions of data are available for a metric When the system attempts to generate an insight Then it outputs a localized "Not enough data to assess trend" message and makes no quartile claims And if median CV confidence for form-quality < 0.70 in the window, form-quality insights are suppressed and a neutral message is shown And a machine-readable reason code (e.g., INSUFFICIENT_DATA, LOW_CONFIDENCE) is returned in the API And suppressed insights do not trigger notifications
Clinical Guardrails and Safe Language
Given insights are presented to clinicians and patients When content validation runs Then insights describe adherence/form-quality relative to benchmarks without making diagnoses, treatment recommendations, or outcome claims And comparative phrasing is cohort-based (e.g., "compared to diagnosis-matched peers") and includes a benchmark source label And a disclaimer "For informational purposes; not medical advice" appears in reports and is accessible via tooltip in the panel And risk-heavy terms (e.g., "red flag") are avoided in favor of neutral terms (e.g., "below benchmark") And insights contain no PHI (names, MRNs, free-text notes) concatenated into the message
Reviewer Snapshot Widget & Drill-down
"As a clinic administrator, I want a quick snapshot with drill-down so that I can monitor many patients and investigate outliers efficiently."
Description

Adds a compact Variance Lens widget to patient and cohort dashboards showing key benchmarks, traffic-light status, and top insights, with one-click drill-down to detailed variance breakdowns by week and metric. Supports responsive layouts for mobile and web and enforces role-based visibility (clinician, payer reviewer, patient-limited view).

Acceptance Criteria
Patient Dashboard Widget: Benchmarks, Traffic-Light Status, Top Insights
Given a clinician views a patient’s dashboard with at least 2 completed sessions and a mapped diagnosis When the Variance Lens widget renders Then it displays adherence (%) and form-quality score with diagnosis-matched benchmarks for the same time window And shows a single traffic-light status for each metric computed from configured thresholds (default: Green ≥ target; Amber within 10% below target; Red >10% below target) And shows up to 2 plain-language insights derived from the latest 4 weeks (e.g., “Top‑quartile adherence by week 4”), each ≤140 characters And shows a Last Updated timestamp in the user’s locale/time zone
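The default widget thresholds above reduce to a three-way comparison against the target. "Within 10% below target" is read here as relative to the target (i.e., at or above 90% of it), which is an assumption; an absolute 10-percentage-point reading would differ:

```python
def metric_status(actual: float, target: float) -> str:
    """Default widget thresholds: green at/above target, amber within
    10% below target (read as >= 90% of target -- an assumption),
    red more than 10% below."""
    if actual >= target:
        return "green"
    if actual >= target * 0.9:
        return "amber"
    return "red"
```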
One-Click Drill-Down to Weekly Variance by Metric
Given a reviewer clicks the widget’s View details control or a metric chip When the drill-down opens Then it loads within ≤2,000 ms at P95 on a mid-tier mobile over 4G (cold cache) And shows weekly variance for the selected metric for the last up to 12 weeks (or available history), with benchmark line, entity line, and per-week status markers And includes filters for metric and date range pre-populated from the widget context And provides a Back/Close control that returns to the originating dashboard without losing scroll position And the URL/deeplink encodes entity, metric, and week range for shareable repro
Role-Based Visibility and Redaction
Given a signed-in user with role Clinician, Payer Reviewer, or Patient When viewing the widget or drill-down Then Clinician sees patient-level details, cohort comparators, and all insights And Payer Reviewer sees patient-level metrics with cohort comparator, but no PHI beyond FirstName + LastInitial and MRN; DOB and contact info are hidden And Patient sees only their own metrics and benchmarks with non-insurance terminology; payer-only insights and cohort comparators are hidden And unauthorized roles or cross-tenant access receives 403 and no sensitive fields are sent over the wire And all access is enforced server-side and logged with userId, role, entityId, and timestamp
Responsive Layout for Mobile and Web
Given the app runs on varying viewports and orientations When rendering the widget Then ≥1024px width: two-column layout with both metrics visible side-by-side and insights to the right And 600–1023px: stacked metrics with insights below And <600px: single-column with collapsible sections and horizontal scroll disabled inside charts And all interactive targets are ≥44×44 dp with 8 dp spacing; text respects system font scaling up to 200% without truncation And orientation changes reflow content within 300 ms without visual glitches
Cohort Dashboard Aggregation and Filters
Given a reviewer opens a cohort dashboard with active filters (e.g., diagnosis, date range) When the Variance Lens widget renders Then adherence and form-quality are aggregated as weekly medians across the filtered cohort and compared to diagnosis-matched external benchmarks And the cohort N and active filter chips are displayed And changing filters updates the widget and associated drill-down within 500 ms after data returns And exported drill-down data respects the same filters and shows units, week labels, and benchmark source
Data Freshness, Empty States, and Error Handling
Given new exercise data is ingested from a patient device When backend processing completes Then updated metrics and statuses appear in the widget within ≤60 minutes, with an accurate Last Updated timestamp And if a metric lacks sufficient data, the widget shows an informative empty state (e.g., “No data yet for weeks 1–4”) and no traffic-light color is displayed And transient errors show a retry action and preserve prior values; persistent errors surface a generic message without sensitive details and are logged to monitoring with correlation ID
Accessibility and Plain-Language Insights Quality
Given users relying on assistive technologies When interacting with the widget and drill-down Then status colors include text labels and icons; color contrast meets WCAG 2.1 AA (≥4.5:1 for text, ≥3:1 for large text/icons) And all controls are keyboard operable with visible focus and logical tab order; charts expose ARIA labels announcing metric, week, value, benchmark, and status And insights are written at ≤8th-grade reading level, ≤140 characters each, avoiding jargon; locale-aware number/date formats are applied (en-US, en-GB) And screen reader announcements are free of redundancy and occur on state changes (e.g., filter applied, drill-down opened)
Exportable Variance Report
"As a physical therapist, I want to export a concise variance report so that I can attach it to authorization requests and progress notes without manual formatting."
Description

Provides a shareable, payer-ready report (PDF and secure link) summarizing adherence, form quality, benchmark definitions, and plain-language insights with timestamps and clinician sign-off. Implements access controls, watermarking, and export to FHIR DocumentReference for EHR integration, with clinic branding and configurable sections.

Acceptance Criteria
PDF Export with Branding and Configurable Sections
Given a clinician selects clinic branding and toggles report sections When the clinician exports the Variance Report to PDF Then the PDF includes only the selected sections in the specified order And the clinic logo appears in the header at a minimum width of 120px without pixelation And primary accent color is applied to headings and callouts And the document generates within 8 seconds at the 95th percentile for a report up to 20 pages And the resulting file size is ≤ 5 MB for a report up to 20 pages at 300 DPI And if no logo is configured, the app falls back to a neutral header without broken image artifacts
Required Content and Plain-Language Insights
Given a patient with at least 4 weeks of tracked exercises and form analysis When the Variance Report is generated Then it contains: adherence summary (sessions completed, completion rate by week), form quality (correct rep %, top 3 error types), benchmark definitions (cohort, quartile thresholds, source date range), plain-language insights (at least 3 statements such as “top‑quartile adherence by week 4”), timestamps for data window and last sync And green/amber callouts reflect adherence and form-quality variance versus diagnosis-matched benchmarks And all numeric values match the Variance Lens dashboard values within ±0.1% And sections include footnotes linking insights to benchmark definitions
Clinician Review and Sign‑Off Workflow
Given a clinician with sign privileges previews the report When the clinician applies e-signature and confirms sign-off Then the report header shows clinician name, credentials, NPI, and sign-off timestamp with timezone And the watermark changes from “DRAFT” to “FINAL” upon sign-off And the signed report becomes immutable (content hash changes with any edit and version increments) And any subsequent change requires a new version and new sign-off And the audit log records user id, action, timestamp, report id, previous version id
Secure Shareable Link and Access Controls
Given a finalized report When a shareable link is created Then a unique tokenized HTTPS URL is generated with default expiry of 14 days, configurable between 1 and 90 days And an optional password can be set; viewing requires the correct password if set And after 5 consecutive failed password attempts the link is locked for 15 minutes And revoking the link immediately renders it inaccessible (HTTP 403) and invalidates the token And after expiry the link returns HTTP 410 And all access events (timestamp, IP, user agent, success/failure) are logged; no passwords are stored in logs
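A "non-guessable, signed URL bound to the authorization and scope" typically means an HMAC over the bound fields plus a random nonce, so the token cannot be forged or transplanted onto another resource. A sketch under assumed field names and delimiter; the product's actual URL scheme is not specified here:

```python
import hashlib
import hmac
import secrets

def make_share_token(auth_id: str, scope: str, expires_at: str, key: bytes) -> str:
    """Bind authorization, scope, and expiry into an HMAC-signed token.
    The pipe-delimited payload format is an illustrative assumption."""
    nonce = secrets.token_urlsafe(16)            # non-guessable component
    payload = f"{auth_id}|{scope}|{expires_at}|{nonce}"
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_share_token(token: str, key: bytes) -> bool:
    """Recompute the signature over the payload; any tampering with the
    bound fields or signature fails verification."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Expiry and revocation checks (403 after revocation, 410 after expiry) would be enforced server-side on top of this; the signature alone only proves the link was issued and unmodified.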
Watermarking Behavior for Draft and Final States
Given a report in draft status When the report is rendered Then each page displays a diagonal watermark “DRAFT – Not for distribution” at 10–15% opacity And after sign-off, each page displays a footer watermark “CONFIDENTIAL – Payer Review” with patient initials and report id And watermarking is present on all pages including appendices and cannot be disabled for final reports And attempts to remove watermarks produce a different content hash from the signed version
FHIR DocumentReference Export to EHR
Given an EHR destination is configured with valid credentials When the clinician exports the finalized report to the EHR Then a FHIR R4 DocumentReference is created with status=current, category and type coded per configuration, subject=Patient/{id}, author=Practitioner/{id}, date=sign-off timestamp And content.attachment includes the PDF with contentType=application/pdf, size in bytes, SHA-256 hash, and title And the server responds HTTP 201 with the created resource id And the resource passes FHIR R4 validation with zero errors (and US Core DocumentReference validation when that profile is enabled) And transient 5xx responses are retried up to 3 times with exponential backoff and are surfaced as failures if still unsuccessful
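The DocumentReference payload described above can be assembled as a minimal FHIR R4 resource. One caveat worth flagging: the criterion asks for a SHA-256 hash, while FHIR R4 formally defines Attachment.hash as the base64-encoded SHA-1 of the data, so where the SHA-256 digest lives is an assumption of this sketch:

```python
import base64
import hashlib
from datetime import datetime, timezone

def build_document_reference(pdf_bytes: bytes, patient_id: str,
                             practitioner_id: str, signed_at: datetime,
                             title: str) -> dict:
    """Minimal FHIR R4 DocumentReference per the criteria above.
    Category/type codings are omitted (configuration-dependent), and
    placing a SHA-256 digest in Attachment.hash is an assumption --
    R4 defines that element as base64-encoded SHA-1."""
    return {
        "resourceType": "DocumentReference",
        "status": "current",
        "subject": {"reference": f"Patient/{patient_id}"},
        "author": [{"reference": f"Practitioner/{practitioner_id}"}],
        "date": signed_at.astimezone(timezone.utc).isoformat(),
        "content": [{
            "attachment": {
                "contentType": "application/pdf",
                "size": len(pdf_bytes),
                "hash": base64.b64encode(hashlib.sha256(pdf_bytes).digest()).decode(),
                "title": title,
            }
        }],
    }
```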
Data Privacy and Redaction Controls
Given a clinician enables redaction options prior to export When the report is generated Then selected PII fields (e.g., phone number, email, DOB) are omitted from the PDF body and metadata And a “Fields Redacted” appendix lists each omitted field and the reason And no redacted values are present in PDF metadata, embedded files, or hidden layers (verified by automated scan) And the footer notes “Minimum necessary disclosure” with a configurable reason code And unit tests verify no PII leakage when redaction is enabled
Benchmark Provenance & Audit Trail
"As a compliance officer, I want a transparent audit trail of benchmarks and calculations so that external reviewers can trust the results and we meet regulatory standards."
Description

Tracks and displays benchmark sources, cohort filters, version identifiers, and calculation parameters used for each variance result. Logs changes with who/when, stores snapshots for reproducibility, and exposes an audit view in the report for payers and QA. Notifies admins when benchmark versions update and offers selective re-computation tools.

Acceptance Criteria
Provenance Details Visible in Variance Lens Report
Given a variance result is displayed in the Variance Lens report When the user expands the Provenance panel Then the panel shows: benchmark source name, source type (publication/registry/internal), source date, cohort filters (diagnosis code(s), age range, severity, device/platform), benchmark version ID, calculation parameters (window length, outlier rule, smoothing), and computation timestamp for that result And each field is non-empty and matches the stored metadata for the result ID And the panel displays a unique Provenance ID that can be referenced in communications
Immutable Change Log and Snapshots for Variance Inputs
Given any change to benchmarks, cohort filters, or calculation parameters is saved When the change is committed Then an append-only audit entry is recorded with: actor user ID, timestamp (UTC), affected entity IDs, fields changed with before/after values, reason note (optional), and correlation ID And a computation snapshot is stored capturing inputs, benchmark version, parameters, and resulting variance values, sufficient for deterministic recomputation And the audit entry is tamper-evident via a hash chain and cannot be edited or deleted via the UI
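The tamper-evident hash chain above can be sketched in a few lines: each audit entry's hash incorporates the previous entry's hash, so editing any past entry invalidates every subsequent hash. Function and field names here are illustrative assumptions.

```python
import hashlib
import json

def entry_hash(prev_hash: str, entry: dict) -> str:
    """Chain each audit entry to its predecessor; canonical JSON keeps
    the hash stable regardless of dict key order."""
    payload = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def verify_chain(entries: list, hashes: list, genesis: str = "") -> bool:
    """Recompute the chain from the genesis value; any tampered entry
    (or reordered/removed link) causes a mismatch."""
    prev = genesis
    for entry, h in zip(entries, hashes):
        if entry_hash(prev, entry) != h:
            return False
        prev = h
    return True
```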
Role-Gated Audit View Embedded in Report
Given a payer reviewer or QA user opens a patient's Variance Lens report When they select the Audit tab Then they can view provenance details and the full change log timeline for the results displayed And unauthorized roles cannot access the Audit tab and receive a 403 error And the audit view can be exported as PDF and JSON, each including the Provenance ID, hash, and generation timestamp
Admin Notifications on Benchmark Version Updates
Given a new benchmark version is published or imported into the system When the ingestion completes Then all org admins receive an in-app notification within 1 minute and an email within 5 minutes And the notification includes: benchmark name, old/new version IDs, summary of changes (added/removed cohorts, parameter deltas), affected diagnoses, estimated impacted reports count, and links to changelog and recomputation tool And an admin can acknowledge or mute the notification; the system records acknowledge timestamp and actor
Selective Re-computation with Safe History Preservation
Given an admin opens the Re-computation tool for a benchmark version change When they filter by diagnosis, cohort, clinic, date range, and select target reports Then the tool displays a dry-run impact estimate (count of affected reports, expected average variance delta) before execution And upon confirm, the system recomputes results with the new benchmark version, writes new results, and preserves prior snapshots unchanged and cross-referenced And the job shows progress, handles retries up to 3 times on transient failures, and produces a completion summary with counts of updated, skipped, and failed items
Automated Reproducibility Checks
Given stored computation snapshots exist When the nightly job samples at least 1% of snapshots (minimum 50) and re-runs computations with the recorded versions and parameters Then 99.9% of re-computations match the stored result within an absolute difference <= 0.001 or a relative difference <= 0.1% And any breach of the threshold raises an alert to admins and creates a blocker bug with attached sample cases
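The match tolerance above reduces to a small predicate. One detail the criteria leave open is the reference value for the relative difference; the sketch below assumes it is the stored result.

```python
def recomputation_matches(stored: float, recomputed: float,
                          abs_tol: float = 0.001, rel_tol: float = 0.001) -> bool:
    """A re-run matches if |stored - recomputed| <= 0.001, or the
    difference relative to the stored value is <= 0.1% (rel_tol = 0.001).
    Using the stored value as the denominator is an assumption."""
    diff = abs(stored - recomputed)
    if diff <= abs_tol:
        return True
    return abs(stored) > 0 and diff / abs(stored) <= rel_tol
```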

Bundle Composer

Build episode-based packages in minutes. Define duration, visit caps, telehealth check‑ins, remote‑monitoring days, and add‑ons, then auto‑link each bundle to Stripe products and prices. Prebuilt templates for common protocols keep setup simple while guardrails (refunds, grace periods, late‑cancel rules) prevent billing mismatches. Clinics standardize offerings, reduce errors, and launch bundles without chasing spreadsheets.

Requirements

Episode Bundle Configuration Engine
"As a clinic administrator, I want to configure episode bundles with duration, visit caps, telehealth check-ins, monitoring days, and add-ons so that patient schedules, nudges, and billing align without manual setup."
Description

Provide a guided composer to define episode duration (days/weeks), visit caps (in‑person and video), telehealth check‑in cadence, remote‑monitoring days (data collection windows), and optional add‑ons (e.g., extra visit packs, DME). Enforce field dependencies and constraints, offer contextual help and defaults from selected protocol templates, and render a live preview of the patient‑facing schedule. Persist a structured configuration schema for downstream scheduling, nudges, and analytics. Integrate with MoveMate’s scheduling, telehealth, and remote‑monitoring modules so selected options automatically drive appointment templates, check‑in reminders, and data capture windows.

Acceptance Criteria
Configure Episode Duration
Given the composer is open, When the user selects a start date and enters a duration value N and unit U (days or weeks), Then the system computes and displays end_date = start_date + (N * (U == weeks ? 7 : 1)) - 1 day. Given N < 1 or total_days > 365, When the user attempts to save, Then the save is blocked and an inline error states: "Duration must be 1–365 days (1–52 weeks)." Given the user changes the duration unit (days ↔ weeks), When the unit is changed, Then the numeric value remains as entered and the end date is recalculated accordingly, and the preview reflects the new end date.
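The end-date formula above translates directly to date arithmetic. A minimal sketch with the stated validation bounds; the function name is illustrative.

```python
from datetime import date, timedelta

def compute_end_date(start: date, n: int, unit: str) -> date:
    """end_date = start_date + (N * (U == weeks ? 7 : 1)) - 1 day,
    i.e. the episode is inclusive of its start date."""
    if unit not in ("days", "weeks"):
        raise ValueError("unit must be 'days' or 'weeks'")
    total_days = n * 7 if unit == "weeks" else n
    if not 1 <= total_days <= 365:
        raise ValueError("Duration must be 1-365 days (1-52 weeks)")
    return start + timedelta(days=total_days - 1)
```

Switching the unit from days to weeks keeps `n` as entered and simply recomputes the end date, matching the third scenario above.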
Set Visit Caps and Enforce Scheduling Limits
Given in_person_cap = X and video_cap = Y where X,Y are integers ≥ 0, When the user saves the bundle, Then the configuration persists both caps. Given a scheduler attempts to add a visit of a type whose scheduled count in the episode would exceed its cap, When the add action is initiated, Then the add is blocked and a message states: "Cap exceeded for [type]." Given cap == 0 for a visit type, When appointment templates are generated, Then templates for that type are not created for this episode and the preview shows 0 allowed for that type. Given caps are updated in the composer, When viewing the live preview, Then visit totals and remaining counts update immediately to reflect the new caps.
Configure Telehealth Check-ins and Remote Monitoring Windows
Given the user sets a telehealth check-in cadence (Weekly on specific weekday[s] or Every N days), When the configuration is saved, Then check-in occurrences are scheduled only within [start_date, end_date] and none occur beyond end_date. Given the user selects remote-monitoring days (e.g., Mon, Wed, Fri), When the configuration is saved, Then data capture windows are created on each matching date within the episode; if no days are selected, the remote-monitoring module is disabled for this episode. Given the cadence or selected days are changed, When the preview renders, Then the number and positions of check-ins and monitoring windows update to match the current settings.
Manage Optional Add-ons (Extra Visit Packs, DME)
Given the user enables an Extra Visit Pack add-on with quantity Q for visit_type T, When the configuration is saved, Then the add_ons array persists an item {type: "extra_visit_pack", visit_type: T, quantity: Q}. Given an Extra Visit Pack add-on exists and is later purchased, When the scheduling module applies add-ons, Then the applicable visit cap increases by Q without altering the original episode caps. Given the user selects a DME add-on with a valid SKU, When the configuration is saved, Then the add-on persists with {type: "DME", sku: SKU} and does not change visit caps. Given an add-on is incompatible with the bundle delivery mode (e.g., clinic-only add-on with in_person_cap = 0), When attempting to save, Then the save is blocked and an error identifies the incompatible add-on.
Live Patient-Facing Schedule Preview Updates
Given the user changes any of: start date, duration, visit caps, telehealth cadence, remote-monitoring days, or add-ons, When the user blurs the field or stops typing, Then the preview recalculates and updates within 300ms. Then the preview displays at minimum: start and end dates, totals for in-person and video visit caps, scheduled telehealth check-ins, and the count of remote-monitoring windows per week. Given one or more inputs are invalid, When rendering the preview, Then affected sections are clearly flagged and excluded from totals until the errors are resolved.
Apply Protocol Template Defaults and Contextual Help
Given the user selects a protocol template, When applied, Then all fields defined by the template populate with default values and show a template indicator. Given the user edits a templated field, When a value differs from the template default, Then the field is marked as Overridden and a Reset to template action is available. Given the user opens a field's help, When viewing the tooltip/panel, Then contextual guidance and allowed ranges/constraints are shown for that field. Given the user clicks Reset to template, When confirmed, Then overridden fields revert to the template values.
Persist Configuration Schema and Trigger Integrations
Given the user clicks Save on a valid configuration, When the save succeeds, Then a versioned configuration object is persisted with required keys: bundle_id, version, start_date, end_date, duration_value, duration_unit, visit_caps {in_person, video}, telehealth_checkins {pattern, parameters}, remote_monitoring {days}, add_ons[], template_id (optional), created_by, created_at. Then downstream setup is triggered: appointment templates are created per visit caps; telehealth reminders are scheduled per check-in cadence; remote-monitoring windows are created per selected days. If any downstream setup fails, When saving, Then the transaction is rolled back (no partial persistence) and the user sees an error naming the failing module. Given a saved configuration, When fetched via API, Then it matches exactly what was saved and linked downstream resources exist for the episode.
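The required keys listed above suggest a configuration shape like the following dataclass sketch. Types and nesting are assumptions drawn from the criteria, not a published schema.

```python
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class VisitCaps:
    in_person: int
    video: int

@dataclass
class BundleConfig:
    """Illustrative versioned configuration object per the criteria above."""
    bundle_id: str
    version: int
    start_date: str            # ISO 8601 date
    end_date: str
    duration_value: int
    duration_unit: str         # "days" | "weeks"
    visit_caps: VisitCaps
    telehealth_checkins: dict  # e.g. {"pattern": "weekly", "parameters": {...}}
    remote_monitoring: dict    # e.g. {"days": ["Mon", "Wed", "Fri"]}
    add_ons: list = field(default_factory=list)
    template_id: Optional[str] = None
    created_by: str = ""
    created_at: str = ""
```

Persisting `asdict(config)` gives the structured record that scheduling, nudges, and analytics consume downstream; the all-or-nothing rollback requirement means the save and the downstream setup calls share one transaction boundary.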
Stripe Auto-Linking & Price Sync
"As a clinic owner, I want bundles to automatically create and sync Stripe products and prices so that billing stays accurate and hands-off across checkout and renewals."
Description

On publish, automatically create or update corresponding Stripe Products and Prices for each bundle and add‑on, including billing interval, currency, tax code, trial/grace periods, proration behavior, and cancellation policy metadata. Maintain a mapping table and webhook-driven sync so in‑app price changes propagate to Stripe and external changes are detected and flagged. Support multiple price points per bundle (self‑pay, insurance, promotional) and region‑specific tax settings. Ensure idempotency, environment separation (test/live), and robust error handling with rollback. Expose read‑only price references to checkout and the patient portal to guarantee consistent billing across episodes.

Acceptance Criteria
Publish Creates/Updates Stripe Products & Prices
Given a bundle or add-on with configured billing interval, currency, tax code, trial/grace days, proration behavior, and cancellation policy metadata When a user clicks Publish in Bundle Composer Then a Stripe Product exists or is updated with correct name, description, and metadata (bundle_id/addon_id, environment, cancellation_policy) And Stripe Price objects are created or updated with correct unit_amount, currency, billing_interval, tax_behavior/tax_code, trial/grace days, proration setting, and linked to the Product And a mapping table row exists per Price with Stripe product_id and price_id, price_type (self-pay/insurance/promotional), region, currency, and environment And repeated Publish with unchanged data results in no duplicate Stripe objects and no duplicate mapping rows (idempotent) And an audit log entry is recorded with operation outcome and Stripe request identifiers
Multiple Price Points with Region Tax
Given a bundle defines self-pay, insurance, and promotional price points across US and EU regions When the bundle is published Then a distinct Stripe Price exists per price_type x region x currency with correct unit_amount and currency (e.g., USD, EUR) And tax behavior and tax code reflect region-specific rules (e.g., inclusive in EU, exclusive in US) and are stored on the Price metadata And the mapping table links each price_type and region to its Stripe price_id and product_id And attempting to publish duplicate price_type+region+currency combinations is blocked with a validation error and no Stripe objects are created
In-App Price Change Syncs to Stripe
Given a live bundle with existing Stripe Product and Prices When a user edits price amount, billing interval, tax settings, trial/grace days, or proration behavior and saves Then a new Stripe Price is created reflecting the new settings (do not mutate existing Prices) And prior Stripe Prices for that price_type+region are marked inactive And Product metadata is updated when relevant fields change And the mapping table is versioned to the new price_id and the old mapping is archived And the change propagates to Stripe within 30 seconds or the operation is reported as failed And transient Stripe errors are retried up to 3 times with exponential backoff and final outcome is logged
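The create-new-vs-update-in-place rule above follows from Stripe's model: a Price's amount and recurring interval are immutable after creation, so amount/interval changes require a new Price and deactivation of the old one, while metadata can be edited in place. A hedged sketch of that decision as a pure function (field names are illustrative, and no Stripe calls are made here):

```python
def plan_price_sync(old: dict, new: dict) -> list:
    """Decide sync actions for one price_type+region mapping.

    Stripe Prices cannot change unit_amount, currency, or recurring
    interval after creation, hence the new-Price path for those fields.
    """
    immutable = ("unit_amount", "currency", "interval")
    if any(old.get(k) != new.get(k) for k in immutable):
        return ["create_new_price", "deactivate_old_price", "version_mapping"]
    if old.get("metadata") != new.get("metadata"):
        return ["update_metadata_in_place"]
    return []   # idempotent: an unchanged config produces no Stripe writes
```

The empty-list case is what makes repeated Publish idempotent, and the `version_mapping` action corresponds to archiving the old mapping row while pointing the live mapping at the new price_id.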
External Stripe Changes Detected and Flagged
Given a Stripe Product or Price linked in the mapping table is modified directly in the Stripe Dashboard When the system receives product.updated or price.updated webhook events (or during scheduled reconciliation) Then differences from the mapping (fields changed and old/new values) are recorded And the affected bundle/add-on in MoveMate is flagged with an Attention Required status and a Resolve action And the system does not auto-overwrite Stripe or local data And choosing Resolve allows the user to either accept Stripe as source of truth (update mapping) or re-publish from MoveMate (restore) with the action logged
Environment Isolation and Idempotency
Given the application environment is Test or Live When publishing bundles or add-ons Then only the corresponding Stripe API keys/mode are used and created objects reside in that environment And mapping records include environment and prevent cross-environment ID reuse And the same publish request with the same idempotency key does not create duplicate Products or Prices And attempting to reference Live stripe_product_id or price_id in Test (or vice versa) is blocked with an explicit error
Robust Error Handling and Rollback
Given any step of publish or sync encounters a network error, Stripe API error, validation failure, or partial success When the error is detected Then newly created Stripe objects for this operation are rolled back (deleted when allowed or set inactive) and local mapping changes are not committed And the bundle publish status is marked Failed with a user-visible message including Stripe request_id and error code And the system provides a Retry action and automatically retries transient failures up to 3 attempts with exponential backoff And all actions are audit-logged with correlation_id and timestamps
Read-Only Price References in Checkout & Patient Portal
Given a patient starts checkout or views an episode in the portal linked to a published bundle When pricing is displayed or used for payment Then the UI reads price amount, currency, and interval from the stored Stripe price_id(s) and confirms the amount with Stripe And price fields are read-only in both checkout and portal And existing episodes continue to charge against the originally assigned price_id; new episodes use the latest active price_id at time of enrollment And if a mismatch between stored amount and Stripe is detected, checkout is blocked with an instruction to Resolve or republish
Protocol Templates Library & Editor
"As a physical therapist, I want prebuilt protocol templates I can quickly customize so that I can launch standardized bundles in minutes without deep billing knowledge."
Description

Provide a curated library of prebuilt protocol templates (e.g., ACL reconstruction, rotator cuff repair, low back pain) with recommended durations, visit caps, telehealth cadence, monitoring days, and default add‑ons. Allow clinics to clone, customize, tag, and save templates; support field‑level locking for clinic standards. Include template versioning, change diffs, and compatibility validation against billing guardrails. Enable quick‑start creation from a template with one‑click apply and immediate preview. Integrate with the composer, scheduling defaults, and clinician guidance to standardize offerings and reduce setup time.

Acceptance Criteria
View Prebuilt Protocol Templates with Required Fields
Given I am a clinic admin on the Protocol Templates Library When I open the "ACL Reconstruction" template Then I see populated fields for Duration (in weeks), Visit Cap (count), Telehealth Check-in Cadence (allowed values), Remote-Monitoring Days (per week), and Default Add-ons (list) And each field displays units and passes format validation (non-negative integers where applicable, allowed enums for cadence) And the library includes at least these templates: ACL Reconstruction, Rotator Cuff Repair, Low Back Pain
Clone, Customize, Tag, and Save Template
Given I select the "Rotator Cuff Repair" template in the library When I click Clone Then a new Draft template opens titled "Copy of Rotator Cuff Repair" with a unique ID When I change Duration to 12 weeks, Visit Cap to 16, add tags "Outpatient" and "Shoulder", and click Save Then the template saves successfully, appears in the library with the new title and tags, and an audit entry records user, timestamp, and changed fields And required-field validation blocks Save if any required field is empty or invalid And navigating away with unsaved changes triggers a confirmation prompt
Field-Level Locking for Clinic Standards
Given I have admin permissions When I lock the fields Duration and Visit Cap on a custom template Then lock indicators appear and the fields become read-only for non-admin users And attempts by non-admins to edit those fields in both the template editor and after applying the template in the Composer are blocked with a tooltip "Locked by clinic standards" And only admins can unlock or modify locked fields, with all lock/unlock actions recorded in the audit log
Template Versioning, Publish, and Diffs
Given a Published template exists at version v1.0.0 When I create a new version and change Telehealth Cadence from weekly to biweekly and add a new add-on Then the new version is saved as v1.1.0 in Draft state And the diff view lists field-level changes with old and new values (e.g., Telehealth Cadence: weekly -> biweekly; Add-ons: +[new add-on]) When I publish v1.1.0 Then v1.1.0 becomes Published and v1.0.0 is marked Deprecated but remains viewable and usable by existing bundles And selecting Revert to v1.0.0 creates v1.2.0 in Draft with v1.0.0 values
Guardrail Compatibility Validation on Save/Publish
Given clinic billing guardrails (refunds, grace periods, late-cancel rules, visit caps per duration) are configured When I attempt to Save or Publish a template that violates a guardrail (e.g., Visit Cap exceeds clinic maximum for the specified duration) Then a validation panel lists each error with the field, rule violated, and a remediation suggestion And Publish is blocked until all errors are resolved And warnings (non-blocking) require entering a justification to proceed and are captured in the audit log
One-Click Apply to Composer with Immediate Preview and Stripe Linking
Given I am viewing the "Low Back Pain" template card When I click "Apply to Composer" Then a bundle draft opens pre-populated with the template values within 2 seconds And the preview displays linked Stripe Product and Price for the bundle and default add-ons, or prompts selection if missing And confirming creates the bundle draft without validation errors, while Cancel returns to the library with no changes saved And fields locked in the template remain read-only in the Composer
Scheduling Defaults and Clinician Guidance Auto-Configuration
Given a template has Telehealth Cadence = weekly and Remote-Monitoring Days = 5 When I apply the template and the Composer opens Then scheduling defaults prefill weekly telehealth check-ins across the episode duration and 5 monitoring days per week And clinician guidance tooltips display rationale and locked-field notices on relevant fields And edits to non-locked fields update the schedule preview in real time
Billing Guardrails & Policy Enforcement
"As a compliance manager, I want guardrails that validate refunds, grace periods, and late-cancel rules so that bundles cannot be published with billing mismatches."
Description

Centralize refundable windows, grace periods, late‑cancel/no‑show rules, and bundle termination scenarios, applying them as real‑time validations and publish‑time checks. Provide warnings and blocking errors when configuration conflicts with policy (e.g., refund period exceeds billing cycle, visit cap lower than required check‑ins). Include a what‑if simulator that runs typical patient timelines to detect billing mismatches and edge cases. Store guardrail metadata alongside Stripe price metadata to keep finance, clinical, and legal policies aligned across the app and checkout.

Acceptance Criteria
Publish Block: Refund Window Exceeds Billing Cycle
Given a bundle with billing_cycle_days = 30 And policy.refund_window_days = 35 When the user clicks Publish Then the publish action is blocked And an error banner with code BG-REFUND-001 displays "Refund window (35 days) cannot exceed billing cycle (30 days)" And the refund_window_days field is highlighted in error state And the Publish button remains disabled until refund_window_days <= 30 And no Stripe Product or Price is created or updated
Real-Time Validation: Visit Cap Lower Than Required Check-ins
Given a bundle template requires 4 telehealth check-ins per episode And the user sets visit_cap = 3 When the configuration is saved as a draft Then a field-level blocking error with code BG-VISIT-002 displays "Visit cap (3) cannot be lower than required check-ins (4)" And Save Draft is allowed but Publish is disabled And the error is cleared immediately when visit_cap >= required_check_ins
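The two blocking rules above (BG-REFUND-001 and BG-VISIT-002) are simple invariants over the bundle configuration. A sketch of the validator, returning structured errors so the UI can highlight the offending field and gate the Publish button; the dict shape is an assumption.

```python
def validate_guardrails(cfg: dict) -> list:
    """Blocking publish-time checks; publish is allowed only when the
    returned error list is empty."""
    errors = []
    if cfg["refund_window_days"] > cfg["billing_cycle_days"]:
        errors.append({
            "code": "BG-REFUND-001",
            "message": (f"Refund window ({cfg['refund_window_days']} days) "
                        f"cannot exceed billing cycle "
                        f"({cfg['billing_cycle_days']} days)"),
        })
    if cfg["visit_cap"] < cfg["required_check_ins"]:
        errors.append({
            "code": "BG-VISIT-002",
            "message": (f"Visit cap ({cfg['visit_cap']}) cannot be lower than "
                        f"required check-ins ({cfg['required_check_ins']})"),
        })
    return errors
```

Running the same validator on every field change gives the real-time behavior (errors clear immediately once the constraint holds), while running it at publish time gives the hard block.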
Warning for Non-Blocking Policy Overlap (Grace Period vs Late-Cancel)
Given grace_period_days = 7 And late_cancel_window_hours = 24 with late_cancel_fee_amount = 25 And the first appointment is scheduled within the grace period When the user clicks Publish Then publish proceeds And a warning with code BG-WARN-003 displays "Grace period (7 days) may waive late-cancel fees within first 7 days" And the warning is recorded in the bundle audit log And the warning appears in the Publish Review modal with a Proceed option
What-If Simulator: Detect Billing Mismatch on Early Termination
Given a bundle with duration_days = 30, visit_cap = 8, required_check_ins = 4, refund_window_days = 10, proration_policy = prorate_unused_days, late_cancel_fee_amount = 25 And a simulator scenario "Terminate on day 12 after 2 completed visits and 1 late cancel" When the simulator is run Then the simulator outputs expected charges, refunds, and fees per policy And flags any mismatch where calculated refund > amount paid or fee is applied during grace period And if a mismatch is found, the bundle cannot be published and an error BG-SIM-004 is shown with a link to the offending policy And a downloadable JSON report of the scenario inputs and outputs is generated
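The "terminate on day 12" scenario above can be sketched as a pure calculation. The formula here is an assumption for illustration only: refund the unused fraction of the episode under `prorate_unused_days`, then deduct late-cancel fees; the real policy engine may order, cap, or waive these differently (e.g., inside the grace period).

```python
def simulate_termination(paid: float, duration_days: int, terminate_day: int,
                         late_cancels: int, late_cancel_fee: float) -> dict:
    """Hypothetical early-termination simulation under prorate_unused_days."""
    unused_days = max(duration_days - terminate_day, 0)
    refund = paid * unused_days / duration_days
    fees = late_cancels * late_cancel_fee
    net_refund = max(refund - fees, 0.0)
    # Mismatch guard per BG-SIM-004: a refund can never exceed what was paid.
    assert net_refund <= paid
    return {"refund": round(refund, 2), "fees": round(fees, 2),
            "net_refund": round(net_refund, 2)}
```

For the stated scenario (a $600 bundle, 30-day episode, termination on day 12, one $25 late cancel), this sketch yields an $360 proration, $25 in fees, and a $335 net refund.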
Stripe Price Metadata Sync for Guardrails
Given a bundle is published with finalized policies When the Stripe Product and Price are created or updated Then the Stripe Price metadata includes keys: refund_window_days, grace_period_days, late_cancel_window_hours, late_cancel_fee_amount, no_show_fee_amount, termination_policy, visit_cap, required_check_ins, remote_monitoring_days, policy_version And the metadata values exactly match the bundle configuration And on updates, metadata is changed in-place without creating a new Price unless amount or interval changes And a read-after-write check confirms metadata presence and equality within 5 seconds And on any failure, the publish/update is rolled back and error BG-STRIPE-005 is displayed
Checkout Alignment with Guardrail Metadata
Given a patient initiates Stripe Checkout for a published bundle When the checkout session is created Then the session metadata mirrors the Stripe Price guardrail metadata keys and values exactly And the Checkout confirmation page displays a policy summary sourced from these metadata values And if any required key is missing or mismatched, session creation is blocked and error BG-CHK-006 is returned to the app And the attempt is logged with the offending keys/values
Draft/Publish Workflow with Version Control
"As an operations manager, I want a draft-to-publish workflow with versioning and approvals so that changes are controlled and traceable without disrupting active patients."
Description

Implement a lifecycle with Draft, Review, Approved, Published, and Deprecated states, including role‑based permissions and optional two‑person approval. Track full change history with timestamps and user IDs; generate immutable bundle version IDs referenced in patient episodes and Stripe metadata. Support safe edits via new versions and guided migration flows to move active patients when allowances and pricing are compatible. Provide status indicators, diff view, and rollback to previous versions without disrupting active episodes or analytics.

Acceptance Criteria
Role-Based Draft Creation and Edit Controls
- Given a user with permission Bundle:Manage, When they create a new bundle, Then a record is created in state "Draft" within 2 seconds and the API returns 201 with bundleId.
- Given a user without permission Bundle:Manage, When they attempt to create or edit a Draft, Then the API returns 403 and no changes are persisted.
- Given a Draft bundle, When an authorized user saves edits, Then an audit entry with ISO-8601 timestamp, userId, and field-level diffs is recorded and the draft revision number increments by 1.
- Given a Published bundle, When any user attempts to edit it directly, Then the action is blocked, an option "Create New Version" is presented, and no modification to the Published record is saved.
Two-Person Approval Enforcement
- Given org setting twoPersonApproval=true, When a bundle in state "Review" is approved, Then two distinct users with permission Bundle:Approve must each approve and the creator cannot count as one; each approval logs userId and timestamp.
- Given only one approval present, When attempting transition to "Approved" or "Published", Then the API returns 409 and the UI shows "1 of 2 approvals".
- Given twoPersonApproval=false, When a user with Bundle:Approve approves from "Review", Then a single approval transitions the bundle to "Approved" and logs isSingleApproval=true.
- Given the same user attempts to approve twice, Then the second attempt is rejected with 409 and no duplicate approval is stored.
State Transition Rules and Audit Trail Integrity
- Allowed transitions only: Draft→Review, Review→Approved, Approved→Published, Published→Deprecated, Published→NewVersion(Draft). Any other transition returns 400 and no state change occurs.
- Every state change writes an immutable audit record capturing versionId, fromState, toState, userId, ISO-8601 timestamp, and optional reason; audit records are read-only (modification attempts return 405).
- GET /bundles/{id}/audit returns a complete, chronological list within 2 seconds for histories up to 1,000 entries.
- A bundle cannot be deleted while any version is Published; attempts return 409 with guidance to deprecate first.
Immutable Version IDs and Stripe Metadata Sync
- On first transition of a version to "Published", Then the system generates an immutable versionId (ULID or UUIDv4), persists it, and exposes it via API and UI.
- When a version is Published, Then patient episodes created or updated to that version store versionId, and Stripe Product/Price metadata include key versionId with the same value.
- If any Stripe product/price creation or metadata update fails, Then the publish operation is rolled back atomically: state remains not Published, no patient episode links are written, and an error with correlationId is returned.
- VersionId uniqueness is enforced across all bundles; attempts to reuse or mutate a versionId return 409/400.
Safe Edit via New Version and Guided Migration
- Given a Published version, When a user selects "Create New Version", Then a new Draft is created with cloned fields, a new provisional versionId, and a linkage to the source version; the source remains Published and unchanged.
- The migration flow lists all active patient episodes on the source version, computing eligibility based on compatible allowances and pricing (no decrease in remaining allowances and identical billing cadence and price); the UI shows counts eligible/ineligible with reasons.
- When migrating eligible episodes, Then episodes point to the new version, Stripe subscriptions are updated without double-charging, and guardrails (refunds, grace periods, late-cancel rules) are applied according to org settings.
- Migration actions are logged with episodeId, fromVersionId, toVersionId, userId, and timestamp; in case of partial failures, the system retries idempotently and reports a per-episode status.
Diff View and Rollback Without Disruption
- Given two versions of the same bundle, When opening Diff, Then field-level changes (duration, visit caps, telehealth check-ins, remote-monitoring days, add-ons, guardrails, linked Stripe IDs) are displayed with added/removed/modified markers, and the view loads within 2 seconds for bundles ≤50 fields.
- Selecting "Rollback" from a Published version creates a new Draft identical to the selected prior version; upon Publish, the new version becomes Published and the previously Published version becomes Deprecated automatically.
- Active patient episodes remain on their current version after rollback unless explicitly migrated; historical analytics continue to attribute metrics to the original versionIds with no backfill or mutation.
- Rollback is blocked with a clear error if required Stripe products/prices cannot be created or linked; the UI lists missing prerequisites.
Status Indicators, Filters, and Webhooks
- Bundle list and detail views display a text badge for state (Draft/Review/Approved/Published/Deprecated) and the versionId; badges meet WCAG AA color-contrast guidelines and update within 5 seconds of a state change.
- API GET /bundles and GET /bundles/{id} include state, versionId, publishedAt; clients can filter by state via query param state= and by versionId; results are consistent with UI.
- Webhooks fire on transitions to Approved, Published, and Deprecated with payload {bundleId, versionId, fromState, toState, occurredAt}; delivery is retried with exponential backoff for up to 24 hours.
- Exports and reports include versionId and state columns, ensuring alignment between financial analytics and patient episode data.
Price Calculator & Revenue Forecast
"As a finance lead, I want a price calculator and revenue forecast so that I can assess pricing, margins, and schedule fit before publishing a bundle."
Description

Provide a dynamic calculator that aggregates base bundle price, add‑ons, and policy effects (grace periods, refunds, prorations) to display patient cost and clinic revenue over time. Show per‑visit effective rate, expected telehealth check‑in utilization, and monitoring day costs, with sensitivity toggles for adherence and cancellations. Validate that price coverage matches configured caps and schedule, and surface margin warnings. Export forecast summaries for leadership and feed key metrics into MoveMate analytics and Stripe checkout pages.

Acceptance Criteria
Real-time Bundle Price Aggregation
Given a bundle has a base price, one or more add-ons, and billing policies (grace period days, refund percentages, proration rules) And clinic currency and tax settings are configured When a user edits any of base price, add-ons, or policies in Bundle Composer Then the calculator updates patient total and projected clinic revenue within 1 second of the change And a line-item breakdown shows base, each add-on, and each policy effect with labels and amounts And displayed Patient Total and Clinic Revenue equal the sum of breakdown lines within $0.01 And amounts display in the clinic currency code And if any required input is missing, the totals area shows "Incomplete configuration" and the Export action is disabled
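The $0.01 consistency rule above is easiest to satisfy by quantizing each breakdown line to cents before summing, so the displayed total equals the sum of the displayed lines by construction. A sketch using Python's decimal module; the line items are hypothetical:

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")

def breakdown_total(lines):
    """Sum (label, amount) breakdown lines, quantizing each line to
    cents first so the total always matches the rendered breakdown.
    ROUND_HALF_UP is an assumed rounding policy."""
    total = Decimal("0")
    for _label, amount in lines:
        total += Decimal(str(amount)).quantize(CENT, rounding=ROUND_HALF_UP)
    return total

# Hypothetical bundle: base price, one add-on, one policy effect.
lines = [
    ("Base bundle", "450.00"),
    ("Add-on: extra telehealth check-in", "35.505"),       # displays as 35.51
    ("Policy: early-cancel refund reserve", "-48.551"),    # displays as -48.55
]
```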
Per-Visit Effective Rate, Telehealth Utilization, and Monitoring Costs
Given a bundle has visit caps, expected telehealth check-ins, and remote-monitoring days configured And a forecast period is selected When the calculator runs Then it displays Per-Visit Effective Rate = (forecasted collected patient payments minus refunds) / forecasted attended visits, to two decimals And it displays Expected Telehealth Check-in Utilization (%) based on configured cadence and adherence assumption And it displays Monitoring Day Costs (total and per-day) based on configured per-day rate and days And all derived metrics recompute immediately when any relevant input changes
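The Per-Visit Effective Rate formula above can be sketched directly; Decimal arithmetic and the half-up rounding mode are assumptions beyond the stated two-decimal requirement:

```python
from decimal import Decimal, ROUND_HALF_UP

def per_visit_effective_rate(collected, refunds, attended_visits):
    """Per-Visit Effective Rate = (forecasted collected patient payments
    minus refunds) / forecasted attended visits, to two decimals."""
    if attended_visits <= 0:
        raise ValueError("attended_visits must be positive")
    rate = (Decimal(str(collected)) - Decimal(str(refunds))) / attended_visits
    return rate.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```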
Sensitivity Toggles for Adherence and Cancellations
Given adherence and late-cancel/no-show sensitivity controls are available (adherence 50–100%, late-cancel 0–30%) When a user adjusts any sensitivity control Then projected visit counts, telehealth check-in utilization, refunds/late-cancel fees, and clinic revenue recalculate immediately And the forecast chart and summary totals reflect the current sensitivity selection And preset buttons (Best, Base, Worst) apply predefined sensitivity combinations And a Reset action restores the default clinic assumptions
Cap and Schedule Coverage Validation
Given visit caps, telehealth check-in caps, and monitoring day caps are defined for the bundle And a patient schedule exists for the episode duration When the forecast is computed Then any scheduled item exceeding a configured cap is flagged and excluded from revenue calculations And projected utilization more than 10% below any cap triggers a notice suggesting parameter review And validation status must be "Valid" before Export and Stripe Checkout actions are enabled
Margin Warning Thresholds
Given direct cost inputs exist (per-visit cost, per-check-in cost, per-monitoring-day cost, platform fee %) And a target margin % is configured at the clinic level When the forecast is computed Then Gross Margin = (forecasted revenue − forecasted direct costs) / forecasted revenue is calculated And if Gross Margin < target margin, a yellow "Low margin" warning displays; if ≤ 0, a red "Negative margin" warning displays And the warning includes a breakdown of the top 3 cost drivers by impact And warnings clear automatically when margins rise above thresholds
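The margin formula and the two warning thresholds above can be sketched as follows; the function shape and return values are illustrative assumptions:

```python
def margin_status(revenue, direct_costs, target_margin):
    """Gross Margin = (forecasted revenue - forecasted direct costs)
    / forecasted revenue. Returns (margin, level) where level maps to
    the criterion's warnings: 'negative' (red), 'low' (yellow), 'ok'."""
    if revenue <= 0:
        return None, "negative"  # no revenue: treat as negative margin
    margin = (revenue - direct_costs) / revenue
    if margin <= 0:
        return margin, "negative"
    if margin < target_margin:
        return margin, "low"
    return margin, "ok"
```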
Export and Integrations: Forecast Summary, Analytics, and Stripe Checkout
Given the forecast validation status is "Valid" When a user exports the forecast Then CSV and PDF files are generated containing bundle metadata, assumptions, line-item breakdown, patient total, clinic revenue by week, per-visit rate, telehealth utilization, monitoring costs, timestamps, and version And the MoveMate analytics service receives a payload with the same metrics and returns a 2xx response within 2 seconds And a Stripe Checkout session can be created (in test mode) with mapped product/price IDs and computed amounts in minor currency units And Stripe Checkout totals match the calculator within $0.01

Bundle Checkout

A patient‑friendly Stripe checkout that launches from a SnapCode or secure link. Supports Apple Pay/Google Pay, HSA/FSA cards, installments, and subscriptions with clear what’s‑included summaries and e‑sign terms. Confirmation flows straight back to MoveMate, attaching receipts to the episode and unlocking the program instantly—cutting front‑desk calls and first‑day drop‑offs.

Requirements

SnapCode & Secure Link Checkout Launch
"As a patient starting a therapy program, I want to scan a code or tap a secure link to pay quickly so that I can begin my exercises without calling the clinic."
Description

Enable patients to initiate Stripe Checkout from a scannable SnapCode (QR) or a secure short link tied to a specific clinic, episode, and bundle. Generate signed, expiring, optionally single-use tokens to prevent link sharing and enforce access control. Auto-detect device to open in-app webview or default browser with graceful fallback if the camera or app is unavailable. Prefill patient identifiers (name, email, episode ID) via encrypted metadata while keeping PHI out of Stripe where possible. Include rate limiting, abuse detection, and deep-link return to MoveMate post-transaction. Provide analytics parameters (UTM/source/clinician) for conversion tracking without exposing sensitive data.

Acceptance Criteria
Signed, Expiring, Single-Use Token Enforcement
Given a checkout link created with a signed token tied to clinicId, episodeId, and bundleId and TTL=30 minutes When the link is opened within TTL Then a Stripe Checkout session is created and loads successfully. Given the same link When opened after TTL expires Then the user sees a "Link expired" message with a "Request new link" CTA and no session is created. Given a token marked single-use When it is used to create a Stripe session Then subsequent attempts with the same token return HTTP 410 Gone and no new session is created. Given a token configured as multi-use with maxUses=5 When used a 6th time Then the 6th attempt returns HTTP 429 Too Many Requests and no new session is created. Given any attempt where clinicId/episodeId/bundleId in the URL do not match the token payload When validation runs Then the request is rejected with HTTP 400 and the event is logged as an abuse attempt.
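One way to implement the signed, expiring token these criteria describe is an HMAC-SHA256 signature over a payload binding clinicId, episodeId, and bundleId plus an expiry. The payload layout, encoding, and key handling below are illustrative assumptions, not MoveMate's actual token format:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # per-environment secret (assumption)

def make_token(clinic_id, episode_id, bundle_id, ttl_seconds=1800, now=None):
    """Sign a payload binding the three IDs with a 30-minute TTL."""
    now = int(time.time()) if now is None else now
    payload = {"clinicId": clinic_id, "episodeId": episode_id,
               "bundleId": bundle_id, "exp": now + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def validate_token(token, clinic_id, episode_id, bundle_id, now=None):
    """Map failures to the criterion's rejection paths: tampered
    signature, expired TTL, or ID mismatch (HTTP 400 + abuse log)."""
    now = int(time.time()) if now is None else now
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return "invalid_signature"
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < now:
        return "expired"
    if (payload["clinicId"], payload["episodeId"], payload["bundleId"]) != \
            (clinic_id, episode_id, bundle_id):
        return "mismatch"
    return "ok"
```

Single-use and maxUses enforcement would sit on top of this, in a server-side ledger keyed by token ID, since the token itself cannot record how often it has been redeemed.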
SnapCode (QR) Launch With Graceful Fallback
Given a device with camera permission granted When the user scans a valid SnapCode Then the secure short link opens within 2 seconds and navigates to Stripe Checkout. Given camera permission denied or camera hardware unavailable When the user attempts to scan Then the UI displays the short link and a copy/share button, and tapping the link opens checkout. Given the MoveMate app is installed When the link is opened Then the in-app webview opens; otherwise the default browser opens. Given a malformed or unknown QR payload When scanned Then an error is shown and no request to Stripe is made.
Prefill and PHI Minimization in Stripe Checkout
Given a valid token When a Stripe Checkout session is created Then the session is prefilled with customer name and email only. And the session metadata contains an encrypted blob with episodeId, clinicId, bundleId, and attribution parameters. And metadata excludes diagnosis codes, date of birth, address, notes, and any free-text fields. And none of episodeId/clinicId/bundleId are rendered on any Stripe-hosted UI or receipt. Given the webhook with session_id is received When MoveMate decrypts metadata Then the receipt is attached to the correct episode.
Post-Transaction Deep Link and Program Unlock
Given a successful payment When Stripe redirects via success_url with session_id Then MoveMate deep-links the user to mm://episode/{episodeId}/program and shows the program unlocked within 10 seconds of webhook receipt. And a receipt PDF is fetched and attached to the episode with amount, date, and last4. Given a canceled payment When cancel_url is triggered Then MoveMate returns the user to the app/browser with the episode unchanged and shows a retry CTA. Given a payment fails When Stripe posts the payment_failed event Then no unlock occurs and a support contact option is presented.
Rate Limiting and Abuse Detection
Given repeated token validation requests from the same IP or device exceed 10 per minute When the threshold is crossed Then further requests receive HTTP 429 with a Retry-After header of 60 seconds. Given 5 invalid or expired token attempts within 10 minutes When the threshold is crossed Then the token is revoked and a clinician dashboard alert is generated. Given the same token is used from two different autonomous systems (ASNs) within 2 minutes When the second attempt occurs Then it is blocked or challenged with CAPTCHA and logged. Given any rate-limit or block event When it occurs Then it is logged with correlationId, hashed tokenId, truncated client IP, and timestamp and visible to admins within 5 minutes.
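The 10-requests-per-minute rule maps naturally onto a per-client sliding-window counter. A minimal in-memory sketch; a production deployment would back this with a shared store (an assumption, since the spec does not name one):

```python
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Sliding-window limiter keyed by IP or device. allow() returning
    False corresponds to the HTTP 429 + Retry-After response."""

    def __init__(self, limit=10, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client_key -> request timestamps

    def allow(self, client_key, now):
        q = self.hits[client_key]
        # Drop timestamps that have aged out of the window.
        while q and q[0] <= now - self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```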
Analytics Parameters Without Sensitive Data
Given UTM parameters (utm_source, utm_medium, utm_campaign) and clinician alias are present When the short link is generated Then they are appended to the link and stored only in encrypted metadata and server-side logs. Given checkout loads When the Stripe host page renders Then no UTM or clinician alias appears on customer-facing UI or receipts. Given success or cancel webhooks When analytics events are emitted Then events include UTM fields, clinician alias, session_id, bundleId and exclude name, raw email (use SHA-256 hash), phone, and diagnosis. Given the analytics dashboard When data is processed Then conversion metrics by UTM/source/clinician are visible within 60 minutes with >99% event match to Stripe sessions.
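Building the analytics event with a hashed email and UTM passthrough, per the criterion above, might look like this; field names beyond those the criterion lists are assumptions:

```python
import hashlib

SENSITIVE_FIELDS = {"name", "email", "phone", "diagnosis"}

def analytics_event(session, params):
    """Emit UTM fields, clinician alias, session_id, and bundleId while
    replacing the raw email with its SHA-256 hash and dropping other
    identifiers, as the criterion requires."""
    event = {
        "session_id": session["session_id"],
        "bundleId": session["bundleId"],
        "clinician_alias": session.get("clinician_alias"),
        "email_sha256": hashlib.sha256(
            session["email"].lower().encode()).hexdigest(),
    }
    # Pass through only utm_* parameters; everything else is dropped.
    event.update({k: v for k, v in params.items() if k.startswith("utm_")})
    assert SENSITIVE_FIELDS.isdisjoint(event)  # guard against PHI leakage
    return event
```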
Device Detection and Webview/Browser Routing
Given iOS with MoveMate installed and Apple Pay available When the secure link is opened Then checkout opens in SFSafariViewController with the Apple Pay button visible. Given Android with MoveMate installed and Google Pay available When the secure link is opened Then checkout opens in Chrome Custom Tabs with the Google Pay button visible. Given a desktop browser When the link is opened Then checkout opens in the default browser with card and installments options and no mobile wallet buttons. Given device detection fails When the link is opened Then checkout opens in the default browser and the session creation still succeeds.
Multi-Tender Payments (Wallets, HSA/FSA, Installments, Subscriptions)
"As a patient, I want to choose wallets, HSA/FSA, installments, or a subscription so that I can pay in a way that matches my budget and coverage."
Description

Support Apple Pay, Google Pay, major cards, and HSA/FSA cards through Stripe with proper merchant domain verification and wallet entitlements. Allow clinics to configure installment plans (e.g., BNPL providers supported by Stripe) and subscriptions (weekly/monthly program access) per bundle. Surface eligibility and total cost over time, store the default payment method for recurring charges, and handle SCA/3DS challenges. Respect clinic-level toggles to enable/disable methods and ensure no PAN storage in MoveMate (PCI scope reduced via Stripe-hosted Checkout). Provide fallback paths if wallets are unavailable and clearly indicate HSA/FSA usage guidance on receipts.

Acceptance Criteria
Wallet Checkout with Apple Pay/Google Pay
Given the clinic has enabled Wallets and completed Stripe merchant domain verification and wallet entitlements When a patient on a compatible device opens Stripe-hosted Checkout from a SnapCode or secure link Then the Apple Pay or Google Pay button is displayed with correct merchant name and bundle total And selecting the wallet and authenticating completes payment via Stripe without exposing PAN to MoveMate And any required SCA/3DS challenge is presented and, on success, the payment is captured And MoveMate receives checkout.session.completed and within 10 seconds unlocks the bundle program and attaches the Stripe receipt to the patient’s episode
HSA/FSA Card Acceptance and Receipt Guidance
Given the clinic has enabled HSA/FSA acceptance in Stripe and the bundle is marked HSA-eligible When a patient pays using an HSA/FSA card via Stripe Checkout Then the payment is processed on HSA/FSA rails supported by Stripe And the receipt stored in the episode includes an HSA/FSA usage note and itemized eligible items And if the card is declined due to ineligible MCC or restrictions, a clear error explains why and offers alternative payment options
Installment Plan Eligibility, Disclosure, and Selection
Given the clinic has configured at least one Stripe-supported installment provider for the bundle And the patient is in a supported locale and meets provider eligibility When the patient opens Checkout Then eligible installment options show APR (if applicable), number of payments, due dates, and total cost over time And selecting an installment plan updates the order summary with today’s due amount and the payment schedule And ineligible patients see the option hidden or disabled with a non-blocking explanation And MoveMate stores the selected installment plan identifier on the episode and displays the schedule in billing
Subscription Setup with Default Payment Method and Recurring Charges
Given the clinic has configured a weekly or monthly subscription for the bundle in Stripe When the patient completes checkout for the subscription Then Stripe creates a customer with a default payment method stored in Stripe only (no PAN in MoveMate) And the initial payment is captured and the subscription is set to active And MoveMate records the Stripe customer ID, subscription ID, and default payment method fingerprint And future renewals trigger SCA/3DS when required and, on success, renewal receipts link to the same episode within 10 seconds
Clinic-Level Payment Method Toggles Enforcement
Given specific payment methods are disabled in clinic settings When the patient opens Stripe Checkout for any bundle Then disabled methods (wallets, HSA/FSA, installments, subscriptions) are not displayed or selectable And only enabled methods are shown consistently across web and mobile And an audit log entry records the effective payment methods configuration used at checkout
SCA/3DS Challenge Handling and Recovery
Given the selected payment method requires SCA/3DS When the challenge is initiated during Checkout Then the challenge modal is presented and, on successful authentication, the payment is captured And if authentication fails or times out, Checkout shows an actionable error and allows retry or method change without losing context And MoveMate does not unlock access or attach receipts unless a succeeded payment event is received
Wallet Unavailable Fallback Path
Given wallets are enabled but the device or browser is not eligible for Apple Pay or Google Pay When the patient opens Stripe Checkout Then wallet buttons are not displayed and a standard card entry form is shown And the card form supports major cards and HSA/FSA per clinic configuration And successful payment via the fallback path triggers the same unlocking and receipt attachment as wallet payments
Clear Bundle Summary & Pricing Transparency
"As a patient, I want a clear breakdown of what I’m buying and the total cost so that I can make an informed decision before I pay."
Description

Display a concise, localized “what’s included” summary with session counts, program access, and any add-ons for the selected bundle. Provide itemized pricing, taxes/fees, discounts, and promo code entry with validation. For installments and subscriptions, show per-period amount, term, total cost, renewal date, and cancellation policy before payment. Support clinic-configurable bundle definitions and dynamic price pulls from Stripe. Ensure readability and accessibility (WCAG AA), plain-language explanations, and currency formatting. Confirm final totals prior to authorization to reduce support calls and drop-offs.

Acceptance Criteria
Display What's Included Summary
Given a patient opens Bundle Checkout for a selected bundle When the checkout page loads Then a "What's included" section is visible above the pricing breakdown And the section lists the bundle name, total session count with unit, program/app access duration, and each add-on name with quantity And the content exactly matches the clinic-configured bundle definition for that bundle and location And the text is localized to the user's locale (e.g., en-US, fr-CA) and uses plain language And no item in the section overflows or truncates on mobile viewports >= 320px width
Itemized Pricing and Promo Code Validation
Given the checkout has loaded pricing for the selected bundle When the pricing is displayed Then a "Price breakdown" shows base price, add-ons, taxes/fees, discounts (if any), subtotal, and total And all amounts are formatted per the user's locale and currency, matching Stripe unit_amount/100 rounding exactly And when a promo code is entered and submitted, if valid and applicable, then the corresponding Stripe discount is applied and totals recalculate immediately And if the promo code is invalid, expired, inapplicable, or usage-limited, then an inline error explains the reason and no discount is applied And totals always equal the sum of displayed line items with a difference of $0.00 And recalculation occurs within 300 ms after promo application
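Matching Stripe's unit_amount/100 rounding exactly means deriving display amounts from minor currency units rather than from floats. A sketch assuming two-decimal currencies; zero-decimal currencies such as JPY would need a per-currency lookup the sketch omits:

```python
from decimal import Decimal

def display_amount(unit_amount):
    """Convert a Stripe unit_amount (minor units, e.g. cents) to a
    major-unit Decimal. Two-decimal currencies assumed."""
    return Decimal(unit_amount) / 100

def totals(line_items):
    """Total equals the sum of displayed line items exactly, per the
    $0.00-difference rule."""
    return sum((display_amount(li["unit_amount"]) * li["quantity"]
                for li in line_items), Decimal("0"))
```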
Installment Plan Disclosure and Confirmation
Given the bundle offers an installment plan When the user selects "Pay in X installments" Then the UI displays per-period amount, number of payments, payment schedule dates (first charge today in the clinic's timezone), and total paid over the term And any installment fees are itemized and included in the total And a plain-language cancellation/refund policy for installments is shown above the Pay button And the Pay/Authorize button remains disabled until the user checks "I understand the payment schedule and total cost" And the created Stripe PaymentIntent amount equals the "Due today" amount exactly And if the selected payment method does not support installments, then the option is disabled with an explanation
Subscription Terms and Renewal Transparency
Given the bundle is a subscription When the checkout loads Then the UI displays per-period price, billing interval (e.g., monthly), next renewal date in the user's timezone, and any trial length with the first charge date And a plain-language cancellation policy and how to cancel are visible above the Pay button And the "Due today" amount correctly reflects trial status (0.00 during trial, otherwise first period + taxes/fees) And the Pay/Authorize button remains disabled until the user accepts the subscription terms checkbox And the created Stripe Subscription and initial PaymentIntent amounts match what is displayed
Dynamic Price Pull and Integrity with Stripe
Given clinic-configured Stripe Price IDs exist for the selected bundle When the checkout initializes Then prices are fetched from Stripe server-side and used to build all displayed amounts And if Stripe returns an error or prices cannot be fetched, then pricing is hidden, payment actions are disabled, and a retry message is shown And if a discrepancy is detected between displayed totals and Stripe-calculated totals, then checkout is blocked and an error is displayed until values are consistent And all amounts shown to the user are derived from Stripe objects without client-side overrides And the currency shown matches the Stripe Price currency
Accessibility, Readability, and Localization
Given a keyboard-only or screen-reader user views checkout When navigating the summary and pricing Then all interactive controls are reachable in a logical focus order with visible focus indicators And color contrast meets WCAG 2.2 AA; non-text contrast meets AA And price breakdowns and totals have semantic roles/labels and are announced correctly by screen readers And error/success messages for promo codes are announced via aria-live And explanatory text (what's included, policies) scores at or below grade 8 on Flesch-Kincaid readability And currency and dates format per the user's locale (e.g., $1,234.56 en-US; 1 234,56 $ fr-CA) with an accessible currency label
Final Total Review Before Authorization
Given any bundle type (one-time, installment, subscription) When the user proceeds to pay Then a Review & Confirm section surfaces the final "Due today" amount, itemized taxes/fees, and, when applicable, the future charge schedule And the user must explicitly confirm before the Pay/Authorize button is enabled And the PaymentIntent/SetupIntent amount created equals the confirmed "Due today" amount And if the Stripe amount changes between confirmation and authorization, then the payment is aborted, the UI shows a mismatch error, and the user must re-confirm updated totals And a confirmation snapshot (items, totals, currency, timestamp) is stored and associated with the episode
E-sign Terms & Financial Consent Capture
"As a patient, I want to read and e-sign the payment and subscription terms so that I understand my obligations and can authorize recurring charges confidently."
Description

Present required legal documents (payment terms, subscription terms, cancellation/refund policy, recurring authorization) during checkout with explicit consent capture. Support typed name or drawn signature, acceptance checkbox, timestamp, IP, user agent, and document versioning. Block payment submission until consent is recorded. Generate a finalized PDF of the signed terms and attach it to the patient’s episode in MoveMate. Maintain a tamper-evident audit trail and multi-language variants, with clinic-specific addenda when configured.

Acceptance Criteria
Block Payment Until Consent Is Captured
Given a patient is on the Bundle Checkout payment step And required legal documents are displayed (payment terms, subscription terms if applicable, cancellation/refund policy, recurring authorization when applicable) When any required consent is missing (unchecked acceptance checkbox or missing signature) Then the primary payment action (Pay/Subscribe) is disabled And any attempt to submit via API returns HTTP 422 with error_code="CONSENT_REQUIRED" and a list of missing fields And inline, accessible error messages are shown adjacent to each missing consent control And when all required consents are provided Then the primary payment action becomes enabled and API submission no longer returns CONSENT_REQUIRED
Signature Capture and Metadata Recording
Given the patient chooses signature entry When the patient selects "Typed" and enters a non-empty name (≥2 characters) Then a signature record is saved with signature_type="typed", signed_name, timestamp (ISO 8601 UTC), ip_address, user_agent, and consent_checkbox=true And the exact text of each accepted document is bound via document_version_id and content_hash When the patient selects "Drawn" and draws a signature meeting minimum area threshold Then a signature record is saved with signature_type="drawn", signed_image (PNG or vector), timestamp (ISO 8601 UTC), ip_address, user_agent, and consent_checkbox=true And the signature preview matches the stored artifact And both signature modes pass on mobile Safari/Chrome and desktop Chrome/Edge/Firefox
Document Versioning and Clinic Addenda Binding
Given the clinic has configured current terms and optional clinic-specific addenda When the checkout renders legal documents Then the displayed documents include the active version identifiers (e.g., semver or UUID) and effective dates And any configured clinic addendum is appended and clearly labeled And upon consent, the stored consent record includes document_version_id, addendum_version_id (if present), and content_hash for each document shown And subsequent changes to clinic terms create a new version without altering previously signed records
Generate and Attach Finalized Signed PDF
Given the patient has provided all required consents When checkout completes successfully Then a single finalized PDF is generated that includes rendered document content, clinic addenda (if any), signed_name or drawn signature image, acceptance checkbox state, timestamp (ISO 8601 UTC), ip_address, user_agent, document_version_id(s), and content_hash And the PDF file is stored immutably and linked to the patient's episode in MoveMate And the episode record shows a downloadable "Signed Terms & Financial Consent" artifact And the PDF can be re-downloaded and its hash matches the stored content_hash
Tamper-Evident Audit Trail
Given a consent is captured When the system persists the record Then an audit entry is created with a unique audit_id, actor (patient or authorized payer), timestamp (ISO 8601 UTC), ip_address, user_agent, document_version_id(s), consent fields, and a cryptographic hash (e.g., SHA-256) of the finalized PDF And audit entries are append-only and cannot be edited; any attempted update results in a new audit entry linked via parent_audit_id And a verification endpoint recomputes the PDF hash and returns status "verified" when matching, otherwise "tampered"
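A hash-chained log is one way to make the audit trail tamper-evident: each entry embeds the SHA-256 hash of its predecessor, so any in-place edit is detectable by recomputation. A sketch, with field names beyond the criterion's treated as assumptions:

```python
import hashlib
import json

GENESIS = "0" * 64

def _hash(record):
    """Stable hash of a record, excluding its own entry_hash."""
    body = {k: v for k, v in record.items() if k != "entry_hash"}
    return hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_audit(chain, entry):
    """Append-only write: link to the previous entry's hash."""
    prev_hash = chain[-1]["entry_hash"] if chain else GENESIS
    record = dict(entry, prev_hash=prev_hash)
    record["entry_hash"] = _hash(record)
    chain.append(record)
    return record

def verify_chain(chain):
    """Recompute every hash and link; the criterion's verification
    endpoint would return these same two statuses."""
    prev = GENESIS
    for record in chain:
        if record["prev_hash"] != prev or record["entry_hash"] != _hash(record):
            return "tampered"
        prev = record["entry_hash"]
    return "verified"
```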
Multi-language Variant Selection and Fallback
Given the patient has a preferred language from Accept-Language or explicit selector When legal documents are rendered Then the corresponding language variant and translation version are displayed And the consent record stores language_code and translation_version_id And switching language clears prior consent state and re-renders the documents in the new language And if a requested language variant is unavailable, the system falls back to English and displays a non-blocking notice And all language variants contain semantically equivalent clauses (validated by matching canonical clause IDs)
Recurring Authorization for Subscriptions and Installments
Given the patient selects a subscription or installment plan When terms are presented Then a recurring payment authorization clause is prominently displayed and requires explicit acceptance (checkbox) And payment submission is blocked unless the recurring authorization checkbox is accepted And the consent record includes recurring_authorization=true and the plan identifier And the finalized PDF includes the recurring authorization text and acceptance state And for one-time purchases, recurring_authorization is not required and remains false
Real-time Confirmation, Receipt Attachment & Instant Program Unlock
"As a clinician, I want successful payments to automatically unlock the program and attach receipts so that care can start immediately without manual steps."
Description

Integrate Stripe webhooks (e.g., checkout.session.completed, payment_intent.succeeded, invoice.paid) with idempotent handlers to confirm payment, map sessions to patients/episodes, and update entitlements. Immediately unlock the purchased program and mark the episode as active upon confirmed payment, including handling SCA/3DS and delayed capture states. Attach itemized Stripe receipts and signed terms to the episode; include healthcare-friendly descriptors and codes when provided to support HSA/FSA documentation. Send confirmation notifications to patient and clinician, and log all events for auditability and support.

Acceptance Criteria
Immediate Program Unlock on Confirmed Payment
Given a verified Stripe event of type checkout.session.completed with payment_status = "paid" or payment_intent.succeeded or invoice.paid mapped to a MoveMate episode When the webhook is received and processed Then the episode status is set to Active within 5 seconds of webhook receipt And the purchased program entitlements are enabled immediately for the patient account And a confirmation timestamp and the source Stripe event.id and payment identifier are persisted on the episode And duplicate unlocks are prevented by checking existing confirmation state before applying changes
Robust Session-to-Patient/Episode Mapping
Given a checkout session initiated via SnapCode or secure link containing patient_id and episode_id in metadata or a reference token resolvable to those IDs When the corresponding Stripe webhook is received Then the system resolves to exactly one patient and one episode within the same clinic tenant And if resolution fails or is ambiguous, the episode is not unlocked and a mapping_error is logged with correlation IDs And the mapping decision (matched IDs, method used, and source metadata) is stored for audit And lookups prefer direct metadata, then signed link reference, then Stripe customer mapping; cross-tenant matches are rejected
Idempotent Webhook Processing
Given Stripe may deliver duplicate or out-of-order events for the same PaymentIntent or Invoice When the webhook handlers process events concurrently or repeatedly Then each business outcome (unlock, attachment, notification) occurs at most once per PaymentIntent/Invoice And deduplication is enforced using Stripe event.id and payment identifiers persisted in an idempotency ledger And out-of-order events that do not advance state are acknowledged without side effects And the handler returns 2xx only after the event is safely recorded and enqueued, with retries supported on transient failures
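The at-most-once guarantee above amounts to deduplicating on both the Stripe event id and the payment identifier before applying side effects. A minimal sketch with an in-memory ledger; durable, transactional storage is assumed in production:

```python
def process_event(ledger, event, apply_unlock):
    """Idempotent handler: dedupe first on event id (duplicate
    delivery), then on payment identifier (different events for the
    same PaymentIntent), so unlock fires at most once."""
    if event["id"] in ledger["events"]:
        return "duplicate_event"
    ledger["events"].add(event["id"])
    pi = event["payment_intent"]
    if pi in ledger["completed_payments"]:
        return "already_processed"
    ledger["completed_payments"].add(pi)
    apply_unlock(pi)  # the single permitted side effect
    return "processed"
```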
SCA/3DS and Delayed Capture Handling
Given a payment in states requires_action, processing, or requires_capture When receiving events such as checkout.session.completed with payment_status != "paid" or payment_intent.requires_action/processing/requires_capture Then the episode remains locked and a pending_payment state is stored on the episode And no notifications of unlock are sent When a subsequent payment_intent.succeeded or invoice.paid is received for the same transaction Then the episode is unlocked and program entitlements are updated within 5 seconds And if payment_intent.payment_failed or invoice.payment_failed is received, the episode remains locked and a failure reason is recorded
Receipt and Signed Terms Attachment to Episode
Given a confirmed payment (payment_intent.succeeded or invoice.paid) for a bundle checkout When processing the event Then an itemized receipt (line items, quantities, unit price, taxes, discounts, totals, currency) is attached to the episode as a retrievable artifact (PDF or durable link) And signed terms/consent evidence from Checkout (acceptance timestamp, IP, user agent when available) is attached to the episode And healthcare descriptors and codes provided in product/price metadata (e.g., descriptors, CPT/HCPCS or internal codes) are included on the receipt and confirmation artifacts; if absent, a clinic-approved default descriptor is used And attachments are immutable, permissioned (patient and clinician read), and include checksum, created_at, and Stripe correlation IDs
Confirmation Notifications to Patient and Clinician
Given the episode has been unlocked following a confirmed payment When the unlock is recorded Then the patient receives a confirmation via their preferred channel(s) within 10 seconds including program name, start date, receipt link, and support contact And for subscriptions, the message includes next billing date and manage-subscription link And the assigned clinician receives a notification within 30 seconds containing patient, episode, program, and payment summary And notifications are deduplicated per transaction, retried with exponential backoff on failure, and all delivery outcomes are logged
End-to-End Audit Logging and Monitoring
Given any Stripe payment lifecycle event relevant to checkout and fulfillment When the event is processed Then the system logs the raw event payload, signature verification outcome, correlation IDs (event.id, payment_intent, invoice), and processing timestamps And each state change (mapping, entitlement updates, attachments, notifications) is recorded with before/after values, actor=system, episode_id, and patient_id And audit records are immutable, queryable by patient, episode, Stripe IDs, and time; queries return within 1 second for the last 30 days of data And alerts trigger on webhook signature verification failures, processing errors, and SLA breaches for unlock (>5 seconds) or notification delivery (>30 seconds)
Refunds, Cancellations & Proration Management
"As a clinic admin, I want to manage refunds and subscription changes so that billing stays accurate and patient access reflects the current payment status."
Description

Provide a clinic-admin console to issue full/partial refunds, cancel or pause subscriptions with proration rules, and define grace periods. Sync changes with Stripe and immediately adjust MoveMate access (revoke, pause, or set end-of-term). Generate credit memos/updated receipts and notify patients. Enforce clinic policy constraints and maintain a complete audit log (actor, time, reason, amounts). Handle edge cases such as partial settlements, disputes, and subscription payment failures with clear status syncing.

Acceptance Criteria
Full Refund of One‑Time Bundle Within Policy Window
Given a completed one-time bundle purchase within the clinic’s refundable window and an admin with refund permission When the admin submits a full refund with a mandatory reason Then a Stripe refund for 100% of the captured amount is created and succeeds And MoveMate program access tied to the episode is revoked within 30 seconds And an updated receipt showing the refund is generated and attached to the episode within 30 seconds And a patient notification (email/SMS/in‑app) is queued within 30 seconds And an immutable audit log entry records actor, timestamp, refund amount, currency, reason, pre/post access state, Stripe charge/refund IDs And the order status in the console updates to Refunded within 30 seconds
Partial Refund for Subscription Mid‑Cycle With Proration
Given an active subscription with an invoice for the current period and clinic policy allows partial refunds When the admin issues a partial refund amount within the refundable limit and selects a reason Then proration is calculated against refundable invoice items and a Stripe partial refund is created for the specified amount And MoveMate access remains active and unchanged And a credit memo/updated receipt reflecting the partial refund line items is generated and attached within 30 seconds And a patient notification describing the partial refund is queued within 30 seconds And the console displays the net period charge after refund And an immutable audit log entry captures actor, timestamp, reason, original amount, refund amount, remaining balance, and Stripe IDs
Immediate vs End‑of‑Term Cancellation With Grace Period
Given an active subscription and clinic-defined proration and grace-period settings When the admin selects Immediate cancellation Then Stripe subscription is canceled immediately and any unused time is refunded per proration rules And MoveMate access is revoked within 30 seconds unless a grace period is configured, in which case access continues until grace end And updated receipt/credit memo is generated and attached within 30 seconds And patient notification with effective date/time is queued within 30 seconds And an audit log entry records actor, timestamp, cancellation type, proration/refund amounts, grace window, and Stripe subscription ID When the admin selects End-of-term cancellation Then Stripe cancel_at_period_end is set true And MoveMate access remains active until the current term end, then auto-revokes within 30 seconds And patient notification includes the final access date And audit log records the scheduled end date
Pause Subscription With Configurable Resume Date
Given an active subscription and a clinic policy permitting pauses When the admin pauses the subscription with a specified start time and resume date Then Stripe subscription is set to pause collection per the selected timing And MoveMate access state is set to Paused within 30 seconds and cannot start new sessions during pause And no invoices are collected during the pause window And on resume date, Stripe resumes collection and MoveMate access auto-restores within 30 seconds And patient notifications are queued for pause start and resume events And an audit log entry records actor, timestamp, pause start, resume date, reason, and Stripe subscription ID
Policy Constraints and Non‑Refundable Items Enforcement
Given clinic refund policy rules (windows, non-refundable items, role-based overrides) When an admin attempts a refund or cancellation that violates policy Then the UI blocks the action with a specific error message identifying the violated rule And no Stripe API calls are made And an audit log entry records actor, timestamp, attempted action, reason provided, policy rule violated, and outcome Blocked When an authorized user with override permission proceeds with an override and provides a mandatory justification Then the action executes, is flagged as Override in audit log, and notifications/receipts are generated accordingly
Disputes and Partial Settlement Handling
Given a charge related to a MoveMate purchase When Stripe sends a dispute.opened webhook for the charge Then the system ingests the event, updates the order state to Disputed, and applies clinic-configured access policy (Pause or Allow) within 60 seconds And the admin and patient are notified per clinic settings When the dispute is won or lost Then access and financial records are updated accordingly (reinstate/refund), receipts/credit memos are generated, and audit log entries added with Stripe dispute ID and outcomes When a charge was only partially captured/settled Then the UI shows the maximum refundable amount based on captured value and prevents refund requests exceeding that limit And attempted over-refunds are blocked and logged
Subscription Payment Failure and Grace Processing
Given an active subscription and configured dunning/grace settings When Stripe sends an invoice.payment_failed event Then a grace period is started immediately per clinic settings and MoveMate access remains Active during grace, switching to Paused/Revoked when grace expires And automated dunning notifications and retries are scheduled And if payment later succeeds before grace expiration, access remains Active and status updates to Paid within 60 seconds And if grace expires without recovery, Stripe subscription status updates per policy (paused or canceled) and MoveMate access is revoked within 30 seconds And all state transitions, retries, and outcomes are captured in the audit log with timestamps and Stripe invoice/payment intent IDs
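The grace-period transitions above can be sketched as a small state function. The state-dict shape and the internal `grace.expired` timer event are illustrative assumptions; only `invoice.payment_failed` and `invoice.paid` are real Stripe event types:

```python
from datetime import datetime, timedelta

def access_after_event(event: str, state: dict, grace_days: int,
                       now: datetime) -> dict:
    """Return the next access state for a subscription billing event.

    Illustrative sketch: access stays Active during the grace window,
    recovers on payment, and is revoked only once grace expires.
    """
    state = dict(state)  # never mutate the caller's record
    if event == "invoice.payment_failed":
        if state.get("grace_until") is None:
            state["grace_until"] = now + timedelta(days=grace_days)
        state["access"] = "Active"  # access unchanged during grace
    elif event == "invoice.paid":
        state["grace_until"] = None
        state["access"] = "Active"
        state["status"] = "Paid"
    elif event == "grace.expired":  # hypothetical internal timer tick
        if state.get("grace_until") and now >= state["grace_until"]:
            state["access"] = "Revoked"
            state["grace_until"] = None
    return state
```

Each returned state would be persisted alongside the audit-log entry (timestamps plus Stripe invoice/payment intent IDs) required by the criterion.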
Abandoned Checkout Recovery & Reminders
"As a patient who got distracted, I want a gentle reminder with a secure link so that I can easily return and complete my checkout."
Description

Track initiated but incomplete checkouts and send time-boxed, secure reminder links via SMS/email with patient opt-in and a clinic-configurable cadence. Preserve token security (tokens are refreshed on every send) and avoid over-messaging with frequency caps. Provide a lightweight help prompt and clinic contact options to reduce drop-offs. Offer conversion analytics to clinics and MoveMate to quantify recovery rates and optimize reminder timing and messaging.

Acceptance Criteria
Detect Abandoned Checkout After Inactivity
Given a patient launches Bundle Checkout via SnapCode or secure link When no successful payment is recorded within 15 minutes of session start and the session is closed or idle Then create a single Abandoned Checkout record with timestamp, clinic, program/bundle, device type, and channel And associate the record to the patient (or provisional contact) and episode if known And store only the Stripe session ID and non-PCI metadata; no card data is persisted And do not create a duplicate Abandoned record for the same Stripe session
Send Secure Reminder Links with Opt‑In Respect
Given a patient has an Abandoned Checkout record and channel-level consent status is known When a reminder is due Then send the reminder only via channels where explicit opt-in exists (SMS and/or email) And generate a fresh single-use tokenized resume link tied to the abandoned session and patient And set token expiry to the clinic-configurable window (default 48 hours; range 1–72 hours) And include clinic name, bundle title, and secure link; exclude PHI and payment details from the message body When the patient clicks the link Then resume the checkout in Stripe with allowable prefilled data and no exposure of payment information
Enforce Reminder Cadence and Frequency Caps
Given a clinic-configurable cadence is set (e.g., 1h, 24h, 72h after abandonment) When scheduling reminders Then schedule according to the cadence in the patient’s local time zone And enforce a maximum of 3 reminders per abandoned checkout And enforce a cross-episode cap of 1 reminder per patient per 24 hours And do not send reminders outside 08:00–20:00 local time unless the clinic explicitly overrides this setting
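The quiet-hours rule in the cadence criterion reduces to a small predicate over the patient-local send time (a naive datetime stands in for a zone-aware one here):

```python
from datetime import datetime, time

def within_send_window(local_dt: datetime, override: bool = False) -> bool:
    """True if a reminder may be sent at this patient-local time.

    Reminders go out only between 08:00 and 20:00 local time unless the
    clinic explicitly overrides the quiet-hours setting.
    """
    return override or time(8, 0) <= local_dt.time() < time(20, 0)
```

A scheduler would call this after computing the cadence offset, deferring any out-of-window send to the next 08:00 local.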
Auto‑Cancel and Suppress on Completion or Opt‑Out
Given pending reminders exist for an abandoned checkout When a successful payment is completed for the same bundle/episode Then immediately cancel all pending reminders and mark the abandonment as Recovered When the patient replies STOP to SMS or clicks Unsubscribe in email Then record channel-specific opt-out and stop all future reminders on that channel When a reminder hard-bounces or fails delivery 3 times in 72 hours Then suppress further sends to that address/number and flag the contact for review
Help Prompt and Clinic Contact on Reminder Landing
Given a patient opens a reminder resume link When the landing screen loads Then show a lightweight "Need help?" prompt with tap-to-call clinic phone and tap-to-email clinic address And display a short FAQ link explaining common checkout issues And log Help Viewed, Call Tapped, and Email Tapped events without storing message contents And ensure the help prompt does not block or delay the Resume Checkout action (<200ms additional load time)
Conversion Analytics and Attribution
Given clinic staff opens the Abandoned Checkout Recovery dashboard When a date range and clinic are selected Then display counts for Abandoned, Reminders Sent, Unique Patients Reminded, Recoveries, Recovery Rate, Revenue Recovered, and Median Time-to-Recovery And allow filtering by bundle/program and channel (SMS/email) And attribute a Recovery when a payment occurs within 7 days of abandonment and the patient either clicked a reminder link within the prior 48 hours or used the same device/session fingerprint And provide CSV export of the visible dataset And restrict data visibility to the viewing clinic; MoveMate admins may view cross-clinic aggregates without patient identifiers
Token Security, Expiry, and Abuse Protections
Given reminder resume links use signed tokens When a token is generated Then make it single-use, bound to patient and abandoned session, and set to expire per policy And rotate the token on every send; prior tokens become invalid immediately When an expired or invalid token is used Then show a generic safe error with no PHI and offer a request-new-link action And rate-limit link validations to 10 attempts per minute per IP and 5 invalid attempts per token before temporary lockout (15 minutes) And do not log tokens in plaintext; audit logs store hashed references, sender, channel, and timestamps

Milestone Billing

Align payments to progress, not just time. Charge in installments tied to clinical milestones (phase gates, adherence streaks, pain score drops) or safe time triggers. If Safety Sentinel pauses a plan, charges auto‑hold until resumed. Patients see exactly why and when charges occur, boosting trust; clinics get predictable cash flow with fewer disputes.

Requirements

Milestone Definition Engine
"As a clinician, I want to define billing milestones based on patient progress indicators so that charges align with clinical outcomes rather than arbitrary dates."
Description

Configurable engine to define billable milestones from clinical signals captured by MoveMate (e.g., phase gates, adherence streaks, pain score deltas, rep totals, plan completion percentages). Supports rule authoring with thresholds, time windows, and boolean logic (e.g., 10-day adherence streak AND ≥2-point pain reduction). Allows multiple milestones per plan, ordering, dependencies, and versioning so changes don’t retroactively alter previously achieved milestones. Exposes a validation/simulation mode to test milestone rules against historical patient data. Emits normalized milestone events with patient, plan, evidence, timestamp, and rule version for downstream billing.

Acceptance Criteria
Author milestone rule with thresholds, time windows, and boolean logic
Given I have access to the Milestone Definition Engine authoring interface When I create a rule named "Phase1_Adherence_PainCombo" with expression: adherence_streak >= 10 AND pain_delta <= -2 within 14 days And I map adherence_streak to workout logs and pain_delta to PROMs with a 14-day rolling window And I save the rule Then the system validates syntax and semantics and returns "Valid" within 2 seconds And persists the rule with a unique rule_id and version = 1 And the rule’s definition (expression, thresholds, window) is immutable for version = 1
Enforce milestone ordering and dependencies during evaluation
Given a treatment plan defines milestones M1 and M2 where M2 depends on M1 And the patient meets the criteria for M2 but has not achieved M1 When the engine evaluates milestones Then no milestone event is emitted for M2 And when the patient later meets M1’s criteria Then an event for M1 is emitted once And on the next evaluation cycle, if M2’s criteria are still satisfied, an event for M2 is emitted once
Preserve rule versioning and non-retroactive behavior
Given a milestone event E1 was emitted under rule_id=R, version=1 When an editor updates the rule R and publishes version=2 Then event E1 remains unchanged with rule_version=1 And subsequent evaluations use version=2 for new decisions And previously achieved milestones under version=1 are not re-emitted or invalidated by version=2 And the system stores both versions with distinct immutable checksums
Run simulation against historical data without side effects
Given a cohort of historical patients, a date range, and a set of milestone rules When I run the engine in Simulation mode Then the system returns per-patient hypothetical milestone hits with timestamps and evidence details And no production milestone events are emitted or persisted And the simulation output includes counts by milestone and rule_version And the operation completes within 60 seconds for 10,000 patients over 6 months of data
Emit normalized milestone events with evidence and version
Given a patient satisfies a milestone rule during evaluation When the engine emits the milestone event Then the event includes: patient_id, plan_id, milestone_id, rule_id, rule_version, achieved_at (ISO 8601 UTC), evidence[] with signal_name, value, observed_at, and evaluation_window And the event is persisted and published to the milestone events stream And the payload excludes PII beyond stable identifiers And the event conforms to the published JSON schema and passes schema validation
Support core clinical signals and handle missing data
Given a rule combines signals: phase_gate == "Phase 2" AND adherence_streak >= 7 AND pain_delta <= -1 AND rep_total >= 500 AND plan_completion_pct >= 25 within 30 days When the engine evaluates the rule for a patient Then each signal is computed from MoveMate telemetry using documented definitions and units And if any required signal is missing in the window, the rule evaluates to false and records a "missing_data" reason code in logs/telemetry And numeric comparisons use inclusive thresholds with two-decimal precision rounding at comparison time
Ensure idempotent and concurrency-safe event emission
Given two concurrent evaluators process the same patient-plan-milestone When both detect the criteria as satisfied Then only one event is persisted due to a uniqueness guarantee on (patient_id, plan_id, milestone_id, rule_version) And retries from transient failures do not create duplicate events And re-evaluations on subsequent cycles do not re-emit already achieved milestones
Installment Schedule Builder
"As a clinic admin, I want to set installment rules tied to milestones and safe time triggers so that revenue is consistent and fair during variable patient progress."
Description

Rule-based scheduler that maps installments to milestones and safe time triggers (e.g., charge on Day 14 if no milestone is reached). Supports an initial deposit, per-milestone amounts, a maximum plan cap, a minimum interval between charges, grace periods, and proration when plans are edited. Handles timezone normalization, weekends/holidays, and retry windows. Performs pre-authorization at plan start when configured and verifies that a payment method is available. Provides plan-level and patient-level overrides and a dry-run preview showing the expected charge timeline under different progress scenarios.

Acceptance Criteria
Initial Deposit, Milestone Amounts, and Plan Cap
Given a plan configured with an initial deposit amount D, milestone charges (e.g., M1=A1, M2=A2), and a maximum plan cap C When the schedule is generated upon plan activation Then a charge for D is scheduled at activation time And milestone-triggered charges are scheduled upon each milestone attainment And the cumulative captured amount never exceeds C And if a scheduled charge would exceed C, the charge amount is reduced to the remaining cap and no further charges are scheduled And the schedule stores trigger type (milestone ID) and computed amount for each installment
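The cap-reduction rule in this criterion reduces to a small pure function (names illustrative):

```python
def next_charge_amount(requested: float, captured_total: float,
                       cap: float) -> float:
    """Clamp a scheduled charge so cumulative captures never exceed the cap.

    Returns the reduced amount; 0.0 means the cap is exhausted and no
    further charges should be scheduled.
    """
    remaining = max(0.0, cap - captured_total)
    return min(requested, remaining)
```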
Safe-Time Trigger with Minimum Charge Interval
Given a plan defines a safe-time trigger (e.g., Day N if no milestone reached) and a minimum interval I days between charges When no milestone is reached by Day N Then a charge is scheduled on the earliest date that satisfies both Day N and at least I days since the last captured charge And if a milestone is reached before Day N, the safe-time charge for Day N is not scheduled And if a prior charge occurred fewer than I days earlier, the safe-time charge is postponed until the I-day interval elapses And the schedule records the safe-time trigger as the charge reason
Timezone Normalization and Weekend/Holiday Deferral
Given a patient timezone TZp and a clinic timezone TZc When charge timestamps are computed Then charge timestamps are stored in UTC and displayed in TZp And date-boundary logic (e.g., Day N) uses TZp midnight boundaries And if a computed charge date falls on a weekend or clinic holiday, it is deferred to the next business day at 09:00 in TZp And daylight saving transitions do not create duplicate or skipped charges (single intended installment per trigger) And displayed times indicate TZp explicitly
Pre-Authorization and Payment Method Verification
Given configuration requires pre-authorization of amount P at plan start When the plan is activated Then the system verifies a default payment method exists and supports pre-authorization And a pre-authorization for P is created once And if verification or pre-authorization fails, the schedule remains pending with no installments activated and an actionable error is surfaced And on success, no captures occur until a charge trigger fires And the pre-authorization hold is released or adjusted per gateway policy upon first capture And the payment method used is recorded on the schedule
Charge Retry Windows and Idempotency
Given a scheduled charge attempt fails for a retriable reason and the plan defines a retry window W hours, maximum retries R, and backoff strategy B When retries are initiated Then retries occur within W using strategy B up to R attempts And each attempt is idempotent using a stable key per schedule item to prevent duplicate captures And on success, the installment is marked Paid and no further retries occur And if retries are exhausted, the installment is marked Failed and the next triggers proceed independently And for non-retriable errors, no retries are attempted and a notification is emitted And all attempts and outcomes are audit-logged
Plan- and Patient-level Overrides and Grace Periods
Given plan-level defaults exist (deposit D, amounts, intervals, cap C, grace period G) and patient-level overrides are specified for selected fields When generating a patient's schedule Then override values are applied for the overridden fields and plan defaults for all others And an audit trail records the source (plan vs patient), author, and timestamp for each applied value And grace period G defers capture after a milestone is reached until G full days elapse, without scheduling a duplicate charge And editing overrides re-generates only future (unpaid) installments without altering posted transactions
Dry-Run Preview Across Progress Scenarios
Given a user runs a dry-run preview selecting scenarios (e.g., on-time milestones, delayed milestones, no milestones) When the preview is executed Then the system produces for each scenario a deterministic timeline listing projected charge date/time (patient local), amount, trigger type (milestone ID or safe-time), any weekend/holiday deferrals, and application of intervals, grace, and cap And preview has no side effects (no authorizations, captures, or notifications) And preview output can be exported to CSV/JSON and matches on-screen data And re-running the same inputs yields identical results
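The weekend/holiday deferral rule from the timezone criterion above can be sketched as follows; naive datetimes stand in for patient-local (TZp) times, and the 09:00 deferral time matches the criterion:

```python
from datetime import date, datetime, time, timedelta

def apply_business_day_rule(scheduled: datetime,
                            holidays: set[date]) -> datetime:
    """Defer a charge that lands on a weekend or clinic holiday.

    Weekday non-holiday charges keep their original time; deferred charges
    move to the next business day at 09:00 patient-local.
    """
    d = scheduled.date()
    if d.weekday() < 5 and d not in holidays:
        return scheduled
    while d.weekday() >= 5 or d in holidays:  # 5=Sat, 6=Sun
        d += timedelta(days=1)
    return datetime.combine(d, time(9, 0))
```

In a real implementation the input would be a zone-aware datetime computed from TZp midnight boundaries and stored in UTC, per the criterion.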
Safety Sentinel Billing Hold
"As a patient, I want billing to pause automatically when my plan is on safety hold so that I’m not charged during clinical pauses."
Description

Tight integration with Safety Sentinel so that any safety-induced plan pause automatically places upcoming charges on hold. Ensures holds propagate instantly, canceling queued charges and suppressing retries until the plan resumes. On resume, recalculates the schedule based on the latest milestone state and time offsets, with options to skip, defer, or re-sequence missed installments. Logs hold/resume reasons and notifies patients and clinic staff. Guards against edge cases such as multiple rapid pauses, plan edits during holds, and overlapping milestones.

Acceptance Criteria
Auto-Hold on Safety Pause
Given a treatment plan with at least one future scheduled installment charge, When Safety Sentinel emits a pause event for that plan, Then the plan’s billing_hold flag is set within 5 seconds of event receipt, And all queued future charges and pending retries for that plan are canceled within 5 seconds with cancel_reason="safety_hold" and zero net capture, And any open payment authorizations for the plan are voided within 60 seconds, And a Hold record is persisted with sentinel_event_id, plan_id, started_at (UTC), and reason_code.
Retry Suppression During Hold
Given a plan is on billing_hold=true, When a scheduled billing cycle, retry backoff, or gateway retry callback occurs, Then no new authorization or capture attempts are created for that plan, And gateway callbacks are acknowledged idempotently (HTTP 200) without creating transactions, And the job scheduler does not enqueue new billing jobs for the plan while hold persists, And monitoring reports 0 charge attempts for the plan per day while hold=true.
Resume Recalculation and Installment Options
Given a plan had installments missed during a hold, And the clinic’s default resume_strategy is configured (skip|defer|resequence), When the hold is lifted, Then the scheduler recomputes the remaining installment schedule within 30 seconds using the latest milestone state and hold duration, And applies the selected strategy:
- skip: missed installments are marked skipped and will not be charged
- defer: missed installments are shifted forward by the hold duration
- resequence: remaining installments are evenly distributed to align with milestones and remaining plan duration
And no installment is scheduled in the past, And overlapping milestone conflicts are resolved according to priority rules (highest priority, then earliest due), And the next charge_at timestamp and rationale are surfaced via API and dashboard.
Audit Logging of Hold/Resume Events
Given a hold is placed or lifted, Then an immutable audit log entry is created containing: event_type (hold|resume), plan_id, sentinel_rule_id, reason_code, actor (system|user_id), occurred_at (ISO 8601 UTC), previous_status->new_status, affected_charge_ids[], and checksum, And logs are write-once and queryable for at least 1 year, And PII/PHI beyond plan and billing metadata is excluded or redacted.
Patient and Staff Notifications
Given a hold or resume event occurs, Then the patient receives an in-app notification immediately and an email/SMS within 2 minutes summarizing why charges are paused/resumed and the next expected billing date, And assigned clinic staff receive an in-app alert and email within 2 minutes, And notifications coalesce if multiple events occur within 60 seconds into a single summary, And notification delivery status (sent, delivered, failed) is recorded with timestamps, And notifications contain no diagnosis details or exercise data.
Idempotency for Rapid Sequential Pauses
Given multiple pause events (>=2) for the same plan arrive within 10 seconds or out-of-order, When they are processed concurrently, Then only one active hold exists for the plan, And queued charges are canceled at most once (no duplicate cancels), And system reaches a consistent state within 5 seconds after the final event, And the behavior is verified under a load test with at least 10 concurrent pause events.
Plan Edits While on Hold
Given a plan is on billing_hold=true, When a clinician edits milestones, plan duration, or installment amounts during the hold, Then no new charges are scheduled or retried as a result of the edit, And the edit is captured in change history with editor_id, fields_changed, and edited_at (UTC), And on resume, the recalculation uses the latest edited configuration, And the scheduling preview/API indicates the plan is on hold and shows the post-resume schedule based on the configured resume_strategy.
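The single-active-hold invariant for rapid or duplicate pause events can be sketched with event-ID deduplication. The in-memory dict and set are stand-ins for a persistent store with a uniqueness constraint and transactional writes:

```python
def place_hold(plan_id: str, event_id: str,
               holds: dict, seen_events: set) -> bool:
    """Idempotently place a billing hold for a Safety Sentinel pause event.

    Returns True only when a new hold was actually placed; duplicate or
    out-of-order pause events leave exactly one active hold and never
    cancel queued charges twice.
    """
    if event_id in seen_events:
        return False  # duplicate delivery: acknowledge, do nothing
    seen_events.add(event_id)
    if plan_id in holds:
        return False  # a hold is already active for this plan
    holds[plan_id] = {"event_id": event_id, "status": "active"}
    return True  # caller now cancels queued charges exactly once
```

The charge-cancellation and authorization-void steps from the criteria would run only on a True return, which is what makes them at-most-once.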
Patient Billing Timeline & Explanations
"As a patient, I want a clear timeline showing when and why I will be charged so that I can trust and plan for my expenses."
Description

Patient-facing timeline that clearly shows upcoming, pending, and completed charges with plain-language explanations of the trigger (e.g., “Adherence streak reached 10 days”). Displays milestone progress, expected dates for safe time triggers, plan cap, and any holds due to Safety Sentinel. Provides proactive notifications before and after charges, localized currency display, and accessible design. Includes a self-serve portal to update payment method, view receipts, and understand how charges map to clinical progress to build trust and reduce support load.

Acceptance Criteria
Billing Timeline Sections and Plain‑Language Triggers
Given a patient account with at least one upcoming, one pending, and one completed charge When the Billing Timeline loads Then charges are grouped under Upcoming, Pending, and Completed sections And each charge item displays amount, status, trigger explanation in plain language (e.g., "Adherence streak reached 10 days"), and timestamp And upcoming items display an expected charge date/time And completed items display a receipt link And empty states display an informative message when a section has no items
Milestone Progress, Plan Cap, and Safe Time Forecast
Given a treatment plan with clinical milestones, a plan cap, and safe time triggers When the patient views the Billing Timeline Then each milestone shows current progress (e.g., 7/10 days, 70%) and its effect on billing eligibility And time‑triggered entries show an expected charge date/time based on plan rules And the remaining plan cap (amount and count) is displayed with a visual progress indicator And any change to progress or plan rules updates the forecast within 60 seconds of data change
Safety Sentinel Hold Visibility and Auto‑Hold Behavior
Given Safety Sentinel pauses the patient’s plan When the patient opens the Billing Timeline during the pause Then all upcoming and pending charges display status "On Hold" with reason "Safety Sentinel pause" And no payment authorizations or captures occur while On Hold And pre/post charge notifications are suppressed during the hold And when the plan resumes, affected charges return to their prior state and the timeline reflects the resumed status within 60 seconds
Pre‑ and Post‑Charge Notifications
Given a time‑triggered charge scheduled for a specific date/time When it is 48 hours and 2 hours before the scheduled time Then the patient receives pre‑charge notifications (in‑app and email/SMS if enabled) including amount, currency, trigger, scheduled time, and manage‑payment link Given a milestone‑triggered charge becomes eligible When eligibility is detected Then a pre‑charge notification is sent immediately and charge capture does not occur fewer than 5 minutes after this notification Given any charge is successfully captured When capture completes Then a receipt notification with a receipt link is sent within 2 minutes
Localized Currency and Number Formatting
Given the patient’s locale and currency settings When amounts are displayed in the Billing Timeline and notifications Then values use the correct currency symbol, ISO code, decimal precision, and thousands/decimal separators for the locale And switching the locale updates all displayed amounts and formats without a page reload And if a currency is unsupported, amounts default to USD with an inline notice explaining the fallback
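The unsupported-currency fallback above can be sketched as follows. The currency table is illustrative and uses en-US separators; real localization would go through a CLDR-based library such as Babel or ICU:

```python
# Hypothetical supported-currency table: symbol and minor-unit decimals.
SUPPORTED = {"USD": ("$", 2), "EUR": ("€", 2), "JPY": ("¥", 0)}

def format_amount(minor_units: int, currency: str) -> tuple[str, bool]:
    """Format a minor-unit amount (e.g., cents) for display.

    Unsupported currencies fall back to USD and return a flag so the UI
    can render the inline fallback notice required by the criterion.
    """
    fallback = currency not in SUPPORTED
    symbol, decimals = SUPPORTED.get(currency, SUPPORTED["USD"])
    value = minor_units / (10 ** decimals)
    return f"{symbol}{value:,.{decimals}f}", fallback
```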
Accessibility and Readability Compliance
Given WCAG 2.1 AA requirements When navigating the Billing Timeline with keyboard only Then all interactive elements are reachable in a logical order with visible focus indicators And screen readers announce section headers, charge statuses, amounts, and trigger explanations via correct roles and labels And dynamic updates (e.g., holds, progress changes) are announced via ARIA live regions And text and UI elements meet a minimum 4.5:1 contrast ratio and tap targets are at least 44x44px
Self‑Serve Payments and Receipts Portal
Given the patient opens the self‑serve portal When updating a payment method Then inputs validate format in real time and on submit, and on success the new method becomes default for future charges And on failure (e.g., card declined), an actionable error with retry guidance is shown and no charges are attempted When viewing receipts Then completed charges list with downloadable PDF receipts and a "Why was I charged?" explanation that maps to the related milestone/safe time trigger and timestamp And all portal operations (update method, load receipts) complete within 5 seconds on a typical 4G connection
Payment Gateway Integration & Dunning
"As a billing manager, I want robust payment processing with automated retries and multiple payment methods so that collections are reliable with minimal manual work."
Description

Integration with payment processors (e.g., Stripe) supporting cards, ACH, HSA/FSA, Apple Pay/Google Pay, and tokenized vault storage. Implements SCA/3DS flows, idempotent charge creation, webhook handling, and robust error mapping. Provides configurable dunning with tiered retry schedules, smart routing for soft vs. hard declines, card updater services, and patient notifications. Supports partial captures, refunds, and voids tied to milestone events. Ensures PCI scope reduction, secure key management, and reconciliation exports for finance.

Acceptance Criteria
SCA/3DS and Wallet/ACH Authorization for Milestone Charge
Given a milestone invoice with a card requiring SCA, When the patient authorizes the charge, Then 3DS is initiated and the charge is only captured after a successful challenge or frictionless authentication with the result recorded on the invoice. Given Apple Pay or Google Pay is selected, When the wallet returns a payment token, Then the charge is created using the tokenized PAN, SCA is satisfied per wallet rules, and the invoice transitions to Paid within 5 seconds of processor success. Given ACH is selected, When the bank account is verified (instant or micro‑deposit), Then the debit is initiated, the invoice shows Pending Settlement, and final settlement status is updated via webhook within 3 business days. Given an HSA/FSA card is used, When eligibility rules are evaluated, Then ineligible items are declined with a clear patient message and eligible items are approved with HSA-compliant receipt fields (merchant name, date, amount, description).
Idempotent Charge Creation on Network Retries
Given a milestone invoice ID and idempotency key, When a charge request is retried within 24 hours, Then only one processor charge exists and the API returns the original result with the same gateway transaction ID. Given concurrent duplicate submissions occur, When two or more requests are in flight simultaneously, Then zero duplicate captures are created and a single successful charge event is logged. Given a previous attempt failed due to a transient error, When a new idempotency key is used, Then a new charge is created and all events are correlated to the same invoice.
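The retry behavior above can be sketched as a key-to-result cache with an in-flight guard. This is a minimal in-memory illustration, not the production design — a real service would persist keys durably with the 24-hour TTL; `IdempotencyStore` and `charge_fn` are hypothetical names.

```python
import threading

class IdempotencyStore:
    """In-memory sketch of idempotent charge creation; a real deployment
    would persist keys with a 24-hour TTL, per the criteria above."""
    def __init__(self):
        self._lock = threading.Lock()
        self._results = {}    # idempotency_key -> cached charge result
        self._pending = set() # keys with a request currently in flight

    def create_charge(self, idempotency_key, invoice_id, charge_fn):
        with self._lock:
            if idempotency_key in self._results:
                # Retry within the window: return the original result,
                # including the same gateway transaction ID.
                return self._results[idempotency_key]
            if idempotency_key in self._pending:
                raise RuntimeError("duplicate request in flight")
            self._pending.add(idempotency_key)
        try:
            result = charge_fn(invoice_id)  # exactly one processor call
            with self._lock:
                self._results[idempotency_key] = result
            return result
        finally:
            with self._lock:
                self._pending.discard(idempotency_key)
```

A failed `charge_fn` leaves no cached result, so a retry under a fresh key creates a new charge, matching the transient-error criterion.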
Webhook Verification and Resilient Event Handling
Given a payment, refund, dispute, or payout webhook is received, When the signature is verified and the event type is supported, Then the event is processed exactly once and applied to the correct tenant and invoice. Given the processor is temporarily unavailable, When delivery retries occur, Then the system retries with exponential backoff for up to 72 hours and moves unprocessed events to a dead‑letter queue with alerts within 5 minutes. Given an event is processed, When completion occurs, Then 99.9% of events are applied within 10 seconds end‑to‑end and a tamper‑evident audit log is recorded with old→new status.
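The verify-then-apply-exactly-once flow can be shown in a few lines. This is a hedged sketch assuming an HMAC-SHA256 signature scheme (processors vary in exact format) and an in-memory `seen` set standing in for a durable processed-event store; all names are illustrative.

```python
import hashlib
import hmac

def verify_and_process(event_id, payload, signature, secret, seen, apply_fn):
    """Sketch of resilient webhook handling: verify the signature first,
    then apply each event at most once so redeliveries are harmless."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return "rejected"      # bad signature: never processed
    if event_id in seen:
        return "duplicate"     # redelivery: skip, preserving exactly-once
    apply_fn(event_id)         # update the correct tenant/invoice state
    seen.add(event_id)
    return "processed"
```

Events that exhaust the 72-hour retry window would then be moved to a dead-letter queue with an alert, per the criterion above.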
Dunning and Decline Routing with Patient Notifications
Given a soft decline (e.g., insufficient funds), When the first attempt fails, Then up to 4 retries occur over 14 days (1h, 24h, 3d, 10d), the card updater service runs before each retry, and the patient is notified before and after each attempt. Given a hard decline (e.g., lost/stolen or pickup card), When the attempt fails, Then no automatic retries occur, the payment method is flagged unusable, the patient is prompted to add a new method, and the clinic dashboard shows Action Required within 5 minutes. Given a treatment plan is paused by Safety Sentinel, When an invoice is in dunning, Then all retries are auto‑held within 5 minutes and only resume after plan resume, preserving schedule offsets. Given a valid new payment method is added during a dunning window, When a retry is pending, Then the next attempt runs within 15 minutes and, on success, the invoice status becomes Paid with notifications to patient and clinic.
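The soft-decline schedule (1h, 24h, 3d, 10d) and the hard-decline/safety-hold exceptions above reduce to a small scheduling function. A minimal sketch, assuming `decline_type` and `plan_paused` are computed upstream from processor decline codes and Safety Sentinel state:

```python
from datetime import datetime, timedelta

# Retry offsets from the first failed attempt, per the soft-decline
# schedule above (4 retries over 14 days: 1h, 24h, 3 days, 10 days).
SOFT_DECLINE_OFFSETS = [timedelta(hours=1), timedelta(hours=24),
                        timedelta(days=3), timedelta(days=10)]

def next_retry_at(first_failure, attempts_made, decline_type, plan_paused):
    """Return the next scheduled retry time, or None when no retry should
    run (hard decline, exhausted schedule, or Safety Sentinel hold)."""
    if decline_type == "hard" or plan_paused:
        return None
    if attempts_made >= len(SOFT_DECLINE_OFFSETS):
        return None
    return first_failure + SOFT_DECLINE_OFFSETS[attempts_made]
```

Before each returned retry time, the card updater service would run and the patient would be notified, as the criterion specifies.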
Tokenized Vault Storage and PCI Scope Reduction
Given a patient enters card details, When checkout is presented, Then PAN/CVV entry occurs only in processor-hosted fields and no raw PAN/CVV is stored or logged by MoveMate. Given tokenized payment data is stored, When accessed by services, Then only gateway or network tokens are used; encryption keys are managed in KMS, rotated at least every 90 days, and access is auditable. Given a security assessment is performed, When PCI scope is evaluated, Then the implementation qualifies for SAQ A with evidence of no PAN in logs, error traces, backups, or analytics payloads.
Partial Captures, Refunds, and Voids Tied to Milestones
Given an authorization of $X exists for a milestone, When the milestone is partially completed for $Y ≤ $X, Then a partial capture of $Y occurs and the remaining amount is released within 24 hours. Given a captured charge exists, When a milestone is rolled back or over-collected, Then a full or partial refund can be issued with reason codes mapped to the milestone ID and appears on the patient receipt. Given a charge is authorized but not captured, When a void is requested before settlement cutoff, Then the authorization is voided and the invoice returns to Open with no funds captured. Given multiple adjustments are requested, When concurrent capture/refund operations occur, Then operations are serialized per charge and the final balance never goes below $0 or above the authorized amount.
Finance Reconciliation Export with Processor Mapping
Given daily reconciliation is scheduled, When the export runs at 02:00 UTC, Then CSV and SFTP files are produced containing invoice ID, milestone ID, processor charge ID, fees, gross, net, payout ID/date, and GL account mappings. Given processor payouts are received, When exports are validated, Then the sum of net amounts matches payout totals for the period within $0.01 variance or an alert is raised to finance within 15 minutes. Given an export is re-run for the same date range, When idempotency is applied, Then identical content and checksum are produced and duplicate SFTP files are not created.
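The re-run idempotency criterion (identical content and checksum for the same date range) is easiest to satisfy by making the export deterministic. A sketch under that assumption — sort on a stable key before writing, then hash the bytes; field names follow the list above:

```python
import csv
import hashlib
import io

def build_reconciliation_csv(rows):
    """Deterministic export: stable sort before writing so a re-run over
    the same date range yields byte-identical content and checksum."""
    fields = ["invoice_id", "milestone_id", "processor_charge_id",
              "fees", "gross", "net", "payout_id", "payout_date",
              "gl_account"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, lineterminator="\n")
    writer.writeheader()
    key = lambda r: (r["invoice_id"], r["processor_charge_id"])
    for row in sorted(rows, key=key):
        writer.writerow(row)
    content = buf.getvalue()
    return content, hashlib.sha256(content.encode()).hexdigest()
```

Comparing the checksum before upload is one way to avoid creating duplicate SFTP files on a re-run.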
Consent & Regulatory Compliance
"As a clinic owner, I want explicit patient consent and compliant billing flows so that we meet legal requirements and avoid disputes."
Description

Explicit in-app consent flow for milestone-based billing that itemizes installment logic, caps, refund/cancellation policies, and safe time triggers. Captures timestamp, jurisdiction, language, and plan version, with easy access to terms and revocation controls. Aligns with healthcare billing norms and privacy constraints, ensuring PHI is not exposed in payment artifacts beyond necessity. Supports regional regulations (e.g., SCA, state consumer protection) and retention policies. Presents compliant receipts and disclosures to minimize disputes and audit risks.

Acceptance Criteria
Consent Screen Itemization & Safe Triggers Disclosure
Given a patient enrolls in milestone-based billing When the consent screen is presented Then the screen itemizes: each milestone and charge amount, maximum billing cap per plan, refund/cancellation policy summary with link to full terms, safe time triggers and conditions, Safety Sentinel auto-hold behavior, and dispute/contact info And the Agree action is disabled until all disclosure sections are viewed and required acknowledgments are checked And jurisdiction-specific notices are displayed based on detected region and selected language
Consent Capture & Audit Metadata
Given the user taps Agree to proceed with milestone-based billing When consent is recorded Then the system stores an immutable record with: user ID, plan ID and version (semver), app version, UTC timestamp (ISO 8601), jurisdiction (ISO 3166-2), language (IETF BCP 47), IP and device fingerprint hash, consent text hash (SHA-256), and agent (self/clinician) And the record is retrievable in an audit log within 5 seconds and exportable as PDF and JSON And the record persists across app reinstall and adheres to regional retention policy
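The consent record above can be assembled as a plain dictionary before being written to immutable storage. A sketch with the IP/device-fingerprint and app-version fields omitted for brevity; storing the consent text as a SHA-256 digest makes later tampering with the agreed terms detectable:

```python
import hashlib
from datetime import datetime, timezone

def build_consent_record(user_id, plan_id, plan_version, consent_text,
                         jurisdiction, language, agent):
    """Assemble the audit metadata described above (subset of fields).
    Timestamp is UTC ISO 8601; consent text is hashed, not stored raw."""
    return {
        "user_id": user_id,
        "plan_id": plan_id,
        "plan_version": plan_version,       # semver, e.g. "2.1.0"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "jurisdiction": jurisdiction,       # ISO 3166-2, e.g. "US-CA"
        "language": language,               # IETF BCP 47, e.g. "en-US"
        "consent_text_sha256":
            hashlib.sha256(consent_text.encode()).hexdigest(),
        "agent": agent,                     # "self" or "clinician"
    }
```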
Regional Auth, Disclosures, and Retention Compliance
Given a region requires SCA/3DS and specific consumer-protection disclosures When enabling milestone-based billing Then the flow invokes SCA (e.g., 3DS2) and completes only on successful challenge or valid exemption; otherwise billing is not enabled And region-specific notices, cooling-off periods, and cancellation rights are displayed and enforced And retention rules are applied so consent and billing artifacts are stored for the required minimum and purged or anonymized after the maximum allowed per jurisdiction with an auditable retention log
PHI Minimization in Payment Artifacts
Given a charge, invoice, receipt, or payment-processor metadata event When payment artifacts are generated or transmitted Then no PHI (e.g., diagnosis, pain scores, images, clinical notes) is present And only allowed fields are included: pseudonymous patient ID, plan code, milestone ID/name, amount, currency, timestamps, payment instrument last4/brand, and receipt URL And automated static/dynamic checks block transmission and log incidents if disallowed fields are detected
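The "only allowed fields" rule is naturally an allow-list check run before any artifact leaves the system. A minimal sketch — field names here are illustrative stand-ins for the allowed set listed above:

```python
ALLOWED_PAYMENT_FIELDS = {
    "patient_pseudo_id", "plan_code", "milestone_id", "milestone_name",
    "amount", "currency", "created_at", "card_last4", "card_brand",
    "receipt_url",
}

def check_payment_artifact(artifact):
    """Allow-list check from the criterion above: any field outside the
    allowed set (e.g. diagnosis, pain scores) blocks transmission and
    is returned for incident logging."""
    disallowed = sorted(set(artifact) - ALLOWED_PAYMENT_FIELDS)
    return (len(disallowed) == 0, disallowed)
```

In practice this check would run both statically (schema validation) and dynamically (at serialization time), per the criterion.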
Consent Revocation & Billing Halt
Given a patient opens Consent & Billing settings When they select Revoke Consent and confirm Then future milestone charges are blocked immediately and schedules are disabled And a revocation record with UTC timestamp, jurisdiction, language, and optional reason is stored and visible in the audit log And resuming billing requires re-consent through the compliant consent flow
Compliant Receipts & Transparent Charge Rationale
Given a milestone charge executes When the payment succeeds Then a receipt is generated within 60 seconds including: milestone achieved or time-trigger event, date/time, amount, cumulative total vs cap, remaining cap, plan version, terms link, clinic legal entity, and dispute contact And the receipt is localized to the user’s language and currency and is accessible in-app and via email And refunds or voids produce credit notes linked to the original receipt and consent record
Safety Pauses Auto-Hold & Time-Trigger Handling
Given Safety Sentinel pauses a care plan When billing evaluation runs Then all pending and future milestone or time-trigger charges are placed on hold until the pause is lifted And the patient-facing billing timeline shows the hold status and reason And on resume, only eligible charges per terms are re-evaluated; no retroactive charges apply for paused periods; time-triggers exclude paused duration
Audit Trail, Receipts, and Dispute Handling
"As support staff, I want detailed billing records and a clear dispute workflow so that I can resolve patient questions quickly and fairly."
Description

End-to-end audit trail linking each charge to the underlying milestone event, rule version, and clinical evidence snapshot (e.g., adherence metrics, pain scores) with immutable timestamps. Generates itemized receipts explaining the trigger and amount. Provides a dispute workflow with reason codes, charge holds, evidence packaging, partial refunds, and SLA tracking. Supports exportable logs for clinics, role-based access controls to protect PHI, and tooling for support staff to annotate cases and resolve issues quickly and fairly.

Acceptance Criteria
Immutable Charge Audit Linking to Milestone and Rule Version
Given a charge is generated by Milestone Billing When the charge is created Then an audit record is appended containing charge_id, patient_id, plan_id, milestone_id, milestone_type, rule_version_id, trigger_type, evidence_snapshot_id, created_at (ISO 8601 UTC ms), actor, event_type="charge.created" Given an existing audit record When any update attempt is made via API or UI Then the system rejects with 409 Conflict, no existing record content changes, and a new audit event "audit.write_denied" is appended Given the audit log When integrity verification runs Then each record's content_hash matches its stored value and the chain_hash validates sequence order Given clinic admin selects a date range and patient filter When exporting audit logs Then CSV and JSON exports with the above fields are produced within 60 seconds, capped at 100k rows per file, with UTC timestamps and a file checksum Given a Safety Sentinel pause or resume event occurs for a plan with pending charges When the event is received Then an audit event "charge.hold_applied" or "charge.hold_released" is appended referencing the affected charge_ids
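The `content_hash`/`chain_hash` integrity check above is a standard hash chain: each record hashes its own content, and the chain hash folds in the previous record so reordering or editing any entry invalidates everything after it. A minimal sketch over in-memory records:

```python
import hashlib
import json

def append_audit_event(chain, event):
    """Append-only, tamper-evident log: store a content hash plus a
    chain hash over the previous record, per the criteria above."""
    content = json.dumps(event, sort_keys=True)
    content_hash = hashlib.sha256(content.encode()).hexdigest()
    prev = chain[-1]["chain_hash"] if chain else ""
    chain_hash = hashlib.sha256((prev + content_hash).encode()).hexdigest()
    chain.append({"event": event, "content_hash": content_hash,
                  "chain_hash": chain_hash})

def verify_chain(chain):
    """Recompute both hashes for every record; any mismatch in content
    or sequence order fails verification."""
    prev = ""
    for rec in chain:
        content = json.dumps(rec["event"], sort_keys=True)
        if hashlib.sha256(content.encode()).hexdigest() != rec["content_hash"]:
            return False
        expected = hashlib.sha256((prev + rec["content_hash"]).encode()).hexdigest()
        if rec["chain_hash"] != expected:
            return False
        prev = rec["chain_hash"]
    return True
```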
Itemized Receipt Generation and Delivery
Given a charge is finalized When payment is authorized or captured Then an itemized receipt is generated with receipt_id, invoice_number, line_items (description, qty, unit_price, amount), subtotal, tax, discounts, total, currency, payment_method_masked, rule_version_id, milestone_name, trigger_explanation, and event_timestamp Given the receipt is generated When delivery occurs Then it is available in-app immediately and delivered via email/SMS (if consented) within 60 seconds with localized currency and date format Given a receipt When the patient opens it Then a "Dispute this charge" link is present and routes to the dispute form pre-filled with charge_id
Dispute Initiation with Reason Codes and Auto-Hold
Given a posted charge When a user with role Patient, Clinician, or Billing Admin submits a dispute with a reason code from {Incorrect Milestone, Data Error, Financial Hardship, Service Not Rendered, Other} and optional notes/attachments Then the charge status becomes "On Hold" within 10 seconds, payment capture is voided if pending or a temporary hold is recorded if settled, and an audit event "dispute.opened" is appended Given a dispute is opened When notifications are sent Then clinic admins and support agents receive an in-app alert and email within 60 seconds containing dispute_id, charge_id, reason_code, SLA_due_at (48h), and patient identifier Given a disputed charge When subsequent billing runs Then no further installments are captured for that charge until the dispute is resolved
Evidence Packaging for Disputes
Given an open dispute When a support agent clicks "Package Evidence" Then within 30 seconds the system generates a ZIP containing: evidence.json (adherence metrics for the relevant window, pain score trend, milestone event details, rule_version_id, audit trail refs), clinical_snapshot.pdf (redacted per requester role), associated receipts, and a manifest with SHA-256 hashes Given an evidence package When a recipient downloads it Then access is logged and the package expires after 30 days or on dispute closure, whichever is later
Partial Refunds and Resolution Outcomes
Given a dispute is under review When a support agent approves a partial refund amount between 0 and the captured amount Then the system issues the refund within 60 seconds, updates charge status to "Partially Refunded", appends "refund.processed" to the audit log, and generates an amended receipt noting refund reason and amount Given a dispute is resolved When the outcome is recorded as Upheld, Denied, or Partial Then the SLA timer stops, dispute status transitions to Closed, and final notifications are sent to patient and clinic within 60 seconds Given a partial refund is processed When ledger reconciliation runs Then gross, net, fees, and refund amounts balance to zero variance for the charge
Role-Based Access Controls and Redaction
Given role-based access policies When a Support Agent views a dispute Then PHI fields (DOB, free-text clinical notes, raw media) are redacted by default and only summaries are shown; attempting to access raw PHI returns 403 unless an explicit case authorization token is present Given a Billing Admin accesses evidence When they open clinical_snapshot.pdf Then they see de-identified metrics (scores, counts) but no free-text clinical notes or media; Clinicians with an active treating relationship can view full clinical evidence Given any evidence or audit retrieval When access occurs Then access is checked against role and purpose, the outcome is logged with user_id, role, purpose, resource_id, timestamp, and IP, and three consecutive 403s for a user trigger a security alert
Support Case Annotation and Timeline
Given an open dispute When a Support Agent adds an annotation Then the system appends a time-stamped, immutable note with agent_id, visibility (internal|clinic|patient), and optional attachment; existing notes cannot be edited, only appended with corrections Given annotations exist When a Clinician or Patient views the timeline Then they see only notes with visibility matching their role, in reverse-chronological order with relative and absolute timestamps Given an annotation is added When notifications are configured for the dispute Then subscribers matching the note's visibility receive notifications within 60 seconds

Pause & Prorate

Life happens—surgery delays, travel, flares. Pause an episode with one tap, automatically extend end dates, and prorate fees or credits through Stripe. Resume picks up entitlements (visits, telehealth check‑ins) where they left off. Patients feel treated fairly, and staff avoid manual recalculations and refund gymnastics.

Requirements

One-Tap Pause & Resume Controls
"As a patient, I want to pause my rehab program with one tap so that I can handle interruptions without losing my remaining visits or being overcharged."
Description

Provide in-app controls for patients and staff to pause an episode instantly or schedule a pause with a single tap/click. Capture reason codes (e.g., surgery delay, travel, flare), effective date/time, and pause duration with validation against plan eligibility, outstanding invoices, and clinic policies. Show a confirmation with a live preview of timeline changes and billing/proration impact before committing. Ensure accessibility, localization, and offline support (queue and sync on reconnect). Enforce concurrency safeguards to prevent duplicate or overlapping pauses. On resume, automatically reinstate access and surface a summary of remaining entitlements. Integrate with scheduling to reflect paused state on calendars and prevent booking during pauses. Outcome: a frictionless, low-support workflow that avoids manual intervention.

Acceptance Criteria
Immediate One‑Tap Pause with Confirmation Preview
Given a user (patient or staff) views an active episode with no overlapping pause at the selected time And a reason code is selected and a start date/time (default now) and duration are provided And plan eligibility and outstanding invoice checks return no blocking errors When the user taps Pause Then a confirmation preview modal appears showing pause start/end timestamps, total extension to the episode end date, and the estimated billing/proration impact When the user confirms Then the episode status changes to Paused within 1 second and the pause (start/end, reason, user ID) is persisted server‑side with an audit entry And the UI updates to show Paused state and next key dates
Scheduled Pause with Eligibility, Policy, and Overlap Validation
Given a user schedules a pause for a future start When they enter start date/time and duration Then the system validates: pause length within clinic min/max, number of pauses per episode within policy, effective time normalized to UTC using clinic timezone, no overlap with existing or queued pauses, plan eligibility permits pause, and outstanding invoices do not block or require authorized override And any validation failures display inline errors and disable confirmation until resolved When validation passes and the user confirms Then a scheduled pause record is created, visible on the episode timeline and calendar, and cannot overlap with any other pause And idempotency prevents duplicate creation on repeated taps or retries
Proration and Credits via Stripe on Pause/Resume
Given Stripe is connected for the clinic and the episode has a subscription/package When a pause (immediate or scheduled) is confirmed Then a Stripe proration/adjustment request is sent with an idempotency key unique to the episode and pause window And charges/credits are computed per clinic policy and reflected on the next invoice or as an immediate credit note And the previewed amount matches the final Stripe adjustment within ±$0.01 And no duplicate charges/credits occur on network retries If the Stripe call fails Then the pause/resume is not committed, the user sees a blocking error with retry, and no local state is finalized When a scheduled pause is canceled before start or a pause is resumed early Then a reversing proration adjustment is issued and reflected within 60 seconds
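One way to make the idempotency key "unique to the episode and pause window", as the criterion requires, is to derive it from those inputs, so any network retry of the same adjustment reuses the same key. The key format below is an assumption, not a Stripe requirement:

```python
import hashlib

def proration_idempotency_key(episode_id, pause_start, pause_end):
    """Derive a key unique to the episode and pause window so retries
    cannot produce duplicate proration adjustments (format assumed)."""
    raw = f"{episode_id}:{pause_start}:{pause_end}"
    return "prorate_" + hashlib.sha256(raw.encode()).hexdigest()[:32]
```

Canceling a scheduled pause would use a distinct, similarly derived key for the reversing adjustment.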
One‑Tap Resume with Entitlements Summary
Given an episode is in Paused state When the user taps Resume and confirms Then access to exercises, telehealth check‑ins, and messaging is reinstated within 1 second And the remaining entitlements summary displays current counts for visits, telehealth check‑ins, and program days remaining And the episode end date reflects the total pause extension And scheduling is reopened from the resume time forward And an audit entry records before/after entitlement counts and timestamps
Offline Pause/Resume Queued and Synced
Given the device is offline or the network request times out When the user completes the Pause or Resume form and confirms Then the request is stored locally with a unique idempotency key and marked Pending Sync And the UI shows a Pending Sync banner and prevents duplicate submissions When connectivity is restored Then the request is sent automatically, applying the server response to update state, or a conflict dialog is shown if overlap/policy violations occur And if not synced within 24 hours, the user is prompted to retry or discard with no partial server changes applied
Accessibility and Localization Compliance
Given a user navigates with keyboard or screen reader When interacting with Pause/Resume controls and confirmation preview Then all controls are keyboard‑operable with visible focus, correct ARIA roles/labels, and live region announcements on state changes And color contrast is ≥ 4.5:1, touch targets are ≥ 44×44 px, and text scales to 200% without loss of content or functionality Given the app is set to a supported locale When viewing reasons, dates/times, and billing amounts Then all strings are localized, reason codes are translated, dates/times use locale formatting with clinic timezone context, and currency is formatted per locale
Scheduling Integration and Booking Prevention
Given an episode with a scheduled or active pause When viewing patient and staff calendars Then the paused interval is displayed as a blocked span with the reason tooltip And existing bookings inside the paused interval are flagged for reschedule and cannot be checked in without authorized override When attempting to book within a paused interval Then the system blocks creation with an explanatory error and suggests the next available times When a pause is resumed early, canceled, or dates are changed Then calendars and ICS feeds reflect changes within 60 seconds and affected parties receive notifications
Automatic Episode Extension & Milestone Recalculation
"As a clinician, I want episode dates and check-ins to automatically shift when a patient pauses so that I don’t have to manually adjust schedules."
Description

Automatically extend the episode end date and shift all dependent timelines (care-plan milestones, check-ins, nudges, and telehealth windows) by the exact pause duration. Support multiple pause segments, maintaining cumulative offsets and ensuring no overlap with clinic closure days or provider unavailability. Recompute recurring schedules, adjust reminders, and update patient/clinician calendars in real time. Display paused periods as gaps in progress charts and analytics while preserving historical accuracy. Handle time zone nuances and rounding rules (to day/hour, as configured). Outcome: accurate schedules and timelines without manual recalculation by staff.

Acceptance Criteria
Single Pause: Extend Episode and Shift Dependent Timelines
Given an active episode with future milestones, check-ins, nudges, and telehealth windows And rounding is configured to day or hour When a pause is applied from T_start to T_end Then the episode end date/time is extended by exactly the configured-rounded pause duration And every not-yet-started milestone, check-in, nudge, and telehealth window is shifted forward by the same offset, preserving each original duration and relative order And items with start times before T_start remain unchanged
Multiple Pauses: Cumulative Offset with Merge
Given an episode with one or more existing pause segments When an additional pause segment is added Then the total offset applied equals the sum of configured-rounded durations of all non-overlapping pauses And overlapping or contiguous pauses are merged into a single segment before duration calculation And recalculation is idempotent so items are not double-shifted by successive saves of the same pauses
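The merge-then-sum rule above is what makes recalculation idempotent: overlapping or contiguous segments collapse to one before durations are added, so re-saving the same pauses never double-shifts the schedule. A sketch over `(start, end)` pairs in any comparable unit (rounding is applied upstream):

```python
def merged_pause_offset(segments):
    """Merge overlapping/contiguous pause segments, then sum durations,
    per the cumulative-offset criterion above."""
    if not segments:
        return 0, []
    ordered = sorted(segments)
    merged = [list(ordered[0])]
    for start, end in ordered[1:]:
        if start <= merged[-1][1]:            # overlap or contiguous
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    total = sum(end - start for start, end in merged)
    return total, [tuple(m) for m in merged]
```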
Avoid Closures and Unavailability on Shift
Given clinic closure days and provider unavailability blocks are configured When items are shifted due to a pause Then no resulting item starts or ends within a closure day or unavailability block And items landing on blocked periods move to the next available slot respecting provider working hours, existing appointments, and minimum buffers And recurring items maintain their original cadence (e.g., weekly, daily) relative to the new anchor date
Real-Time Calendar and Reminder Rescheduling
Given a pause or resume is saved When the schedule is recalculated Then patient and clinician in-app calendars reflect the updated schedule within 10 seconds And all not-yet-sent reminders whose trigger times changed are rescheduled to the new times And reminders that would have fired during a paused period are canceled and replaced with the next valid reminder time And no duplicate reminders are sent for the same event
Charts Show Paused Gaps; Metrics Exclude Paused Time
Given one or more pause segments within the episode When viewing progress charts and analytics Then paused periods are rendered as visually distinct gaps labeled with pause start and end And adherence/engagement metrics exclude paused days or hours from denominators while preserving recorded events unchanged And historical values for dates prior to the pause remain numerically identical after recalculation
Time Zone and DST-Safe Duration and Display
Given the patient and clinic may be in different time zones and a pause may span a DST change When calculating pause duration and shifting schedules Then the offset is based on absolute elapsed time between pause start and end And rounding rules are applied per configuration (to hour or to day) including specified tie-breakers And all dates/times display in the viewer's local time without off-by-one-day errors
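Computing the offset from absolute elapsed time, as required above, is simplest in UTC, where DST transitions in the patient's or clinic's zone cannot skew the duration. A sketch with an assumed round-half-up tie-breaker (the spec leaves the tie-breaker to configuration):

```python
from datetime import datetime, timedelta, timezone

def pause_offset(start_utc, end_utc, round_to="hour"):
    """Absolute elapsed time between pause start and end, rounded to the
    configured unit; UTC arithmetic makes the result DST-safe."""
    elapsed = end_utc - start_utc
    unit = timedelta(hours=1) if round_to == "hour" else timedelta(days=1)
    units = (elapsed + unit / 2) // unit   # round half up (assumed tie-breaker)
    return units * unit
```

Display-side conversion to the viewer's local zone then happens only at render time, avoiding off-by-one-day errors.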
Resume Preserves Entitlements and Recurring Cadence
Given an episode was paused with remaining entitlements and recurring schedules When the episode is resumed Then the counts of remaining entitlements (visits, telehealth check-ins) are unchanged And the next occurrences are scheduled from the resume time using the shifted timeline and original cadence And the refreshed schedule contains no duplicate or skipped occurrences
Entitlement Freeze & Carryover
"As an admin, I want entitlements to freeze during a pause so that patients keep what they’ve paid for and limits remain accurate."
Description

Snapshot and freeze all active entitlements at the moment a pause begins (e.g., in-person visits, telehealth check-ins, messaging quotas). Prevent consumption during the paused period while maintaining access to non-consumptive content as configured. On resume, restore exact remaining quantities and extend any expirations by the pause duration. Enforce entitlements at the API level with idempotent operations to avoid double-counting. Surface real-time entitlement state in patient and staff views, and persist changes in a robust data model resilient to concurrent updates. Outcome: fair, consistent treatment of purchased benefits and accurate limits.

Acceptance Criteria
Pause Start: Entitlement Snapshot Freeze
Given an active episode with entitlements (remaining quantities and expirations) When a pause is initiated (via API or UI) Then an immutable snapshot of all entitlements is stored with exact remaining quantities, original expirations, and pause_start in UTC And the episode state becomes "paused" And the snapshot is retrievable via API And no entitlement balance changes at pause time.
Paused State: Prevent Entitlement Consumption
Given an episode is paused When any consumptive action is requested (visit booking, telehealth check-in, counted message) with timestamp >= pause_start Then the API rejects the request with 409 ENTITLEMENT_PAUSED and no decrement occurs And a blocked_consumption audit event is recorded And non-consumptive content remains accessible per configuration.
Resume: Restore Remaining Quantities and Extend Expirations
Given a paused episode with snapshot S and pause period [pause_start, resume_time] When resume is executed Then remaining quantities equal S.remaining for each entitlement And all entitlement expirations and the episode end date are extended by (resume_time - pause_start) to the second And consumptive actions are again permitted and decrement balances normally And a resume_applied audit event is recorded.
API Idempotency: Safe Re-tries and Duplicate Calls
Given the client supplies an Idempotency-Key for pause/resume When the same request is retried Then the response status and body match the first call and no additional state changes occur And duplicate consumption requests with the same operation_id decrement at most once And duplicate requests without idempotency keys return 409 DUPLICATE_OPERATION with no state change.
Concurrency: Simultaneous Pause/Resume and Consumption Races
Given multiple pause/resume requests or consumption requests arrive concurrently for the same episode When processed Then at most one state transition (pause or resume) is committed and others return 409 CONFLICT And the event log contains exactly one corresponding event And any consumption with effective timestamp >= pause_start is rejected while paused; < pause_start is honored And the stored version increments only for the committed transition.
UI Parity: Real-time Patient and Staff Views
Given an episode changes state (pause or resume) When observed in patient and staff apps Then both views reflect the change within ≤5 seconds And patient-facing consumptive actions are disabled during pause with a visible Paused badge And staff dashboard shows snapshot S, pause duration, and new projected end date And displayed balances/expirations exactly match API values and include a last_updated timestamp ≤5 seconds old.
Audit & Persistence: Event Log and Recovery
Given pause/resume/consumption events occur When the service restarts or a failure happens mid-operation Then recovered entitlement state equals the state derived by replaying the event log (no loss or duplication) And events are append-only with unique IDs and increasing sequence numbers And the API exposes an ordered timeline including snapshots, pauses, resumes, and blocked consumptions And recomputation from history matches live balances exactly.
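The recovery guarantee above — replayed state must equal live state — implies balances are a pure function of the ordered event log. A sketch with illustrative event shapes, showing how blocked consumptions during a pause leave balances untouched:

```python
def replay_balances(events):
    """Derive entitlement balances by replaying the append-only event
    log (events assumed ordered by sequence number)."""
    balances, paused = {}, False
    for ev in events:
        kind = ev["type"]
        if kind == "snapshot":
            balances = dict(ev["balances"])   # frozen state at pause/enroll
        elif kind == "pause":
            paused = True
        elif kind == "resume":
            paused = False
        elif kind == "consume":
            if paused:
                continue                      # blocked consumption: no decrement
            balances[ev["entitlement"]] -= ev["qty"]
    return balances
```

Running this over the persisted log after a restart and comparing to live balances is one concrete form of the "recomputation from history matches live balances exactly" check.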
Billing Proration & Stripe Integration
"As a clinic owner, I want fees to be prorated automatically through Stripe when a plan is paused or resumed so that billing stays fair and hands-off."
Description

Implement a billing engine that calculates and applies prorated charges, credits, or refunds when episodes are paused/resumed. Support Stripe subscriptions, prepaid packages, and add-ons with proper tax, discounts, and multi-currency handling. Generate credit notes, apply credits to next invoices, or issue immediate refunds per clinic policy. Provide a pre-commit billing preview in the UI, then execute via Stripe APIs with webhook-driven reconciliation, retries/backoff, and idempotency keys. Record ledger entries and expose finance-safe exports for accounting. Handle mid-cycle changes, trial periods, and promotional codes. Outcome: correct, automated financial adjustments with minimal staff effort.

Acceptance Criteria
Pre-commit Billing Preview for Pause/Resume
Given a patient episode with active billing (subscription or prepaid package) and optional add-ons and a clinic policy for refund vs credit When a staff user configures a pause or resume with an effective date/time and requests a billing preview Then the preview shows itemized proration amounts for subscription and add-ons, taxes, discounts, and currency rounding And the preview displays adjusted end date/term extension and expected next invoice date And the preview total equals the amount returned by Stripe’s upcoming invoice/proration preview for the same inputs within 1 unit of minor currency And the Commit action is disabled until the preview successfully loads
Mid-Cycle Pause: Subscription Proration and Term Extension
Given an active Stripe subscription mid-cycle with applicable tax and a percentage discount in the customer’s currency When the episode is paused effective immediately Then Stripe receives an idempotent request preventing duplicate prorations on retries And no further usage/add-on charges accrue during the paused period And a prorated credit for the unused portion of the current billing period is created per clinic policy (negative invoice item for credit-on-next-invoice or immediate refund) And taxes prorate proportionally and discounts remain applied And the subscription’s current period end is extended by the paused duration And all amounts are rounded per the customer’s currency rules
Resume Episode: Entitlements and Billing Continuation
Given an episode previously paused with remaining entitlements (visits and telehealth check-ins) and an associated subscription with a promotional code or trial
When the episode is resumed
Then entitlements are restored exactly as of the pause moment and expiration is extended by the paused duration
And the subscription reactivates at the prior price, discounts, and tax rates without charging for the paused period
And the next invoice date is shifted forward by the paused duration
And any remaining trial days are preserved and no additional promotional discount is applied beyond the original terms
Policy-Driven Refund vs Credit with Webhook Reconciliation
Given clinic policy indicates immediate refund or credit-on-next-invoice for proration outcomes
When a pause creates a positive balance owed to the patient
Then the system either issues an immediate Stripe refund to the original payment method or creates a Stripe credit note and customer balance to be applied to the next invoice
And a customer-facing receipt/credit note document is generated and accessible in the patient portal
And webhook events (e.g., invoice.updated, credit_note.created, charge.refunded) are processed idempotently to update internal ledger entries to Settled, with exponential retry/backoff up to 72 hours on failures
And duplicate webhook deliveries do not create duplicate ledger entries or duplicate refunds
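Idempotent webhook handling of the kind described above can be sketched roughly as follows. The in-memory set stands in for a persistent processed-events store, and the event shapes are simplified stand-ins for Stripe's payloads.

```python
# Process webhook events exactly once, keyed by the event id, so duplicate
# deliveries never settle the same ledger entry (or refund) twice.
processed: set[str] = set()      # stands in for a persistent store
ledger: dict[str, str] = {}      # ledger entry id -> status

def handle_webhook(event: dict) -> bool:
    """Return True if the event was applied, False if it was a duplicate."""
    event_id = event["id"]
    if event_id in processed:    # duplicate delivery: no-op
        return False
    processed.add(event_id)
    if event["type"] in ("charge.refunded", "credit_note.created",
                         "invoice.updated"):
        ledger[event["data"]["ledger_entry"]] = "Settled"
    return True
```

In production the membership check and insert would be a single atomic operation (e.g., a unique-key insert), so concurrent deliveries of the same event cannot both pass the guard.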
Prepaid Packages and Add-ons Proration
Given a prepaid package with N total sessions, S sessions remaining, and one or more add-ons billed upfront
When the episode is paused
Then the package expiration date is extended by the paused duration
And the refundable/creditable value equals (S/N) of the package net price (after discounts) with taxes prorated proportionally per jurisdiction
And add-ons with remaining unused quantity are prorated using the same methodology
And if policy selects immediate refund, a Stripe refund is issued for the calculated amount; if policy selects credit, a Stripe credit note is created and applied to the next invoice
Ledger and Finance-Safe Export Integrity
Given a proration/refund/credit operation completes via Stripe and webhooks
When reconciliation finishes
Then the internal ledger records balanced debit/credit entries with fields: patient ID, clinic ID, currency, gross, tax, discount, net, exchange rate source/time (if FX), Stripe object IDs (invoice, credit note, charge, refund), policy, initiator, timestamps
And the finance export (CSV) for a date range can be generated and matches Stripe balance transaction totals within 1 unit of minor currency per currency
And any reconciliation mismatches raise an alert and the export is flagged as incomplete until resolved
Notifications & Reminders for Pause Lifecycle
"As a patient, I want clear notifications about my pause and upcoming resume so that I know what to expect with scheduling and billing."
Description

Deliver configurable notifications at key events: pause scheduled, pause started, upcoming resume reminder, resume completed, and any billing adjustments. Support in-app, email, and SMS (where permitted), with localized content and clinic branding. Include clear summaries of new end dates, shifted check-ins, and entitlement status. Allow patients to snooze or extend a scheduled resume within policy limits, with immediate recalculation of timelines and billing previews. Ensure reliable delivery with queuing, retries, and failure alerts. Outcome: transparent communication that reduces uncertainty and support tickets.

Acceptance Criteria
Pause Scheduled: Multichannel Notification with Policy and Branding
Given a clinic user schedules a pause for a patient with defined start and resume dates and the patient has saved communication preferences
When the pause is saved
Then the system queues in-app, email, and (if consented and permitted) SMS notifications within 60 seconds
And the content is localized to the patient’s preferred locale and timezone and includes clinic name/logo
And the message includes pause start, scheduled resume date, recalculated program end date, and a summary of entitlements placed on hold
And a Manage Pause link is present in all channels (deep link in-app)
And if SMS consent is missing or region prohibits SMS, only permitted channels are sent and this is logged without error to the patient
And an audit log entry records notification IDs, channels, and delivery intent
Pause Start: Real-time Confirmation and Timeline Recalculation
Given the scheduled pause start time is reached or a staff member activates the pause immediately
When the pause state changes to Active
Then the system extends the program end date by the pause duration and freezes entitlements
And future check-ins/visits are shifted forward by the pause duration while preserving original weekday/time windows where possible
And an in-app banner and email (and SMS if permitted) confirm the active pause within 60 seconds
And the confirmation includes the updated end date, first shifted check-in/visit date, and entitlements on hold
And the patient and clinic dashboards reflect status = Paused within 30 seconds
And all actions are captured in an audit log with before/after values
Upcoming Resume Reminder with Patient Snooze/Extend Within Policy
Given a scheduled resume date exists and reminder lead time is configured (default 48 hours)
When the reminder window starts
Then the patient receives in-app, email, and (if permitted) SMS reminders, localized and branded
And the reminder offers Snooze and Extend options constrained by clinic policy limits (e.g., max additional days, frequency)
And if the patient selects Snooze/Extend within limits
Then the system immediately recalculates the resume date and program end date and shifts future check-ins accordingly
And a real-time billing preview shows any proration impact before confirmation
And upon confirmation, a success notification is sent and the audit log records the change
And if the request exceeds policy limits, a clear validation error explains the remaining options and no changes are applied
Resume Completed: Entitlements Restored and Summary Notice
Given a pause has an effective resume date
When the resume time is reached or staff resumes early
Then entitlements unfreeze and remaining counts are restored
And future check-ins/visits are re-activated with dates/times visible to the patient
And in-app, email, and (if permitted) SMS notifications are sent within 60 seconds summarizing next check-in date/time and updated program end date
And the patient dashboard status changes to Active within 30 seconds
And an audit log records the resume event and final schedule
Billing Adjustments and Stripe Proration Notices
Given a pause, snooze, or extend action that affects billing
When the patient or staff initiates the change
Then a billing preview displays proration/credit/debit amounts computed via Stripe before confirmation
And changes are only applied after explicit confirmation
And upon Stripe charge/refund/credit success (via webhook), the patient receives a notification summarizing amount, reason, and covered period; the clinic receives a copy
And invoice/credit note IDs are stored and linked to the episode
And idempotency keys prevent duplicate charges on retries
And if Stripe is unavailable, the action is queued, the patient is informed that billing is pending, and staff receive an alert; the system retries with exponential backoff until success or manual intervention
Delivery Reliability, Queueing, Retries, and Failure Alerts
Given any pause-lifecycle notification is to be sent
When the notification is published
Then it is enqueued with a unique idempotency key per event/channel and persisted for retry
And the system retries failed deliveries with exponential backoff up to 5 attempts per channel
And delivery status is tracked per channel (Queued, Sent, Delivered, Failed) and visible to staff
And hard bounces/unsubscribes are honored for email; DNC/opt-out is honored for SMS
And duplicate deliveries are prevented within a 10-minute deduplication window per idempotency key
And 99% of notifications are attempted within 2 minutes of trigger
And failures after max retries create a staff alert with error details and patient/context links
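The retry behavior above (exponential backoff, capped attempts) might look like this sketch. The injectable `sleep` parameter is an assumption added so the logic can be exercised without real waits.

```python
import time

def send_with_retry(send, payload, max_attempts=5, base_delay=1.0,
                    sleep=time.sleep):
    """Attempt a delivery up to max_attempts times with exponential
    backoff (base_delay, 2x, 4x, ...). True on success, False when the
    attempts are exhausted (at which point staff would be alerted)."""
    for attempt in range(max_attempts):
        try:
            send(payload)
            return True
        except Exception:
            if attempt < max_attempts - 1:
                sleep(base_delay * 2 ** attempt)  # back off before retrying
    return False
```

A queue worker would wrap each channel delivery in this loop, persisting the attempt count so retries survive a worker restart.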
Localization and Clinic Branding Consistency Across Channels
Given notification templates exist for supported locales and clinics have branding assets
When a notification is rendered
Then locale selection follows: patient preference > clinic default > app default
And all dates/times are formatted in the patient’s timezone and locale format
And clinic logo and brand colors apply to email/in-app; SMS uses a branded sender name where permitted
And if a translation or branding asset is missing, the system falls back to app defaults and logs a warning
And templates validate required placeholders at build time; missing placeholders fail CI and block deploy
And spot-check test messages can be generated per locale/clinic from the admin UI
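The locale precedence chain can be sketched as below; the function name and the `available` set of rendered templates are illustrative.

```python
def pick_locale(patient_pref, clinic_default, app_default, available):
    """Locale precedence: patient preference > clinic default > app
    default, skipping any locale that has no rendered template."""
    for locale in (patient_pref, clinic_default, app_default):
        if locale and locale in available:
            return locale
    return app_default  # final fallback; logged as a warning in practice
```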
Audit Logging, Policy Controls, and Safeguards
"As a compliance manager, I want detailed audit logs and configurable pause policies so that we meet clinic rules and regulatory requirements."
Description

Maintain an immutable audit log of all pause/resume actions with timestamps, actor (patient/staff/system), reason codes, previous and new dates, entitlement snapshots, and financial transactions. Provide clinic-level policy settings for maximum pause length, allowed frequency, backdating windows, and appointment-conflict behavior. Enforce guardrails preventing overlaps, backdating beyond policy, or pausing during active sessions; allow admin overrides with mandatory justification. Offer exportable reports for compliance and finance, and surface event history in patient and episode timelines. Outcome: strong governance, traceability, and risk mitigation.

Acceptance Criteria

CPT Bridge

Turn episode activity into clean invoice lines. Automatically map visits, telehealth touchpoints, and remote‑monitoring time to CPT‑like codes and modifiers using clinic‑tunable rules (time thresholds, unit rounding, payer nuances). Export as CSV, FHIR Claim, or 837P‑lite to your EHR or billing service, with links to evidence snippets and a tamper‑evident audit trail—fewer rejections, faster payments.

Requirements

Rule-based CPT Mapping Engine
"As a clinic billing admin, I want episode activities to be automatically mapped to the correct CPT codes, modifiers, and units using configurable rules so that invoices are accurate and consistent across payers."
Description

Deterministically translates MoveMate episode activities (in-person visits, telehealth touchpoints, and remote‑monitoring time) into CPT-like codes, appropriate modifiers, units, place-of-service, and provider identifiers via a clinic‑tunable rules engine. Supports time thresholds (e.g., 8‑minute rule), unit rounding, code bundling/exclusions, and payer-specific overrides with effective dating and versioning. Produces line-level rationale (inputs, rule matched, calculation steps) and conflict resolution precedence. Integrates with patient episodes and clinician schedules, runs nightly and on-demand, and scales to multi-clinic deployments. Exposes APIs for generation, preview, and re-generation after rule updates.
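The 8-minute rule referenced above is commonly stated as: no billable units below 8 minutes of timed treatment, then one unit per 15-minute block. A minimal sketch follows; the hard-coded thresholds would be replaced by the clinic-tunable rule parameters the engine supports.

```python
def timed_units_8_minute_rule(minutes: int) -> int:
    """Billable units for time-based codes under the common 8-minute
    rule: 0 units below 8 minutes, then 8-22 min = 1 unit,
    23-37 = 2, 38-52 = 3, and so on per additional 15 minutes."""
    if minutes < 8:
        return 0
    return 1 + (minutes - 8) // 15
```

Keeping the rule as a pure function of its inputs makes the line-level rationale easy to produce: the engine can log the matched rule, the input minutes, and this calculation verbatim.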

Acceptance Criteria
Time Aggregation & Attribution Service
"As a clinician, I want telehealth and remote monitoring time to be automatically aggregated and attributed per code rules so that I can bill compliant units without manual timekeeping."
Description

Aggregates and normalizes time from MoveMate signals (telehealth calls, asynchronous reviews, patient messaging, exercise supervision, device-free CV sessions) and attributes it to billable codes per rule definitions. Handles overlapping sessions, inactivity thresholds, clinician multitasking limits, daily vs. monthly accumulators (e.g., 99457/99458), timezone normalization, and patient-level episode boundaries. Applies clinic-configured rounding and minimums, maintains immutable time ledgers, and surfaces per-patient accrual progress and remaining billable capacity.
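Overlap handling can be sketched as a standard interval merge, so concurrent telehealth and asynchronous-review time is counted once rather than twice; timestamps are simplified here to integer minutes.

```python
def total_attributable_minutes(intervals):
    """Merge overlapping (start, end) spans, then sum the merged
    durations, so overlapping sessions are never double-counted."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps (or touches) the previous span: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return sum(end - start for start, end in merged)
```

The real service would run this per patient per accumulator window (daily or monthly), after normalizing all timestamps to a single timezone.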

Acceptance Criteria
Payer Profile Rules Library
"As a billing specialist, I want payer-specific rule profiles I can clone and tune so that claims reflect each payer’s nuances and avoid preventable denials."
Description

Maintains a library of payer-specific rule profiles capturing nuances such as covered codes, modifier requirements (e.g., 95/GT, GP), place-of-service mapping (02/10/11), diagnosis pointer constraints, unit maximums per day/month, rounding behavior, documentation requirements, and bundling/CCI edits. Allows cloning of base templates, effective-dated overrides, environment scoping (clinic, location), and patient‑to‑payer assignment. Includes change history and impact analysis on affected patients/claims.

Acceptance Criteria
Evidence Links & Tamper‑Evident Audit Trail
"As a compliance officer, I want each claim line to link to verifiable evidence and an immutable audit trail so that we can defend claims and pass audits."
Description

Attaches verifiable evidence to each generated invoice line, including references to activity logs, rep counts, and video/frame timestamps, along with the applied rule and calculation rationale. Generates cryptographic hashes for evidence packages, records write-once audit entries for all transformations, and supports digital signatures to ensure tamper‑evidence. Provides scoped access controls, retention policies, and export of an evidence bundle for payer audits or compliance reviews.
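Tamper evidence via cryptographic hashing can be sketched as a hash chain, where each audit entry's hash covers its payload plus the previous entry's hash; the field names here are illustrative, and a production system would add digital signatures on top.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_audit_entry(chain: list, payload: dict) -> dict:
    """Append a write-once audit entry whose hash binds it to the
    entry before it, so any later edit breaks the chain."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"payload": payload, "prev": prev}, sort_keys=True)
    entry = {"payload": payload, "prev": prev,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every hash in order; False means tampering."""
    prev = GENESIS
    for entry in chain:
        body = json.dumps({"payload": entry["payload"], "prev": prev},
                          sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```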

Acceptance Criteria
Multi‑format Claim Export & Delivery
"As a revenue cycle manager, I want to export claims in CSV, FHIR Claim, or 837P‑lite and deliver them securely so that we can integrate with different EHRs and billing services."
Description

Exports billable lines as configurable CSV, FHIR Claim (R4/R4B) with modifiers and extensions, and 837P‑lite X12 segments. Performs schema validation and code set normalization, maps provider/patient identifiers, and includes payer-specific header values. Supports batching, scheduling, idempotent job handling, resumable transfers, and delivery via secure download, SFTP, or HTTPS webhook. Generates delivery receipts and tracking IDs that link back to source episodes and evidence.

Acceptance Criteria
Pre‑submission Scrubber & QA Dashboard
"As a biller, I want a pre-submission scrubber that flags issues and suggests fixes so that we reduce rejections and speed up payments."
Description

Validates generated lines before export, checking required fields, code‑modifier compatibility, payer-specific edits, NCCI bundling, unit and frequency limits, place‑of‑service consistency, and documentation sufficiency. Displays errors and warnings with actionable fixes (e.g., add modifier, adjust units, attach note), supports bulk remediation, and re-runs validation in real time. Provides metrics on rejection risk and estimated financial impact to prioritize corrections.

Acceptance Criteria
Rules Editor & Simulation
"As a clinic admin, I want a rules editor with simulation and versioning so that I can safely update billing logic and understand the impact before deploying."
Description

Offers a secure UI to create, edit, and version rules using a form-driven editor with advanced DSL mode. Includes draft/publish workflows, approvals, and change logs. Provides simulation against historical episodes to preview resulting codes, units, modifiers, and revenue impact before deployment, with side-by-side diffs and rollback. Enforces role-based access control and supports multi-clinic scoping.

Acceptance Criteria

Budget Predictor

Forecast costs and margins before you enroll. Compare a la carte vs bundle, preview payment schedules, and simulate adherence scenarios to see when the episode breaks even. Patient‑facing view clarifies total out‑of‑pocket and due dates; clinic view estimates revenue, fees, and risk flags with suggestions (e.g., add a telehealth check‑in) to keep outcomes and budgets on track.

Requirements

Configurable Pricing Engine
"As a clinic owner, I want to configure our rates, bundles, and fees and compare a la carte vs bundle so that I can accurately forecast revenue and margins before enrolling a patient."
Description

Implements a flexible pricing and cost model that supports a la carte sessions, bundled packages, telehealth check-ins, discounts, and clinic- or payer-specific fees. Allows clinics to define defaults (per-session rates, bundle prices, platform/payment fees, optional taxes) and override at the patient-episode level. Ingests planned sessions from the plan of care and scheduled telehealth touchpoints to compute forecasted revenue, costs, and margins. Provides side-by-side comparison of a la carte vs bundle with real-time recalculation as inputs change. Integrates with MoveMate’s plan and scheduling data so forecasts stay current when visit counts, cadence, or telehealth add-ons change.
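The side-by-side a la carte vs bundle comparison might be sketched as follows; the flat fee percentage is a simplifying assumption standing in for the configurable platform/payment fee model.

```python
def compare_pricing(sessions: int, per_session: float,
                    bundle_price: float, fee_pct: float) -> dict:
    """Forecast patient cost and clinic net (after fees) for a la carte
    vs bundle pricing, given a planned session count."""
    a_la_carte = sessions * per_session
    fee_factor = 1 - fee_pct / 100
    return {
        "a_la_carte": {"patient_cost": a_la_carte,
                       "clinic_net": a_la_carte * fee_factor},
        "bundle": {"patient_cost": bundle_price,
                   "clinic_net": bundle_price * fee_factor},
    }
```

Recomputing this whenever the plan of care changes (visit counts, cadence, telehealth add-ons) is what keeps the comparison current in real time.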

Acceptance Criteria
Dual-View Budgeting (Patient & Clinic)
"As a patient, I want a clear, simple breakdown of my total cost and when I’ll be charged so that I can budget and decide between package or pay-as-you-go."
Description

Delivers role-based views that present the same forecast using audience-appropriate language and metrics. The patient view shows total out-of-pocket, per-installment amounts, and due dates with clear assumptions and disclaimers. The clinic view shows expected revenue, platform/payment fees, gross margin, and cash flow timing. Supports secure sharing via link or in-app message, with permissions enforcing that patients only see their costs and schedule while clinics see full financial breakdowns. Ensures accessibility (readable amounts, color contrast) and mobile responsiveness to align with MoveMate’s lightweight app experience.

Acceptance Criteria
Adherence Scenario Simulator
"As a clinician, I want to simulate different adherence levels and add telehealth check-ins so that I can see how outcomes and economics change before finalizing the care plan."
Description

Enables what-if modeling of adherence and attendance variables (completion rate, no-shows, early discharge, additional telehealth check-ins) to visualize impact on delivered sessions, revenue, and margins. Provides presets based on de-identified historical adherence patterns from MoveMate cohorts and allows manual adjustments via sliders. Updates forecasts instantly and annotates key drivers of change. Saves named scenarios for later comparison and links them to the episode record for auditability.

Acceptance Criteria
Break-even & Margin Calculator
"As a clinic owner, I want to know when an episode breaks even and how sensitive it is to adherence and fees so that I can adjust pricing or care cadence proactively."
Description

Calculates break-even by session and by calendar date, showing where cumulative revenue surpasses cumulative costs under selected scenarios. Displays margin bands (best/likely/worst) and sensitivity to key inputs (price, adherence, fees). Highlights if break-even is unlikely within the planned episode and surfaces the minimum changes required to achieve it. Embeds visual indicators on the forecast timeline and flags when changes to plan or pricing push the episode below target margins.
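Break-even by session reduces to a ceiling division once the per-session margin is known. This sketch assumes a flat per-session price and cost; the real calculator would substitute scenario-dependent values from the adherence simulator.

```python
import math

def break_even_session(per_session_revenue: float, fixed_cost: float,
                       per_session_cost: float):
    """Smallest session count at which cumulative revenue covers
    cumulative cost, or None if each session loses money (the
    'break-even unlikely' flag in the UI)."""
    margin = per_session_revenue - per_session_cost
    if margin <= 0:
        return None
    return max(1, math.ceil(fixed_cost / margin))
```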

Acceptance Criteria
Payment Schedule Generator & Notifications
"As a patient, I want an upfront payment schedule with reminders tied to my visit plan so that I never miss a payment and can plan my budget."
Description

Generates a patient-facing payment schedule aligned to planned visits and bundles, including deposits, installments, and due dates. Supports rescheduling logic that automatically shifts upcoming due dates when the visit plan changes and records a change log. Offers export to PDF/email and in-app delivery, plus opt-in reminders via push/SMS/email before each due date. Integrates with MoveMate notifications and calendar to keep patients informed and reduce missed payments.
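Installment generation can be sketched as below. The fixed 30-day cadence and minor-unit amounts are simplifying assumptions; rounding remainders are folded into the final installment so the schedule always sums to the total.

```python
from datetime import date, timedelta

def payment_schedule(total: int, deposit: int, installments: int,
                     start: date, interval_days: int = 30):
    """Deposit up front, then equal installments at a fixed cadence.
    Amounts are in minor currency units; the rounding remainder goes
    on the last installment."""
    per = (total - deposit) // installments
    remainder = (total - deposit) - per * installments
    schedule = [(start, deposit)]
    for i in range(installments):
        amount = per + (remainder if i == installments - 1 else 0)
        due = start + timedelta(days=interval_days * (i + 1))
        schedule.append((due, amount))
    return schedule
```

When the visit plan shifts, the same function can be re-run from the new anchor date, with the diff against the old schedule recorded in the change log.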

Acceptance Criteria
Risk Flags & Smart Recommendations
"As a clinician, I want proactive risk flags with specific, quantified suggestions so that I can keep both patient outcomes and the clinic’s budget on track."
Description

Detects financial risk conditions (e.g., negative margin at likely adherence, delayed break-even, high no-show probability) and proposes corrective actions such as switching to a bundle, adding a mid-episode telehealth check-in, adjusting cadence, or offering a payment plan. Quantifies expected impact on outcomes and margin for each suggestion and allows one-click application to update the forecast. Captures clinician decisions and outcomes to improve future recommendations using de-identified results.

Acceptance Criteria

Unit Guard

Keep episodes within authorized limits. Sync approved units or visit counts, show what’s remaining in real time, and warn when a scheduled visit would exceed coverage. Offer compliant alternatives (telehealth vs in‑person, split visit) and auto‑attach justifications to the episode ledger and export. Clinics stay audit‑ready; coordinators avoid last‑minute scrambles and denials.

Requirements

Authorization & Visit Limit Sync
"As a clinic coordinator, I want approved units and visit limits to auto-sync into each episode so that scheduling and documentation always reflect the latest payer authorizations without manual reconciliation."
Description

Ingest and synchronize payer-approved units and visit caps for each episode from external sources (payer portals, EHR, CSV upload) and manual entry. Map authorizations to CPT/HCPCS codes, modifiers, modality (in‑person vs telehealth), and date windows, supporting multiple concurrent authorizations per episode, renewals, and overlaps. Normalize into a unified coverage model that MoveMate can reference in real time. Handle incremental updates, revocations, and expirations with full change history. Provide validation (e.g., mismatched codes, expired dates), duplicate detection, and admin tools to resolve conflicts. Expose a lightweight API and webhooks so clinics can automate updates from their RCM/EHR. This foundation enables accurate remaining-unit calculations and compliant scheduling across the product.

Acceptance Criteria
Real-time Coverage Counter
"As a physical therapist, I want to see exactly how many units and visits remain for a patient in real time so that I can plan treatment without risking denials."
Description

Compute and display remaining authorized units/visits in real time across the episode dashboard, patient profile, and scheduling flow. Break down remaining amounts by code/category (e.g., CPT, modality) and by authorization window. Account for pending, scheduled, completed, and canceled visits with correct unit consumption rules and rounding. Update instantly on schedule changes and visit documentation submission. Provide role-based, at-a-glance indicators (e.g., green/amber/red) and tooltips explaining calculation details for audit clarity. Ensure concurrency safety to avoid double-counting during simultaneous edits and support offline-to-online reconciliation on mobile.
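The remaining-units calculation described above can be sketched as follows; the `status` and `units` field names are illustrative.

```python
def remaining_units(authorized: int, visits: list) -> int:
    """Remaining authorized units: completed and scheduled visits
    consume units, canceled visits do not. Clamped at zero so the
    counter never displays a negative balance."""
    consumed = sum(v["units"] for v in visits
                   if v["status"] in ("completed", "scheduled"))
    return max(0, authorized - consumed)
```

Counting scheduled (not just completed) visits is what lets the red/amber/green indicator warn before an overage actually occurs.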

Acceptance Criteria
Scheduling Guardrails & Overage Warnings
"As a scheduler, I want the system to warn or block visits that would exceed coverage so that I don’t book non-compliant appointments and cause claim denials."
Description

Intercept scheduling actions to prevent or warn when a planned visit would exceed authorized limits or violate frequency rules. Provide clear, inline warnings with the deficit, affected codes, and policy window. Allow configurable behaviors: hard block, soft warn with required acknowledgment, or require supervisor override with justification. Support partial scheduling (split visits), shorter durations, or code substitutions within policy. Integrate with the calendar, telehealth booking, and episode plan so clinicians see compliant options before confirming. Log all decisions and overrides to the episode ledger.
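The configurable guardrail behaviors can be sketched as a small decision function; the policy names are illustrative stand-ins for clinic configuration.

```python
def overage_action(remaining: int, requested: int, policy: str) -> str:
    """Scheduling guardrail: within limits, always allow. Over limit,
    'hard_block' refuses the booking, 'soft_warn' allows it with a
    required acknowledgment, 'override' demands supervisor
    justification. Every outcome would be written to the episode ledger."""
    if requested <= remaining:
        return "allow"
    return {"hard_block": "block",
            "soft_warn": "warn_and_acknowledge",
            "override": "require_supervisor_justification"}[policy]
```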

Acceptance Criteria
Compliant Alternatives Recommendation
"As a clinic coordinator, I want the system to suggest compliant alternatives when a visit would exceed limits so that I can quickly rebook without back-and-forth or policy guesswork."
Description

When coverage is insufficient, generate compliant alternatives such as switching to telehealth (if allowed), splitting the visit across dates, adjusting duration/units, or shifting codes within authorization. Use the rules engine and current utilization to present 2–3 best options with reasons, unit impact, and earliest compliant dates. Respect patient preferences, therapist availability, and clinic hours. Allow one-click apply to reschedule or modify the visit plan. Capture the selected alternative and rationale for audit and care continuity.

Acceptance Criteria
Audit Trail & Auto-Justifications
"As a compliance officer, I want all coverage decisions and overrides to be automatically documented with justifications so that we remain audit-ready without extra manual work."
Description

Automatically generate structured ledger entries for all coverage-related events: syncs, schedule changes, overage warnings, overrides, and alternative selections. Attach payer-required reason codes, internal notes, and supporting artifacts (e.g., medical necessity statements). Ensure entries are time-stamped, user-attributed, immutable, and versioned with before/after values. Surface ledger snippets contextually (e.g., next to a scheduled visit) and expose full history for audits. Ensure data retention aligns with compliance policies and is included in exports.

Acceptance Criteria
Compliance Exports & Sharing
"As a billing administrator, I want one-click exports of coverage usage and justifications so that I can respond to payer audits quickly and consistently."
Description

Provide export of episode coverage utilization, ledger, and supporting justifications to CSV and PDF with clinic branding and time-stamped summaries. Offer patient- and episode-scoped exports and bulk exports by date range or payer for audits. Include filters (e.g., only overrides, only telehealth) and redact PHI fields as configured. Support secure sharing via expiring links and a lightweight API endpoint for RCM/EHR ingestion. Ensure formatting aligns with payer audit expectations and includes calculation explanations.

Acceptance Criteria
Coverage Policy Rules Engine
"As an operations lead, I want a transparent rules engine for payer coverage policies so that our scheduling and documentation stay compliant even as rules change."
Description

Introduce a configurable rules engine to encode payer policies, including unit rounding, per-visit and per-week caps, authorization windows, telehealth allowances/modifiers, split-visit permissions, and code-specific exceptions. Support plan-level overrides, effective-dated versions, and test cases with a sandbox runner. Provide an admin UI for clinics to add/edit policies, import templates, and preview effects on a sample episode. Expose rule decisions with human-readable explanations to power warnings, counters, and recommendations consistently across the product.

Acceptance Criteria

Product Ideas

Innovative concepts that could enhance this product's value proposition.

SnapCode Onboarding

Patients scan a clinic QR or tap a short link to auto-load their program, verify with DOB, and start. Cuts setup to under 60 seconds and reduces intake errors.

Care Circle Permissions

Role-based access for PTs, PTAs, caregivers, and payers with fine-grained controls and audit logs. Share form flags without PHI leakage while enabling caregiver coaching.

Fix-My-Rep Coach

Real-time micro-cues overlay—arrows, timing pips, and brief haptics—trigger when form flags fire, plus a 5‑second replay showing the exact mistake. Cuts re-injury risk.

Offline Rep Vault

Capture reps, form flags, and video snippets fully offline with battery-aware CV; sync securely when connectivity returns. Perfect for rural routes and basement gyms.

Protocol A/B Lab

Build two exercise templates, randomize patients, and compare adherence and recovery metrics with clear winner calls. Standardize faster with evidence from your own clinic.

Payer Proof Pack

One-click, timestamped adherence and form-quality report with variance charts and exception notes, exportable as PDF or FHIR bundle. Speeds authorizations and clean closures.

Episode Bundle Billing

Offer per-episode pricing with built-in Stripe subscriptions and invoice exports mapped to CPT-like codes. Simplifies budgeting for small clinics and aligns payment to outcomes.
