Home health scheduling

CarePulse

Never miss a compliant visit

CarePulse is a lightweight, mobile-first SaaS that centralizes scheduling, documentation, and compliance reporting for operations managers and caregivers at small home-health agencies. It syncs live routes, auto-populates visit notes from short voice clips and optional IoT sensors, and generates one-click, audit-ready reports, cutting documentation time in half and keeping visits on time and compliant.



Product Details

Explore this AI-generated product idea in detail. Each aspect has been thoughtfully created to inspire your next venture.

Vision & Mission

Vision
To empower home health teams to deliver on-time, compliant, personalized care by eliminating scheduling friction and automating documentation.
Long Term Goal
Within 4 years, enable 1,000 home-health agencies and 10,000 caregivers to cut documentation time by 50%, reduce missed visits by 60%, and achieve 95% audit pass rates.
Impact
CarePulse reduces scheduling conflicts by 80% and missed visits by 60% for operations managers and caregivers at small home-health agencies, cutting documentation time by 50%, saving nurses two hours weekly, and improving audit pass rates within 90 days.

Problem & Solution

Problem Statement
Operations managers and caregivers at small home health agencies struggle with paper rosters, missed visits, inconsistent notes, and onerous compliance because EHRs and spreadsheets don’t auto-generate field reports or ingest voice clips and sensor evidence.
Solution Overview
CarePulse centralizes mobile schedules and automates documentation to prevent missed visits and simplify audits: a live schedule sync keeps caregivers on-route while short voice clips auto-populate visit notes into one-click, audit-ready reports.

Details & Audience

Description
CarePulse is a lightweight SaaS that centralizes scheduling, documentation, and compliance reporting for in-home healthcare teams. It serves operations managers and caregivers at small to medium home health agencies juggling multiple clients. CarePulse eliminates scheduling conflicts, cuts documentation time in half, and produces audit-ready reports to ensure on-time, compliant visits. A smart shift-checklist auto-populates visit notes from short voice clips and optional IoT sensors for one-click reporting.
Target Audience
Operations managers and caregivers (25–60) at small home-health agencies needing on-time, compliant visits; mobile-first adopters.
Inspiration
On a midnight weekend on-call, a visiting nurse arrived at the wrong house because the paper roster listed an outdated address. She spent two exhausted hours reconstructing visit notes and calling supervisors to prove compliance. That desperate night sparked a solution: a nimble mobile scheduler that syncs real-time routes, auto-captures voice and sensor evidence, and produces instant, audit-ready visit reports.

User Personas

Detailed profiles of the target users who would benefit most from this product.


Intake Integrator Isla

- 29–38; Intake Coordinator at 25–60 staff agency
- Phoenix-based; hybrid office/remote; monthly field ride-alongs
- 5–7 years home-health intake and authorizations experience
- Associate’s in Health Admin; fluent with EHRs and payer portals

Background

Started as a scheduler juggling paper packets and Excel trackers. After a costly missed authorization, she built checklists to prevent revenue leaks and now owns referral-to-first-visit handoff.

Needs & Pain Points

Needs

1. Auto-import referral data into schedule-ready records.
2. Instant eligibility/authorization status at intake.
3. First-visit slot suggestions matching skill, coverage, compliance.

Pain Points

1. Referral details scattered across portals and emails.
2. Duplicate data entry between systems.
3. Delays between acceptance and first visit.

Psychographics

- Obsessed with zero rework, zero leakage
- Thrives on turning chaos into checklists
- Values speed with auditable traceability
- Champions tools staff will actually use

Channels

1. LinkedIn - industry posts
2. NAHC Connect - forum
3. YouTube - workflow demos
4. Google Search - referral templates
5. CarePulse In-App - guides


After-Hours Anchor Andre

- 32–45; On-Call/Field Supervisor
- Works nights/weekends; remote with rapid dispatch
- Former caregiver; 8–10 years field operations
- Lives in Midwest suburb; agency size 40–100 clients

Background

Cut his teeth covering last-minute call-offs in winter storms. Moved into supervision after proving he could stabilize weekends without compliance drift.

Needs & Pain Points

Needs

1. Live availability with skill and distance filters.
2. One-tap reschedule with compliant notes.
3. Instant EVV/location verification on exceptions.

Pain Points

1. Manual call trees lose precious minutes.
2. No-shows discovered too late at night.
3. Incomplete exception notes trigger audits.

Psychographics

- Calm in chaos, decisive under time pressure
- Prioritizes client safety over convenience
- Demands tools that load instantly
- Hates phone trees and guesswork

Channels

1. CarePulse Mobile - alerts
2. WhatsApp - on-call
3. SMS - urgent texts
4. Phone Call - escalations
5. Google Maps - traffic


Billing Bridge Brianna

- 35–50; Revenue Cycle/Billing Lead
- Based in Texas; hybrid home/office
- 7–12 years home-health billing and EVV
- Works across Medicare Advantage and state Medicaid plans

Background

Started as a receptionist reconciling paper timesheets with payer rules. After denial spikes, she learned EVV and clearinghouse workflows, becoming the go-to for clean claims.

Needs & Pain Points

Needs

1. EVV-compliant, payer-specific export formats.
2. Auto-flagged missing notes or mismatched timestamps.
3. One-click audit packet assembly.

Pain Points

1. Payer portals reject subtle timestamp mismatches.
2. Missing signatures stall entire batches.
3. Reformatting data for each payer.

Psychographics

- Zero-tolerance for preventable denials
- Lives by payer-specific checklists
- Prefers proof over promises
- Values tidy, exportable data

Channels

1. Availity - payer portal
2. Waystar - clearinghouse
3. LinkedIn - revenue cycle
4. NAHC Connect - billing
5. CarePulse In-App - exports


Training Tuner Talia

- 28–42; Staff Educator/Trainer
- Northeast city; mix of classroom and ride-alongs
- 4–8 years coaching caregivers; former CNA
- Manages 10–30 new hires monthly

Background

Built a micro-learning program after seeing new aides overwhelmed by long manuals. Uses real note examples to reduce retraining loops.

Needs & Pain Points

Needs

1. Sample notes library tied to voice clips.
2. Real-time doc quality feedback for trainees.
3. Offline-friendly mobile walkthroughs.

Pain Points

1. Inconsistent note quality across teams.
2. Tech anxiety on low-end devices.
3. Time-consuming retraining loops.

Psychographics

- Teach once, scale forever
- Prefers practical over perfect theory
- Advocates for tech that feels friendly
- Measures learning by field outcomes

Channels

1. TalentLMS - courses
2. YouTube - micro-lessons
3. LinkedIn - educator network
4. CarePulse In-App - walkthroughs
5. WhatsApp - cohort chat


Family Touchpoint Felix

- 27–40; Client/Family Liaison
- Works from office; high call volume
- Communications degree; 3–6 years in patient relations
- Serves diverse, multi-lingual families

Background

After volunteering as a hospital greeter, he learned the power of timely, clear updates. Joined home health to keep families informed without breaching privacy.

Needs & Pain Points

Needs

1. Easy-to-share visit summaries, plain language.
2. Quick status checks without chart diving.
3. Secure, HIPAA-safe sharing options.

Pain Points

1. Phone tag during peak hours.
2. Notes unclear for non-clinicians.
3. Anxiety about accidental PHI exposure.

Psychographics

- Clarity calms, jargon confuses
- Protects privacy with zeal
- Measures success by fewer callbacks
- Empathic yet efficiency-minded

Channels

1. CarePulse Reports - PDFs
2. Phone Call - updates
3. Email - summaries
4. Facebook Groups - local community
5. Google Business - messages


Therapy Tracker Theo

- 33–48; PT or OT Lead
- Mountain region; heavy driving between homes
- 6–10 years outpatient and home-health mix
- Supervises 6–12 therapists/assistants

Background

Moved from clinic director to home-health lead to reduce no-show gaps. Built route blocks and standardized progress note phrases to speed charting.

Needs & Pain Points

Needs

1. Time-blocked, discipline-aware route suggestions.
2. Trends of vitals and goals in one view.
3. Quick checklists tied to plan of care.

Pain Points

1. Fragmented PT/OT schedules cause conflicts.
2. Sensor data buried in charts.
3. Duplicate documentation across visits.

Psychographics

- Outcome-driven, schedule-protective
- Loves clean trends over noisy data
- Prioritizes patient goals over paperwork
- Embraces tools that reduce drive-time

Channels

1. APTA Hub - forum
2. LinkedIn - therapy leadership
3. YouTube - documentation tips
4. Google Calendar - scheduling
5. CarePulse Mobile - updates

Product Features

Key capabilities that make this product valuable to its target users.

ETA Pulse

Continuously predicts arrival variance using live traffic, weather, historical dwell times (parking, elevators), and caregiver pace. Displays a simple risk badge (On‑Time, At‑Risk, Late) with minute-by-minute ETA deltas and preemptive leave‑now nudges. Helps coordinators prevent lateness before it starts and gives caregivers clear, actionable timing.

Requirements

Real-time Signal Ingestion
"As a coordinator, I want ETAs to reflect current traffic and weather so that schedules and interventions match real-world conditions."
Description

Continuously ingest and normalize live data feeds for traffic, weather, road closures, and transit disruptions, mapped to caregiver routes and visit geofences. Poll and stream updates at 1-minute intervals with rate-limit handling, caching, and graceful degradation when providers are unavailable. Resolve locations to the building/entrance level where possible to improve ETA precision. Provide a resilient pipeline with retries, circuit breakers, and observability (metrics, logs, alerts). Integrate with CarePulse’s routing and scheduling services to attach signal snapshots to each upcoming visit for downstream prediction.

Acceptance Criteria
Minute-Level Ingestion Cadence and Latency SLOs
Given configured providers for traffic, weather, road closures, and transit disruptions When the ingestion service is running under normal conditions Then polling occurs every 60 seconds ± 5 seconds per provider without drift across a 1-hour window And streaming subscriptions maintain end-to-end event lag ≤ 60 seconds at p99 and ≤ 30 seconds at p95 And normalized records are available to downstream consumers within 10 seconds at p95 and 20 seconds at p99 from provider receipt time And missed or delayed cycles are logged with reason codes and do not exceed 0.1% over a 24-hour period
Multi-Provider Rate-Limit Compliance and Backoff
Given a provider enforces request limits (QPS and daily quota) When requests are issued during peak load Then the ingestion service never exceeds published limits (0 provider 429s without prior quota forecast) And if a 429 with Retry-After is received, subsequent calls are delayed per Retry-After and succeed within the next polling window 95% of the time And if a 429/5xx without Retry-After is received, exponential backoff with full jitter is applied (initial 1s, factor 2, max 30s, max 5 retries) And total skipped polling cycles for that provider during backoff do not exceed 2 consecutive intervals And backoff, retries, and recoveries are recorded as metrics and structured logs with provider identifiers
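The backoff schedule above (initial 1 s, factor 2, cap 30 s, full jitter, max 5 retries) could be sketched as follows. This is a minimal illustration in Python; the function name is illustrative, not part of the spec.

```python
import random

def backoff_delays(initial=1.0, factor=2.0, cap=30.0, max_retries=5,
                   rng=random.random):
    """Yield full-jitter delays: uniform in [0, min(cap, initial * factor**n)]."""
    for attempt in range(max_retries):
        ceiling = min(cap, initial * factor ** attempt)
        yield rng() * ceiling

# With a stubbed rng of 1.0 the upper bounds are visible directly:
upper_bounds = list(backoff_delays(rng=lambda: 1.0))  # 1, 2, 4, 8, 16
```

Full jitter (a random delay up to the exponential ceiling) spreads retries from many clients so they do not re-synchronize against the same rate-limited provider.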
Canonical Normalization and Data Quality Validation
Given heterogeneous provider payloads (traffic, weather, closures, transit) When records are ingested Then each record is transformed into the canonical schema v1 with required fields populated (provider, signal_type, coordinates WGS84, timestamp ISO8601, confidence, source_id) And JSON Schema validation passes for ≥ 99.9% of records per day; failures are quarantined with validation error codes and do not block the pipeline And units are normalized (distance in meters, speed in m/s, precipitation in mm/hr) with conversion accuracy verified via unit tests And location precision metadata (accuracy_m, entrance_level boolean) is set when available and defaulted when not And duplicate provider events (same source_id and timestamp) are de-duplicated idempotently within a 5-minute window
Geofence/Route Mapping and Entrance-Level Resolution
Given upcoming caregiver routes and visit geofences configured for the next 6 hours When new normalized signals arrive Then signals are spatially joined to the nearest relevant route segment or visit geofence within 50 meters and time window overlap And for addresses with entrance/portal data available, the resolved point-of-entry is selected based on approach direction; ≥ 80% of such visits achieve entrance-level precision with accuracy ≤ 15 meters And when entrance data is unavailable, the building centroid is used with precision flag set to building_level and accuracy estimate provided And mapping decisions (match id, distance_m, method, precision) are stored with the signal for auditability And unmatched signals are logged with reason codes and account for ≤ 1% of total relevant events
Resilience: Retries, Circuit Breakers, Caching, and Graceful Degradation
Given transient provider failures (network timeouts or 5xx) When failures occur within a 60-second window Then up to 3 inline retries are attempted with exponential backoff (1s, 2s, 4s) and 20–40% jitter before deferring to the next poll And the circuit breaker opens after 5 consecutive failures within 60 seconds and remains open for 60 seconds, then transitions to half-open with 1 probe request And while the circuit is open, cached last-good data (TTL 60s, stale-while-revalidate up to 120s) is served downstream with snapshot.stale=true And during a simulated 10-minute provider outage, downstream consumers continue to receive snapshots every minute with stale=true and no process crashes or queue buildup beyond 2 minutes lag And upon recovery, the system clears stale flags and returns to normal within 2 polling intervals
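The circuit-breaker rules above (open after 5 consecutive failures, 60 s cooldown, then a half-open probe) could be modeled as a small state machine. A minimal Python sketch, with an injectable clock for testing; class and method names are illustrative:

```python
import time

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; after `cooldown` seconds
    it becomes half-open and allows a probe. Success closes it again."""
    def __init__(self, threshold=5, cooldown=60.0, clock=time.monotonic):
        self.threshold, self.cooldown, self.clock = threshold, cooldown, clock
        self.failures = 0
        self.opened_at = None  # None means closed

    @property
    def state(self):
        if self.opened_at is None:
            return "closed"
        if self.clock() - self.opened_at >= self.cooldown:
            return "half-open"
        return "open"

    def allow_request(self):
        return self.state in ("closed", "half-open")

    def record_success(self):
        self.failures, self.opened_at = 0, None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = self.clock()
```

While the breaker is open, the ingestion layer would serve the cached last-good snapshot with the stale flag set, as the criteria describe.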
Observability: Metrics, Logs, and Alerts
Given the ingestion pipeline processes signals continuously When observing system telemetry Then metrics are emitted at 30-second intervals: ingest_lag_seconds, normalized_records_count, provider_error_rate, rate_limit_backoffs, circuit_breaker_state, cache_hit_ratio, snapshot_freshness_seconds, and mapping_match_rate And structured logs include correlation_id, provider, route_id/visit_id (when applicable), request_id, and error_code for failures And distributed traces span provider call → normalization → mapping → snapshot publish with ≥ 95% sampling for errors and 5% baseline sampling for success And alerts fire within 2 minutes when provider_error_rate > 5% for 5 consecutive minutes, ingest_lag_seconds p95 > 120 for 5 minutes, or snapshot_freshness_seconds p95 > 120 for 5 minutes And all metrics and alerts are visible in the shared dashboard with runbooks linked in alert annotations
Signal Snapshot Attachment to Upcoming Visits
Given a scheduling window of the next 6 hours of visits When the ingestion cycle completes each minute Then each upcoming visit has an attached signal snapshot containing timestamp, providers_included, mapping precision, entrance flag, and ETA-related deltas And snapshots are updated idempotently (same minute replaces prior minute) and versioned with monotonic sequence numbers And ≥ 99% of visits have a snapshot with age ≤ 2 minutes; 100% have age ≤ 5 minutes under normal conditions And snapshots are retrievable via the Scheduling Service API within 200 ms p95 and 500 ms p99 And a full audit trail exists linking snapshot → normalized signals → raw provider records by correlation_id
Historical Dwell Time Modeling
"As a coordinator, I want ETAs to account for typical building delays so that arrival predictions are realistic and fewer visits run late."
Description

Build and maintain a per-location historical profile of dwell components (parking search, lobby check-in, elevator wait, security gates) aggregated by time-of-day and day-of-week. Learn from completed visits to estimate typical non-travel overhead for each address, complex, or facility, with outlier filtering and automatic decay of stale data. Expose a low-latency lookup that the prediction engine can combine with route travel times. Allow coordinators to annotate locations with access notes that can adjust dwell heuristics. Store only operationally necessary metadata to align with privacy policies.
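The lookup described above resolves a dwell estimate through a fallback chain (30-minute bin with enough samples, then a 2-hour super-bin, then a location-wide median, then a regional baseline). A minimal Python sketch under those assumptions; the data layout and function name are illustrative:

```python
from statistics import mean, median

def dwell_estimate(profile, dow, bin30, regional_baseline):
    """Resolve a dwell estimate (minutes) with the fallback chain:
    30-min bin (>=5 samples) -> 2-hour super-bin -> location median -> regional.
    `profile` maps (day_of_week, 30-min bin index) -> filtered samples."""
    samples = profile.get((dow, bin30), [])
    if len(samples) >= 5:
        return mean(samples), "bin"
    # 2-hour super-bin: the four 30-minute bins covering the same window
    start = (bin30 // 4) * 4
    super_samples = [s for b in range(start, start + 4)
                     for s in profile.get((dow, b), [])]
    if len(super_samples) >= 5:
        return mean(super_samples), "super_bin"
    all_samples = [s for v in profile.values() for s in v]
    if all_samples:
        return median(all_samples), "location"
    return regional_baseline, "regional"
```

Returning the source level alongside the estimate lets the prediction engine widen its variance when it is working from a coarser fallback.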

Acceptance Criteria
Per-Location Time-Sliced Dwell Profile Aggregation
Given completed visits at Location ID exist with component timestamps When the aggregation job runs within 5 minutes of each visit completion Then a per-location dwell profile is updated with day-of-week (Mon–Sun) and 30-minute time-of-day bins for components: parking, lobby check-in, elevator wait, security gate, and total dwell And each bin estimate is computed from filtered samples collected in the last 90 days And bins with >=5 valid samples use the mean of filtered samples; bins with <5 samples fall back to same-day-of-week 2-hour super-bin; if still <5 samples, fall back to location-wide median; if none exist, fall back to regional baseline And the updated profile version is available to the lookup service within 1 minute of aggregation completion
Learning Dwell Components from Completed Visits
Given a visit ends with caregiver check-out and route travel time is known When the ingestion pipeline processes the visit within 2 minutes Then component durations are derived from telemetry and events (e.g., GPS stop before arrival, lobby badge-in, elevator sensor) and mapped to parking, lobby, elevator, security gate, and total dwell And missing component signals result in component value = null while total dwell remains computed from non-travel overhead And duplicate telemetry for the same visit is de-duplicated so only one sample is stored per component per visit And samples are linked to a stable location_id that canonicalizes multiple entrances/units for the same facility
Outlier Filtering for Dwell Samples
Given a bin’s candidate samples over the last 90 days When the filter runs during aggregation Then samples beyond 3×MAD from the median or above the 99th percentile (whichever is stricter) are excluded from aggregates And the proportion of excluded samples is reported per bin and does not exceed 5% under nominal data conditions And when two synthetic outliers at 10× the bin median are injected into a test set of ≥30 samples, the resulting bin mean changes by ≤5% compared to the pre-injection baseline
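The filter above (drop samples beyond 3×MAD from the median or above the 99th percentile, whichever bound is stricter) could be sketched like this. A simplified Python illustration, not the production filter:

```python
from statistics import median

def filter_outliers(samples, k=3.0, pctl=0.99):
    """Keep samples within k*MAD of the median, capped at the pctl quantile
    (the stricter of the two upper bounds wins)."""
    if not samples:
        return []
    med = median(samples)
    mad = median(abs(x - med) for x in samples)
    ordered = sorted(samples)
    p99 = ordered[min(len(ordered) - 1, int(pctl * len(ordered)))]
    upper = min(med + k * mad, p99)  # stricter upper bound
    lower = med - k * mad
    return [x for x in samples if lower <= x <= upper]
```

MAD (median absolute deviation) is preferred over standard deviation here because a single 10× outlier barely moves it, which is exactly the robustness the synthetic-outlier test demands.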
Automatic Decay of Stale Data and Fallback Behavior
Given stored samples have timestamps When computing bin estimates Then sample weights decay exponentially with a half-life of 30 days And bins with an effective sample count (sum of weights) <5 fall back to the next broader context as defined (2-hour super-bin → location-wide → regional baseline) And if a location has no new visits for 90 days, its profile is marked Stale and the lookup returns regional baseline with reason=stale_profile And a sample that is 30 days old contributes 50% of the weight of a new sample; at 60 days, 25%
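The decay rule above is a standard exponential half-life: a sample's weight is 0.5 raised to (age / half-life), so a 30-day-old sample counts half as much as a fresh one and a 60-day-old sample a quarter. A one-line sketch plus the effective sample count used for the fallback threshold:

```python
def sample_weight(age_days, half_life_days=30.0):
    """Exponential decay with a 30-day half-life, per the criteria above."""
    return 0.5 ** (age_days / half_life_days)

def effective_count(ages_days, half_life_days=30.0):
    """Sum of weights; bins below 5 effective samples trigger the fallback."""
    return sum(sample_weight(a, half_life_days) for a in ages_days)
```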
Low-Latency Dwell Lookup API for Prediction Engine
Given the prediction engine requests a dwell estimate When it calls GET /dwell-estimate?location_id={id}&dow={0-6}&time={HH:MM} Then the service returns 200 with JSON including: location_id, time_bin, dow, components {parking,lobby,elevator,security_gate,total}, confidence [0–1], source_level {bin|super_bin|location|regional}, reason if fallback applied, profile_version And at 200 RPS with typical payloads, p95 latency ≤50 ms and p99 latency ≤120 ms measured over 10-minute windows And the service has ≥99.9% successful response rate over a rolling 30 days And when no location profile exists, the service returns regional baseline with source_level=regional and reason=no_profile
Coordinator Access Notes Adjust Dwell Heuristics
Given a coordinator with role=Coordinator adds an access note to a location with tags and adjustment rules (e.g., +3 min elevator weekdays 08:00–10:00) When the note is saved Then the adjustment applies to matching bins within 1 minute, modifying component and total estimates by the specified bounded delta (max |±10 minutes| or |±25%|, whichever smaller) And all note changes (create/update/delete) are audit logged with user, timestamp, old→new, and TTL And when multiple notes apply, deterministic precedence is applied: more specific time window > tag-specific > general; deltas are summed then clamped to bounds And only users with Coordinator or Admin roles can create/update/delete notes; Readers cannot And when a note expires (TTL reached) or is deleted, its adjustments no longer affect estimates
Privacy-Minimized Storage and Governance
Given privacy policies require minimal necessary data When storing dwell samples and aggregates Then only operational fields are stored: location_id (non-PII), facility_type, timestamps, day_of_week, time_bin, component durations, data_source, profile_version, anonymized caregiver role; no patient identifiers, no caregiver names, no raw audio, and no exact street address beyond the canonicalized location_id And data at rest is encrypted (AES-256) and in transit via TLS 1.2+ And access is role-scoped; only system services and authorized analysts can query raw samples; all access is logged And automated schema checks in CI fail builds if disallowed fields are introduced And raw sample retention ≤365 days and aggregated profiles ≤730 days, after which data is purged
Privacy‑Preserving Pace Profiling
"As a caregiver, I want the system to learn my typical pace so that ETAs and leave-now nudges fit how I actually move between visits."
Description

Compute caregiver-specific pace factors (walking speed, average parking time, readiness buffer) using recent, consented telemetry and visit outcomes, while protecting personal data. Use on-device or server-side aggregation that stores only derived pace coefficients, not raw location trails, and allow caregivers to opt out. Automatically adapt pace factors based on terrain, time-of-day, and vehicle mode. Provide APIs for the prediction engine to adjust travel and prep estimates per caregiver without exposing identifiable movement history.
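The privacy property described here, persisting only derived coefficients and discarding raw trails, can be illustrated with a minimal aggregation step. This is a sketch assuming simple means; the coefficient names follow the acceptance criteria, while the function and class structure are hypothetical:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class PaceCoefficients:
    walking_speed_mps: float
    avg_parking_minutes: float
    readiness_buffer_minutes: float

def derive_coefficients(walk_samples_mps, parking_samples_min, prep_samples_min):
    """Reduce raw telemetry samples to derived coefficients only; the caller
    discards the raw samples after this returns, so no location trail persists."""
    return PaceCoefficients(
        walking_speed_mps=mean(walk_samples_mps),
        avg_parking_minutes=mean(parking_samples_min),
        readiness_buffer_minutes=mean(prep_samples_min),
    )
```

Whether this reduction runs on-device or server-side, only the `PaceCoefficients` record (keyed by a hashed caregiver id) would ever reach durable storage.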

Acceptance Criteria
Store Only Derived Pace Coefficients
Given consented telemetry and visit outcome data are available for a caregiver When the pace model processes the data Then only derived coefficients (walking_speed_mps, avg_parking_minutes, readiness_buffer_minutes, vehicle_mode_biases, terrain_biases, time_of_day_biases, confidence_score) are persisted and linked to caregiver_id_hash And Then no raw GPS coordinates, route polylines, timestamp streams, accelerometer samples, or Wi‑Fi/Bluetooth scans are persisted to databases, object storage, analytics lakes, or logs And Then system and application logs redact any location payloads and store only event counts and sizes And Then a compliance query for that caregiver returns zero records containing fields lat, lon, altitude, speed_raw, or polyline
Consent Gating and Opt‑Out Handling
Given a caregiver has not provided telemetry consent When the app is used Then no telemetry is collected or transmitted and generic default pace coefficients are supplied to the prediction engine
Given a caregiver provides consent When telemetry is available Then collection and aggregation are enabled and the consent status change is audit‑logged with timestamp and version
Given a caregiver revokes consent When opt‑out is confirmed Then telemetry collection stops immediately, any cached raw telemetry is discarded, derived coefficients for that caregiver are deleted within 24 hours, and the prediction engine falls back to generic defaults within 5 minutes
And Then the caregiver can view and change consent in Settings with the current status displayed
Context‑Aware Pace Adaptation
Given labeled test routes covering terrain (flat, hilly, stairs/elevator), time‑of‑day (peak, off‑peak), and vehicle mode (walking, driving, transit) When pace factors are computed Then the model outputs context modifiers for terrain, time‑of‑day, and vehicle mode that adjust the base coefficients And Then ETA mean absolute error over the last 60 days is reduced by ≥10% versus a non‑contextual baseline for the same caregivers And Then predicted walking_time per 100 m differs by ≥30% between walking and driving contexts in the expected direction And Then each context bias is bounded within ±25% of its respective base coefficient
Pace Coefficients API (Privacy‑Preserving)
Given the prediction engine requests pace data for a caregiver via the API with valid authorization When the request is processed Then the response contains only non‑identifying fields: caregiver_id_hash, coefficients, context_modifiers, confidence_score, model_version, updated_at, ttl_seconds And Then the response contains no fields with raw location history (no lat/lon arrays, addresses, timestamp arrays, or polylines) And Then p95 latency is ≤200 ms and availability is ≥99.9% over a rolling 7‑day window And Then requests lacking scope predict:pace are rejected with 403 and are audit‑logged without payloads And Then ETag or versioning is provided to support caching and downstream reproducibility
Aggregation Modes and Raw Telemetry Handling
Given a device supports on‑device aggregation When telemetry is collected Then pace coefficients are computed on‑device and only derived coefficients and minimal diagnostics are uploaded; no raw telemetry leaves the device
Given on‑device aggregation is unavailable When telemetry is uploaded for server‑side aggregation Then raw telemetry is held only in volatile processing buffers and is deleted immediately after coefficient computation completes, with no persistence to disk
And Then processing pipelines enforce data retention rules that prevent any raw telemetry from being written to durable storage
And Then operational dashboards report zero bytes of raw telemetry at rest
Auditability and Deletion Verification
Given any consent change or coefficient update occurs When audit logs are queried by an authorized compliance role Then entries include timestamp, caregiver_id_hash, operation (consent_granted, consent_revoked, coefficients_updated, coefficients_deleted), actor, and outcome
Given a caregiver revokes consent When deletion jobs run Then a verifiable deletion record is created and a DSAR report shows no remaining derived coefficients for that caregiver within 24 hours
And Then scheduled compliance tests simulate consent revocation using a test caregiver and pass with 0 failures
Minute‑by‑Minute ETA Prediction Engine
"As a coordinator, I want accurate, continuously updating ETAs so that I can intervene early and keep visits on time."
Description

Produce rolling ETA forecasts for each upcoming visit by fusing live traffic, weather, historical dwell profiles, caregiver pace factors, and current location/route plan. Recalculate at 60-second cadence or on significant state change, with latency under 500 ms per prediction at target scale. Output predicted arrival time, variance, and confidence score, with fallback heuristics when inputs are missing. Provide versioned models and feature flags for safe rollout and A/B calibration. Integrate with CarePulse’s scheduling to select the next best route and with notification services for downstream nudges and alerts.
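At its core, the fusion step combines route travel time (scaled by the caregiver pace factor) with the historical dwell estimate, then widens variance and lowers confidence when inputs are degraded. A deliberately simplified Python sketch; the variance and confidence constants are illustrative, with only the flag-driven adjustments mirroring the fallback rules:

```python
from datetime import datetime, timedelta, timezone

def predict_eta(now, travel_minutes, dwell_minutes, pace_factor=1.0,
                input_flags=()):
    """Fuse travel time, dwell overhead, and pace into a predicted arrival;
    widen the 1-sigma variance when inputs are missing."""
    total = travel_minutes * pace_factor + dwell_minutes
    variance = max(1.0, 0.15 * total)  # illustrative baseline, minutes
    confidence = 0.9                   # illustrative baseline
    if "traffic_missing" in input_flags:
        variance *= 1.30               # variance up >=30% per fallback rule
        confidence -= 0.1
    if "weather_missing" in input_flags:
        variance *= 1.10               # variance up >=10% per fallback rule
    return {
        "predicted_arrival_time": now + timedelta(minutes=total),
        "variance_minutes": round(variance, 2),
        "confidence_score": round(confidence, 2),
        "input_flags": list(input_flags),
    }
```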

Acceptance Criteria
Cadence and Triggered Recomputations
Given an active route for a scheduled visit, When 60 seconds have elapsed since the last prediction, Then the engine computes and persists a new ETA. Given any of the following significant state changes: deviation >100 m from planned route, sustained speed change >25% for 15 seconds, a new traffic incident impacting the current route segment, a weather severity change to moderate or higher on any remaining segment, or a facility/parking geofence enter/exit, When detected, Then a new ETA is computed within 5 seconds. Given no significant state change and less than 60 seconds elapsed, When polled, Then no recomputation occurs. Given a caregiver with multiple upcoming visits, When recomputing, Then ETAs are updated for the next 3 upcoming visits in priority order.
Low-Latency Prediction at Target Scale
Given a steady-state load of 200 predictions per second with bursts to 500 predictions per second for up to 60 seconds, When executing predictions, Then p95 latency is ≤500 ms, p99 latency is ≤750 ms, and error rate (non-2xx) is <0.1% measured at the service boundary. Given upstream data providers respond slowly or time out, When computing a prediction, Then the engine returns a result using cached or fallback inputs within ≤500 ms p95. Given a cold start of a new service instance, When the first prediction request arrives, Then latency is ≤750 ms and stabilizes to ≤500 ms p95 within 2 minutes.
Output Contract and Confidence Calibration
Given a prediction request, Then the response includes: visit_id, caregiver_id, predicted_arrival_time (ISO 8601 UTC), eta_delta_minutes (signed), variance_minutes (float), confidence_score (0..1), risk_state {On-Time|At-Risk|Late}, model_version, input_flags[], recompute_reason, per_minute_deltas[0..15], and generated_at (ISO 8601 UTC). Given a rolling 14-day validation window with ground-truth arrivals, When evaluating predictions with confidence_score ≥0.8, Then the 80th-percentile absolute arrival error is ≤3 minutes. Given variance_minutes represents a 1-sigma interval, When evaluated over the same window, Then the true arrival falls within ±variance_minutes in 60%–76% of cases. Given configured thresholds On-Time (eta_delta_minutes ≤ +1), At-Risk (+2 to +4), Late (≥ +5), When classifying risk_state, Then 100% of assignments match the thresholds.
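The variance-calibration check above (true arrivals inside the ±1-sigma band 60%–76% of the time) is a simple coverage computation over the validation window. A sketch, with an illustrative function name:

```python
def sigma_coverage(errors_minutes, variances_minutes):
    """Fraction of arrivals whose absolute error falls inside the 1-sigma
    band; the criteria expect this to land between 0.60 and 0.76."""
    hits = sum(abs(e) <= v for e, v in zip(errors_minutes, variances_minutes))
    return hits / len(errors_minutes)
```

Coverage well above 76% means the engine is over-stating variance (badges look too cautious); below 60% means it is under-stating it (Late surprises).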
Fallback Behavior with Missing Inputs
Given live traffic data is unavailable for >120 seconds, When computing a prediction, Then historical segment speeds are used, input_flags includes "traffic_missing", variance_minutes increases by ≥30%, and confidence_score decreases by ≥0.1. Given weather data is unavailable, When computing a prediction, Then standard dry conditions are assumed, input_flags includes "weather_missing", and variance_minutes increases by ≥10%. Given GPS location is stale (>90 seconds since last fix), When computing a prediction, Then last known location and caregiver pace factor are used, input_flags includes "location_stale", and recomputation cadence reduces to every 180 seconds until fresh location resumes. Given all external inputs are unavailable, When computing a prediction, Then an ETA is returned using planned route and historical dwell profiles within latency SLO, and input_flags includes "degraded_mode".
Scheduling Integration for Next-Best Route
Given a caregiver has multiple sequenced upcoming visits, When the predicted ETA for the next visit is Late (eta_delta_minutes ≥ +5), Then the engine computes at least one next-best route option with predicted improvement ≥2 minutes and publishes the option(s) to the Scheduling API within 10 seconds. Given the Scheduling service accepts a proposed route option, When the acknowledgment event is received, Then the engine switches the active route for subsequent predictions within 5 seconds and records the option_id and model_version. Given no route option yields an improvement ≥2 minutes, When evaluating alternatives, Then no proposal is sent and telemetry records reason="no_benefit".
Feature Flags and Safe Rollout (A/B)
Given tenant-level feature flag eta_engine_v2 is OFF, When serving predictions, Then the baseline model_version is used for 100% of traffic and no A/B assignments occur. Given eta_engine_v2 is ON with a 50/50 split, When serving predictions, Then caregivers are deterministically bucketed, responses include variant assignment, and exposure events are logged with ≥99.9% success within 1 second. Given a rollback is initiated, When toggling the flag to 0%, Then 100% of new predictions switch to the baseline model_version within 1 minute without increased error rate. Given shadow evaluation is enabled at 5%, When serving predictions, Then paired predictions are produced for the shadow model in telemetry without affecting live results.
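Deterministic bucketing means the same caregiver always lands in the same arm for a given split, with no assignment state to store. A sketch of one common approach (hash-mod bucketing; names are ours, not a confirmed CarePulse implementation):

```python
import hashlib

def ab_bucket(caregiver_id: str, flag_on: bool, split_pct: int = 50) -> str:
    """Deterministically assign a caregiver to the 'v2' or 'baseline' arm.
    With the flag off (or at 0%), all traffic serves the baseline model."""
    if not flag_on or split_pct <= 0:
        return "baseline"
    # Stable hash -> bucket 0..99; the same ID always maps to the same arm.
    h = int(hashlib.sha256(caregiver_id.encode()).hexdigest(), 16) % 100
    return "v2" if h < split_pct else "baseline"
```

Toggling the flag to 0% satisfies the rollback criterion instantly, because the early return bypasses the hash entirely.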
Nudge and Alert Emission
Given risk_state transitions to At-Risk or Late, When a prediction is computed, Then an eta_risk_changed event is published to the Notification service within 2 seconds including visit_id, caregiver_id, risk_state, eta_delta_minutes, confidence_score, and suggested leave_by time when applicable. Given the computed leave-now threshold is reached (travel_time + expected dwell + buffer > time_to_appointment), When a prediction is computed, Then a leave_now nudge event is emitted at most once every 10 minutes per visit (debounced) until the threshold clears. Given risk_state improves within a 5-minute window (e.g., Late→At-Risk→On-Time), When emitting notifications, Then downgrade notifications are deduplicated so only the latest state is delivered. Given the Notification service is unavailable, When emitting events, Then events are queued and retried with exponential backoff for at least 15 minutes, and per-visit ordering is preserved upon recovery.
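The per-visit debounce on leave_now nudges can be sketched with a simple last-emission map (a minimal illustration under the 10-minute window stated above; names are ours):

```python
DEBOUNCE_S = 600  # at most one leave_now nudge per visit per 10 minutes

def should_emit_leave_now(last_emitted_at: dict, visit_id: str,
                          now_s: float) -> bool:
    """Debounce leave_now events per visit: emit only if none was sent
    in the last 10 minutes, and record the emission time when emitting."""
    last = last_emitted_at.get(visit_id)
    if last is not None and now_s - last < DEBOUNCE_S:
        return False
    last_emitted_at[visit_id] = now_s
    return True
```

Keying the map by visit_id keeps the debounce independent across a caregiver's visits, as the criterion requires.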
Risk Badge & ETA Delta Display
"As a coordinator, I want a clear risk badge and ETA delta for each visit so that I can triage quickly without digging into maps."
Description

Compute a simple status badge (On‑Time, At‑Risk, Late) from ETA vs. scheduled window and policy-specific thresholds, and display minute-by-minute deltas (e.g., +6 min) in caregiver and coordinator UIs. Support configurable thresholds per payer or agency policy and color/label mappings that meet accessibility standards. Surface confidence indicators and brief explanations (e.g., elevator wait risk high) to aid decision-making. Ensure lightweight rendering on mobile and efficient data subscription to minimize battery and bandwidth use.

Acceptance Criteria
Caregiver In-Transit: Live Badge and Delta Updates
Given a visit with a scheduled window (start, end) and policy thresholds for At-Risk and Late When the live ETA changes by ≥1 minute from the last displayed ETA Then the risk badge is recalculated and updated within 5 seconds according to the configured thresholds And the ETA delta label shows a signed minute value (e.g., +6, −3), rounded to the nearest minute And updates occur at most once per minute while in the foreground And the caregiver mobile view and coordinator dashboard reflect the same badge and delta within 10 seconds of each other And each badge state transition is logged with timestamp and reason code
Admin Config: Thresholds and Color/Label Mapping
Given agency defaults and payer-specific overrides for At-Risk and Late thresholds and for color/label mappings When an admin edits and saves configuration Then validation prevents save if thresholds are missing, negative, or overlapping And payer overrides take precedence over agency defaults for visits with that payer And the updated thresholds and color/label mappings take effect for new calculations within 1 minute of save And an audit log entry records who changed what, with before/after values And existing screens show the updated labels and colors without requiring app restart or page refresh
Accessibility Compliance: Badge Colors and Labels
Given the badge and ETA delta are rendered in light and dark modes on mobile and web Then badge text and background meet WCAG 2.1 AA contrast ≥ 4.5:1 in both modes And the state is conveyed via text label and icon in addition to color (no color-only cues) And screen readers announce "Status: <On‑Time|At‑Risk|Late>, ETA delta: <+N/−N> minutes" And the badge is keyboard focusable with a visible focus indicator and a minimum touch target of 44×44 px And dynamic type/text scaling up to 200% preserves readability and does not truncate the delta or state
Explanations and Confidence Indicators Display
Given a computed risk badge with contributing factors (traffic, weather, dwell-time, caregiver pace) When confidence is <60% or any single factor risk is high Then show a brief explanation chip of ≤50 characters (e.g., "Elevator wait risk high") And show a confidence indicator as text (Low/Medium/High) with a percentage and last-updated time And tapping the chip opens a details view listing the top 3 factors with their weights/impact And when confidence ≥80% and no high-impact factors exist, the explanation chip is hidden And the confidence and explanation update within 10 seconds of a new ETA calculation
Mobile Performance and Network Efficiency
Given the caregiver app runs on a reference low-end Android device (2 GB RAM, 2019) When the badge and ETA delta are updating once per minute during an active trip Then average CPU use attributable to the component is ≤3% over 5 minutes And incremental network usage for ETA/risk updates is ≤15 KB per minute per active trip via delta subscriptions And background mode reduces update frequency to ≤1 per 5 minutes and pauses location polling when unchanged for 2 minutes And battery drain attributable to the component is ≤2% per hour during active navigation over a 30-minute test And memory usage of the component remains ≤50 MB with no leaks after 30 minutes
Policy Thresholds: Edge Cases and Classification
Given agency thresholds: At‑Risk if predicted lateness is 1–5 minutes, Late if >5 minutes (example policy) And a visit window end is 10:00 When predicted ETA is 09:55 Then the badge shows On‑Time and the delta shows −5 When predicted ETA is 10:02 Then the badge shows At‑Risk and the delta shows +2 When predicted ETA is 10:07 Then the badge shows Late and the delta shows +7 And when predicted ETA equals exactly the window end time Then the badge shows On‑Time
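The worked examples above pin down the boundary behavior: lateness of exactly zero is On-Time, 1–5 minutes is At-Risk, and more than 5 is Late. A sketch that reproduces them (function name is ours; the policy is the example policy stated above):

```python
from datetime import datetime

def badge_for_eta(eta: datetime, window_end: datetime):
    """Classify a badge from predicted lateness vs. the window end, per the
    example policy: At-Risk for 1-5 minutes late, Late for >5, else On-Time.
    Returns (badge, signed delta in minutes)."""
    delta_min = round((eta - window_end).total_seconds() / 60)
    if delta_min <= 0:       # arriving by the window end, inclusive
        return "On-Time", delta_min
    if delta_min <= 5:
        return "At-Risk", delta_min
    return "Late", delta_min
```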
Preemptive Leave‑Now Nudges
"As a caregiver, I want timely leave-now prompts so that I can adjust my actions and avoid arriving late."
Description

Generate proactive notifications that tell caregivers when to depart to meet the next visit’s on-time window, based on predicted prep time, travel, and dwell. Support actionable options (Leave now, Snooze, Recalculate, Notify coordinator) with quiet hours, driving mode detection, and minimal interruption patterns. Escalate to coordinators if nudges are repeatedly ignored and lateness risk remains high. Localize content and honor caregiver preferences and compliance constraints. Log nudge timing and outcomes to improve future recommendations.

Acceptance Criteria
Leave‑Now Nudge Timing and Threshold
Given a scheduled visit with on‑time window and computed prep, travel, and dwell times When current time reaches the computed required departure time Then a “Leave Now” nudge is delivered within 5 seconds Given ETA risk changes from On‑Time to At‑Risk due to new data and departure is within 15 minutes When the risk transition is detected Then a preemptive nudge is sent within 60 seconds Given predicted on‑time arrival and required departure is more than 30 minutes away When evaluating nudge need Then no leave‑now nudge is sent Given GPS indicates the caregiver has already departed toward the destination When evaluating nudge suppression Then do not send a leave‑now nudge and show inline “En route detected” confirmation
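The required departure time in the first scenario is the visit start minus the computed lead times; the nudge fires the moment "now" reaches it. A minimal sketch (an illustration, not the production model; the optional buffer parameter is our assumption):

```python
from datetime import datetime, timedelta

def required_departure(visit_start: datetime, prep_min: float,
                       travel_min: float, buffer_min: float = 0.0) -> datetime:
    """Latest time the caregiver must leave to land in the on-time window:
    visit start minus prep, travel, and any safety buffer."""
    lead = timedelta(minutes=prep_min + travel_min + buffer_min)
    return visit_start - lead

def nudge_due(now: datetime, visit_start: datetime, prep_min: float,
              travel_min: float, buffer_min: float = 0.0) -> bool:
    """A leave-now nudge becomes due once now reaches the required departure."""
    return now >= required_departure(visit_start, prep_min, travel_min,
                                     buffer_min)
```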
Action Buttons: Leave Now, Snooze, Recalculate, Notify
Given a leave‑now nudge is displayed When the caregiver views the notification Then the actions Leave Now, Snooze, Recalculate, and Notify Coordinator are present and tappable Given the caregiver taps Leave Now When the action is processed Then navigation opens in the preferred maps app, caregiver status updates to “En route,” the nudge dismisses, and the action is logged with timestamp Given the caregiver taps Snooze When prompted for a duration Then options for 5/10/15 minutes are shown, the selected snooze suppresses further nudges for that duration, and the selection is logged Given the caregiver taps Recalculate When recomputation is triggered Then ETA and risk recompute within 5 seconds and the same notification updates in place without creating a new OS notification Given the caregiver taps Notify Coordinator When the request is sent Then the assigned coordinator receives a message with current ETA delta and risk badge within 10 seconds, and delivery is logged Given any action fails When an error occurs Then a retry option is presented and the error code is logged
Quiet Hours and Interruption Minimization
Given the current time is inside the caregiver’s configured quiet hours When a leave‑now nudge is generated Then deliver it as a silent notification to inbox/lock‑screen summary, with no sound/vibration, and show in‑app banner only if app is foreground Given multiple forecast changes within a short period for the same visit When sending nudges Then consolidate to at most 1 nudge per 10‑minute window unless required departure shifts by ≥3 minutes or risk worsens (On‑Time→At‑Risk or At‑Risk→Late) Given device is in Do Not Disturb and caregiver has not opted into critical overrides When a nudge would be sent Then suppress audible/vibration channels; if risk becomes Late within 10 minutes of visit start and caregiver opted in, send a critical alert override
Driving Mode Detection and Nudge Suppression
Given the device reports driving mode active or speed > 10 mph for ≥30 seconds When a leave‑now nudge would be shown Then show a minimal, non‑interactive banner or TTS‑safe prompt and defer action buttons until stationary Given the caregiver remains in driving state When further nudges are evaluated Then send no more than one driving‑mode‑safe prompt every 15 minutes Given speed drops below 3 mph for ≥20 seconds and the nudge is still relevant When regaining interactivity Then present the latest actionable nudge with full actions
Escalation After Ignored Nudges Under High Risk
Given two consecutive leave‑now nudges for the same visit have no action within 3 minutes each and risk is At‑Risk or Late When the second ignore window elapses Then notify the assigned coordinator within 10 seconds with caregiver’s ETA delta, risk badge, and last‑seen location accuracy Given a coordinator notification attempt fails When retry policy runs Then retry up to 3 times over 5 minutes with exponential backoff and log each failure Given visit start is more than 15 minutes away and risk is only On‑Time When evaluating escalation Then do not escalate until within 15 minutes of visit start or risk becomes Late Given escalation occurs When generating the audit trail Then link the triggering nudges, timestamps, delivery receipts, and coordinator acknowledgement in the visit timeline
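One reading of the escalation rules above: two ignored nudges escalate immediately when risk is elevated, while an On-Time visit defers escalation until the final 15 minutes before start. A sketch under that interpretation (names are ours):

```python
def should_escalate(ignored_count: int, risk_state: str,
                    minutes_to_start: float) -> bool:
    """Escalate to the coordinator after two consecutive ignored nudges.
    Elevated risk escalates at once; an On-Time visit waits until the
    visit start is within 15 minutes."""
    if ignored_count < 2:
        return False
    if risk_state in ("At-Risk", "Late"):
        return True
    # Risk still On-Time: defer until within 15 minutes of the visit start.
    return minutes_to_start <= 15
```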
Localization and Preference Honor
Given caregiver’s preferred language and locale are set When rendering any nudge Then title, body, and action labels are fully localized with correct date/time/number formats, with fallback to English on missing keys Given caregiver preferences for navigation app, snooze default, quiet hours, and escalation opt‑in are configured When a nudge is generated Then those preferences are applied, changes take effect within 1 minute of update, and persist across sessions Given accessibility needs When interacting with the nudge Then all actions have VoiceOver/TalkBack labels, meet WCAG AA contrast, and critical info is visible without truncation on 320‑px‑wide screens, including RTL support
Nudge Logging and Outcome Tracking
Given any nudge is generated or acted upon When logging the event Then store visit_id, caregiver_id, generated_at, predicted_departure_time, risk_before, risk_after (if updated), action_taken, action_timestamp, eta_at_action, driving_mode_flag, quiet_hours_flag, locale, device_id (hashed), and error_code (if any) Given intermittent connectivity When a log write fails Then queue and retry until success, with 99% of events persisted within 2 seconds and visible in analytics within 15 minutes Given nightly model updates When preparing training/AB datasets Then include nudge timing, actions, and outcomes; exclude PII per policy; and respect consent flags
Coordinator Triage Board
"As an operations coordinator, I want a consolidated view of at‑risk visits with quick actions so that I can prevent lateness across the shift."
Description

Provide a live board that ranks upcoming visits by lateness risk and ETA delta, with filters by caregiver, region, payer, and shift. Offer bulk actions (reassign, notify caregiver, update patient) and quick links to route alternatives. Show contributing factors to risk (traffic slowdown, high dwell, slow pace) to support rapid decisions. Syncs with CarePulse scheduling so changes propagate to mobile devices in real time. Include audit-friendly activity logs for all triage actions and outcomes.

Acceptance Criteria
Risk-Ranked Board Sorting and Badges
Given upcoming visits have computed risk badges (On‑Time, At‑Risk, Late) and ETA deltas When the triage board loads or live data refreshes Then visits are ordered by risk priority Late > At‑Risk > On‑Time and within each group by descending absolute ETA delta; ties break by nearest scheduled start time Given an ETA delta for any visit changes by ≥1 minute When live data is received Then the visit’s ETA delta and risk badge update and the list re-sorts within 60 seconds without a full page reload Given no live update is received for more than 120 seconds When the board is visible Then a visible stale-data indicator and last-updated timestamp are shown
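The three-level ordering above maps cleanly onto a composite sort key (a minimal sketch; the visit dict fields are our assumption about the board's data shape):

```python
RISK_ORDER = {"Late": 0, "At-Risk": 1, "On-Time": 2}

def triage_sort_key(visit: dict):
    """Board ordering: Late > At-Risk > On-Time, then descending absolute
    ETA delta within each group, ties broken by nearest scheduled start."""
    return (RISK_ORDER[visit["risk_state"]],
            -abs(visit["eta_delta_minutes"]),
            visit["scheduled_start"])
```

Usage: `board = sorted(visits, key=triage_sort_key)`. Because Python's sort is stable, visits identical on all three keys keep their prior relative order across live re-sorts, which avoids row flicker.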
Multi-Dimensional Filters and Counts
Given filters for caregiver, region, payer, and shift are available When the user applies any combination of filters Then only matching visits are displayed and the total result count updates accordingly Given filters are applied When the page is refreshed within the same session Then the last-applied filters persist until cleared Given the applied filters yield zero results When the board renders Then an empty-state message appears with an action to clear all filters
Bulk Reassign Action with Constraints
Given the user selects 1–50 upcoming visits When they choose Bulk > Reassign and select a target caregiver Then only caregivers without schedule conflicts and within the visit’s region are selectable; ineligible caregivers are disabled with a reason tooltip Given a valid reassignment is confirmed When the action completes Then each affected visit shows the new caregiver, and CarePulse scheduling and the caregiver mobile app reflect the change within 15 seconds; per-visit failures display with retry controls Then an audit entry is created per visit capturing actor, before/after caregiver, timestamp, and outcome
Bulk Notify Caregivers and Update Patients
Given the user selects one or more visits When Bulk > Notify Caregiver is sent using a selected template Then each caregiver receives one notification via their configured channel (in‑app/push/SMS) and delivery status (sent/failed) appears per recipient within 60 seconds Given selected visits include only patients with messaging consent When Bulk > Update Patient (ETA) is sent with a selected template Then patients receive an ETA update and each visit records message timestamp and channel; non‑consented patients are skipped with an explanatory reason Then all notifications create audit entries with template ID, recipient, timestamp, and outcome
Route Alternatives Quick Links
Given a visit is marked At‑Risk or Late When the coordinator opens Route Alternatives Then at least two alternatives are displayed within 5 seconds, each showing predicted arrival time and delta versus current route When the coordinator applies a selected alternative Then the caregiver’s route updates and the caregiver mobile app receives the new route within 15 seconds; if the mapping service fails, a clear error is shown and no change is saved Then the visit’s ETA delta recalculates and the board re‑sorts accordingly
Contributing Factors Attribution
Given a visit is At‑Risk or Late When the row is expanded or details are viewed Then the top 1–3 contributing factors (e.g., traffic slowdown, high dwell time, slow caregiver pace) are shown with quantitative metrics (e.g., mph vs baseline, minutes vs baseline, pace vs baseline) and last‑updated timestamps Given new telemetry or historical data updates a contributing factor When the factor value changes Then the factor list and metrics refresh on the board within 60 seconds Given a visit is On‑Time When details are viewed Then no contributing factors are displayed
Real-Time Sync and Audit Logging
Given any triage action (reassign, route change, caregiver notify, patient update) is confirmed When the change is committed Then the schedule backend and caregiver mobile reflect the change within 15 seconds; if the device is offline, the update is queued and delivered within 15 seconds of reconnection with a visible Pending sync status until delivered Given concurrent edits occur on the same visit When a conflict is detected Then the user is prompted with resolution options and no silent overwrite occurs; the final applied change is timestamped Then an immutable audit entry is recorded per action including timestamp (UTC ISO 8601), actor, action type, affected entities (visit, caregiver, patient), before/after values, and outcome; entries are viewable in the activity log and exportable to CSV for a specified date range

Credential Swap

When drift risk spikes, Credential Swap ranks backup caregivers by proximity, required credentials, client preferences, authorization fit, and overtime impact. It offers one‑tap propose/accept flows that auto-update schedules, EVV, and compliance notes, eliminating phone tag while ensuring substitutes are qualified and cost‑smart.

Requirements

Real-time Drift Risk Triggering
"As an operations manager, I want automatic alerts when a scheduled visit becomes at risk so that I can trigger a qualified swap before we miss on-time and compliance windows."
Description

Continuously monitors live route ETA, EVV pings, caregiver location variance, traffic incidents, call-off signals, and IoT sensor anomalies to compute a visit-level drift risk score. When a configurable threshold is exceeded, automatically initiates a Credential Swap evaluation. Integrates with CarePulse’s scheduling and routing services, using minute-level updates and debounced alerts to avoid noise. Supports configurable business rules (e.g., payor-specific on-time windows) and quiet hours. Emits structured events to drive ranking, notifications, and audit logging.

Acceptance Criteria
Trigger Swap on Risk Threshold Breach (Active Visit)
Given an active visit with live ETA, EVV pings, caregiver GPS, traffic feed, call-off signals, and IoT sensors connected And a risk threshold T and debounce window D minutes configured for the visit's payor When the computed drift risk score S >= T for at least D consecutive minutes Then emit a single DriftRiskExceeded event for the visit occurrence with an idempotencyKey And invoke the Credential Swap evaluation within 5 seconds of the qualifying reading And set the visit state to RiskEvaluationPending in scheduling And do not emit another DriftRiskExceeded until S < T for D consecutive minutes and then S >= T again And if a caregiver call-off signal is received, emit DriftRiskExceeded immediately and invoke evaluation, bypassing debounce
Debounced Risk Alerts Near Threshold
Given a debounce window D=2 minutes and a cooldown C=10 minutes configured When S rises above T for less than D minutes and falls below T Then no DriftRiskExceeded event is emitted When S crosses above and below T multiple times within C after a valid trigger Then only the first qualifying breach emits an event; subsequent breaches within C are suppressed And a new event may emit only after S remains below T for D consecutive minutes and C has elapsed
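The debounce-plus-cooldown behavior can be sketched as a small state machine sampled once per minute (a simplified illustration; the full criterion also requires S to drop below T before re-arming, which is omitted here for brevity):

```python
class DriftTrigger:
    """Emit DriftRiskExceeded only after the score stays >= threshold for
    D consecutive minutes, then suppress re-triggers for a C-minute cooldown.
    Scores are assumed to arrive once per minute."""

    def __init__(self, threshold: float, debounce_min: int, cooldown_min: int):
        self.t, self.d, self.c = threshold, debounce_min, cooldown_min
        self.above_streak = 0
        self.last_fired_at = None

    def observe(self, minute: int, score: float) -> bool:
        # Track consecutive minutes at or above threshold.
        self.above_streak = self.above_streak + 1 if score >= self.t else 0
        in_cooldown = (self.last_fired_at is not None
                       and minute - self.last_fired_at < self.c)
        if self.above_streak >= self.d and not in_cooldown:
            self.last_fired_at = minute
            self.above_streak = 0
            return True
        return False
```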
Business Rules: Payor On-Time Windows Applied
Given a payor rule with on-time window W minutes for visit start and max location variance V meters When computing S for a visit under that payor Then lateness and variance components use W and V from the payor rule And if no payor-specific rule exists, the system default rule is applied And the applied rule identifier is included in the DriftRiskExceeded event payload
Quiet Hours Behavior
Given organization quiet hours Q and notification channel configurations When a qualifying risk trigger occurs during Q Then initiate Credential Swap evaluation and write an audit log entry And suppress or queue user notifications per channel configuration without dropping the event And include quietHours=true and suppressionReason in the event payload
Signal Ingestion Resilience and Fallback
Given at least one signal among EVV, GPS, route ETA, traffic, call-off, or IoT has been received within the last 5 minutes When computing S Then calculate S from available signals and flag missingSources[] in the payload And if no signals have been received within the last 5 minutes Then do not compute S; do not trigger; emit an InsufficientSignals audit event with visitId and lastSeen timestamps
Structured Event Schema and Delivery Guarantees
Given a qualifying threshold breach When emitting DriftRiskExceeded Then include visitId, caregiverId, clientId, timestamp (UTC ISO-8601), riskScore, threshold, topContributors[], payorRuleId, geo.lat, geo.lon, debounceWindowId, idempotencyKey, correlationId in the payload And publish to ranking, notifications, and audit topics within 5 seconds P95 and 10 seconds P99 And ensure idempotent delivery using the idempotencyKey (no duplicates observed in consumers over 10,000 test messages) And retry with exponential backoff up to 5 attempts; on failure, route to dead-letter queue with error details
Minute-Level Risk Updates Performance and Accuracy
Given 1,000 concurrent active visits under typical load When computing drift risk continuously Then each visit's risk score updates at least once every 60 seconds P95 and 90 seconds P99 And per-visit compute latency is <=500 ms P95 and <=1,000 ms P99 And computed scores match reference fixtures within ±1 point across 50 representative test cases
Multi-factor Backup Ranking
"As a scheduler, I want a ranked list of qualified backups with clear reasons so that I can choose the best substitute balancing care quality, compliance, and cost."
Description

Generates a ranked list of substitute caregivers using weighted criteria: proximity to client and route alignment, required credentials and skill tags, client preferences (e.g., language, gender, continuity), authorization fit by payor/program, overtime and labor law impact, and schedule conflicts. Provides explanation-of-rank with factor breakdowns and blockers. Supports configurable weights per agency policy and payor. Integrates with staff profiles, credential vault, timesheets, scheduling, and payroll to ensure data freshness. Exposes API and UI components to consume the list in workflows.

Acceptance Criteria
Weighted Multi-Factor Ranking Generation
Given a drift risk spike is detected for a specific visit with client, location, scheduled time, payor, and required credentials And agency policy weights are configured for proximity, route alignment, credentials/skills, client preferences, authorization fit, overtime impact, and schedule conflicts And there are at least 10 eligible caregivers in the service area When the system generates the backup ranking Then it returns a list sorted by total weighted score (highest first) And includes all caregivers who meet mandatory constraints (active status, non-expired required credentials, authorization eligibility, no hard client constraints violated) within the configured search radius (default 30 miles) And excludes candidates with unresolvable schedule overlaps given travel times and required buffers; ties are resolved by lower ETA, then lower projected overtime cost, then higher continuity days, then lexicographic caregiver ID And completes generation within 1.5 seconds for a pool of up to 500 caregivers on median production hardware And logs the request with correlation ID, input parameters, factor weights applied, candidate count, and execution time
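The tie-break chain above (lower ETA, then lower projected overtime cost, then higher continuity, then lexicographic ID) translates directly into a composite sort key (a sketch; the candidate dict fields are our assumed shape):

```python
def rank_candidates(cands: list) -> list:
    """Order eligible backups by total weighted score (highest first);
    ties resolve by lower ETA, then lower projected overtime cost, then
    higher continuity days, then lexicographic caregiver ID."""
    return sorted(cands, key=lambda c: (-c["score"],
                                        c["eta_min"],
                                        c["ot_cost"],
                                        -c["continuity_days"],
                                        c["caregiver_id"]))
```

Negating the fields sorted descending (score, continuity) lets one ascending sort express the whole chain deterministically.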
Credential and Authorization Verification
Given a visit requires credential set X and payor Y with program Z authorization rules And caregiver profiles and the credential vault include credential types with issue/expiry dates and skill tags And authorization data includes remaining units/hours for the client under payor Y program Z When computing eligibility for ranking Then exclude any caregiver with missing or expired required credentials or missing required skill tags and annotate with blocker codes and expiry dates And exclude any caregiver whose assignment would exceed remaining authorized units/hours or violate payor-specific hard constraints And include only caregivers with background check and license statuses marked Active as of the visit date/time And produce a machine-readable blocker list and a human-readable explanation for each excluded caregiver
Client Preference and Continuity Scoring
Given the client's stored preferences specify language, gender, and continuity settings (hard vs soft) And historical visit data is available for caregiver-client pairings with last-visit dates When calculating preference and continuity contributions Then award the configured weight for matched language and gender; apply the configured penalty for soft preference mismatches; exclude candidates when hard preferences would be violated And add a continuity bonus proportional to recency and frequency (e.g., +X for last seen ≤14 days, +Y for ≥N visits in last 90 days) And include in the per-candidate explanation which preferences matched/missed and the computed continuity metrics
Overtime and Labor Law Impact Assessment
Given timesheets, scheduled hours, and payroll rules are available for the current pay period And jurisdictional labor-law rules (daily/weekly thresholds, rest periods, max hours) are configured for each caregiver When estimating the impact of assigning the visit to each candidate Then compute projected total hours including travel to determine overtime hours and cost at the caregiver's wage rate And exclude candidates when assignment would violate hard labor rules (e.g., rest period breach, max hours) and record a blocker code And apply the configured negative score for projected overtime while still allowing selection when policy permits And expose in the explanation the projected total hours, projected OT hours, and rules applied
Explanation-of-Rank via API and UI
Given a ranking has been generated for a visit When retrieving results via API and UI components Then each candidate item includes a factor_breakdown with, for each factor, the weight applied, raw value, normalized value (0..1), weighted contribution, and total_score And each excluded candidate appears in a blocked collection with blocker codes and remediation details when applicable And the API response conforms to versioned schema rank.v1 and returns within 500 ms for cached results and 2 s for fresh computations (P95) And the UI component displays expandable explanations and blocker tooltips meeting WCAG 2.1 AA and renders correctly on screens ≥360 px width
Configurable Weights by Agency and Payor
Given agency-level default weights and optional payor-specific overrides are configured and versioned And a visit is associated to an agency and payor When computing factor weights Then apply the most specific active configuration (payor override over agency default) and normalize weights to sum to 1.0 And reject activations of configurations with invalid ranges (weights outside 0..1) or missing mandatory factors with a validation error And record an audit trail for all configuration changes with actor, timestamp, before/after values, scope, and reason, retrievable for 12 months And provide a simulate mode that previews scores under an alternate configuration without persisting changes
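The precedence and normalization rules above can be sketched as follows (an illustration; simulate mode, versioning, and the audit trail are out of scope here):

```python
from typing import Optional

def resolve_weights(agency_defaults: dict,
                    payor_override: Optional[dict] = None) -> dict:
    """Pick the most specific active configuration (payor override wins
    over agency defaults), validate the 0..1 range, and normalize the
    weights to sum to 1.0."""
    weights = payor_override if payor_override else agency_defaults
    if any(not (0.0 <= w <= 1.0) for w in weights.values()):
        raise ValueError("weights must be within 0..1")
    total = sum(weights.values())
    if total == 0:
        raise ValueError("at least one factor weight must be positive")
    return {k: w / total for k, w in weights.items()}
```

Normalizing after selection means an override need not be authored to sum to 1.0; relative proportions are what the config expresses.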
Data Freshness and Integration SLA
Given integrations to staff profiles, credential vault, timesheets, scheduling, and payroll are active And a ranking request is initiated When sourcing data for scoring and eligibility Then use data no older than 5 minutes for dynamic sources (timesheets, scheduling, payroll) and 24 hours for static sources (licenses/credentials) or flag the item as stale in the explanation And fall back gracefully when a source is temporarily unavailable by marking affected factors as unknown, excluding only when policy requires, and emitting a degraded mode metric And emit observability metrics for data staleness, integration errors, and ranking latency with P95/P99 dashboards and alerting thresholds
One-tap Propose/Accept Flow
"As a caregiver, I want to accept or decline a proposed swap with one tap so that I can confirm availability quickly without phone calls or manual coordination."
Description

Delivers mobile push and in-app cards to candidate caregivers with a one-tap accept/decline action, auto-filling shift details and compensation. Supports timeboxed responses, cascading to next-ranked candidates, and optional parallel soft-holds with first-come-first-serve locking to prevent double assignment. Provides an ops-side dashboard to review responses and override when needed. Includes accessibility, localization, and offline-safe queued actions. Integrates with notification service, caregiver app, and scheduler UI.

Acceptance Criteria
Mobile Push Delivery with One‑Tap Actions
Given a caregiver is ranked as a candidate and has a valid device push token and notifications enabled When a shift offer is generated Then a push notification is sent within 5 seconds containing client initials, shift start/end, location (approximate), compensation, and one‑tap Accept and Decline action links And the action links are single‑use, signed for the intended caregiver, and expire when the response window closes And if delivery fails or the device is unreachable, the offer remains available via the in‑app offer card without duplicate offers Given the caregiver taps Accept from the push When the app is foregrounded via the deep link Then the Accept is submitted automatically without additional form steps and receives a server acknowledgment within 2 seconds on a healthy network And duplicate taps or retries are idempotently ignored and do not create multiple assignments Given the caregiver taps Decline When the app is opened via the deep link Then the decline is recorded with an optional reason code and the offer is no longer presented to that caregiver
In‑App Offer Card with Prefilled Shift Details
Given a caregiver is an active candidate for a shift When they open the caregiver app or receive the push Then an in‑app offer card is visible at the top of the home screen within 1 second of data sync And the card displays client initials only, shift date/time, estimated drive time from current/last known location, pay rate (including differentials), required credentials, and any special instructions And Accept and Decline buttons are prominent with a minimum 44x44pt touch target Given the caregiver taps Accept on the card When the confirmation sheet appears Then all fields (start time, end time, address, pay rate, EVV method) are prefilled and read‑only And confirmation requires at most one additional tap to finalize Given the response window has expired When the caregiver views the card Then the card shows "Offer expired" and no Accept/Decline actions are available
Timeboxed Responses and Ranked Cascade
Given a response window of N minutes is configured for the offer When the offer is sent to the top‑ranked caregiver Then a visible countdown timer shows the remaining time in the in‑app card And the Accept link expires precisely when the timer reaches zero
Given the top‑ranked caregiver has not responded when the window expires When the system cascades to the next ranked caregiver Then the next caregiver receives the offer within 5 seconds and the first caregiver’s offer is marked expired And cascade events continue until a caregiver accepts or the candidate list is exhausted
Given a caregiver explicitly declines before expiry When cascade runs Then that caregiver is excluded from subsequent rounds for this shift And all cascade steps are logged with timestamp, candidate ID, and rank for auditability
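The ranked cascade above can be sketched as a simple loop: offer to the top-ranked candidate, wait out the response window, and fall through to the next candidate on decline or expiry, logging every step. This is a minimal illustration, not the CarePulse implementation; `responder` stands in for the real push/response channel, and names like `Offer` are hypothetical.

```python
import time
from dataclasses import dataclass

@dataclass
class Offer:
    candidate_id: str
    rank: int
    sent_at: float
    expires_at: float
    state: str = "pending"  # pending | accepted | declined | expired

def run_cascade(candidates, window_seconds, responder, now=time.time):
    """Offer the shift to each ranked candidate in turn.

    `responder(candidate_id)` is a stand-in for the push/in-app
    response channel: it returns "accept", "decline", or None (no
    response within the window). Returns (accepted_offer_or_None,
    audit_log) so every cascade step is recorded for auditability.
    """
    log = []
    for rank, candidate_id in enumerate(candidates, start=1):
        sent = now()
        offer = Offer(candidate_id, rank, sent, sent + window_seconds)
        response = responder(candidate_id)
        if response == "accept":
            offer.state = "accepted"
            log.append(offer)
            return offer, log
        # decline or timeout: mark, log, and cascade to the next rank
        offer.state = "declined" if response == "decline" else "expired"
        log.append(offer)
    return None, log  # candidate list exhausted
```

A real system would drive the window with timers or a scheduler rather than a blocking call, but the ordering and audit-log shape carry over.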
Parallel Soft‑Holds with First‑Come‑First‑Serve Locking
Given parallel soft‑holds are enabled for a shift with K concurrent candidates When offers are sent to K caregivers simultaneously Then each candidate sees the offer with a "Held" badge and informative text that assignment is first‑come‑first‑serve
Given multiple caregivers attempt to accept within the window When the first valid Accept reaches the server and passes eligibility checks Then the shift is hard‑locked to that caregiver and all other candidates receive an "Unavailable" update within 3 seconds And late Accept attempts return a non‑error informational state with no assignment created And no more than one assignment can exist for the shift at any time
Given the lock is created When a duplicate or retried Accept arrives from any candidate Then it is idempotently ignored and recorded in the audit log without altering the assignment
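The first-come-first-serve lock reduces to a compare-and-set under mutual exclusion: the first Accept wins, a retry from the winner is idempotent, and later Accepts get an informational (non-error) outcome. A minimal in-process sketch, assuming a single server node; a production system would use a database row lock or conditional write instead of a thread lock.

```python
import threading

class ShiftLock:
    """First valid Accept wins; later or duplicate Accepts are
    acknowledged without ever creating a second assignment."""

    def __init__(self):
        self._lock = threading.Lock()
        self._assigned_to = None
        self.audit = []  # every outcome is recorded for the audit log

    def try_accept(self, caregiver_id):
        with self._lock:  # serializes concurrent Accepts
            if self._assigned_to is None:
                self._assigned_to = caregiver_id
                self.audit.append(("assigned", caregiver_id))
                return "assigned"
            if self._assigned_to == caregiver_id:
                self.audit.append(("duplicate_ignored", caregiver_id))
                return "already_assigned"  # idempotent retry
            self.audit.append(("late_accept", caregiver_id))
            return "unavailable"  # informational state, not an error
```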
Auto‑Update of Schedule, EVV, and Compliance Notes
Given a caregiver’s Accept is confirmed and assigned When the assignment is locked Then the scheduler UI shows the caregiver on the shift within 2 seconds of server confirmation And the caregiver’s route is updated to include the visit with correct geolocation and ETA And the EVV system is pre‑authorized for the assigned caregiver and device per agency rules And compliance notes are auto‑generated with the assignment event (timestamp, actor, method: push/in‑app, soft‑hold/lock, rank) Given the assignment is later overridden by ops to a different caregiver When the override is saved Then EVV authorization and compliance notes are updated to reflect the new caregiver and the previous assignment is clearly marked as superseded And all updates are fully audit‑logged with before/after values
Ops Dashboard Review and Override
Given an operations user opens the Credential Swap dashboard for an at‑risk shift When offers are active Then the dashboard lists candidate caregivers with rank, distance, credential fit, authorization status, overtime impact, current response state, and time remaining And the ops user can cancel an active offer, extend the response window, or manually assign a caregiver with one action Given the ops user manually assigns a caregiver while offers are active When the assignment is saved Then all outstanding offers are immediately closed and notified as "Unavailable" within 3 seconds And the assignment respects authorization, credential, and overtime constraints with warnings for any rule overrides requiring explicit confirmation And all actions capture user, reason code, and timestamp for audit
Accessibility, Localization, and Offline‑Safe Queued Actions
Given a caregiver uses screen readers (VoiceOver/TalkBack) or large text When viewing the offer card Then all actionable elements are labeled, focusable in logical order, meet WCAG 2.1 AA contrast, and support Dynamic Type without clipping
Given the caregiver’s device locale is supported (e.g., en, es) When the offer is displayed Then copy, date/time formats, currency, and pluralization are localized correctly
Given the caregiver is offline when tapping Accept or Decline When the app cannot reach the server Then the action is queued locally with a signed payload and retried automatically upon connectivity And upon reconnect, if the shift is already assigned to someone else Then the caregiver receives a non‑blocking notice that the offer is no longer available and no duplicate assignment is created
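The offline-queue behavior can be sketched as replaying pending actions on reconnect and converting "already assigned" results into non-blocking notices rather than errors. This is an illustrative simplification: `submit` stands in for the server call, payload signing and retry backoff are omitted, and field names like `offer_id` are assumptions.

```python
def flush_queue(queue, submit):
    """Replay locally queued Accept/Decline actions once connectivity
    returns. `submit(action)` is a stand-in for the server call and
    returns "ok" or "already_assigned". Actions that can no longer be
    honored yield a user-facing notice instead of an error, and no
    duplicate assignment is created server-side."""
    notices = []
    while queue:
        action = queue.pop(0)  # preserve the order actions were taken
        result = submit(action)
        if result == "already_assigned":
            notices.append(f"Offer {action['offer_id']} is no longer available")
    return notices
```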
Atomic Schedule & EVV Update
"As an operations manager, I want schedules, routes, EVV, and notes to update automatically after a swap so that there are no manual steps or compliance gaps."
Description

Upon acceptance, executes a transactional update that reassigns the visit, updates caregiver and client schedules, recalculates live routes, and enrolls the new caregiver in EVV for the visit. Auto-updates compliance notes with rationale and references to credentials and authorization checks. Performs idempotent writes with rollback on failure, emitting webhooks to billing, payroll, and payor portals when configured. Ensures all downstream systems reflect the swap within seconds to prevent check-in failures and documentation gaps.

Acceptance Criteria
Atomic reassignment and schedule update on swap acceptance
Given a scheduled visit assigned to Caregiver A and a pending swap to Caregiver B When Caregiver B accepts the swap Then the visit is reassigned to Caregiver B and removed from Caregiver A’s schedule in a single transaction And the client’s schedule reflects Caregiver B as the assigned caregiver And both caregiver schedules and the client schedule return the updated assignment via API and mobile within 2 seconds (p95) And if any sub-step fails, no schedules are changed and the user sees a failure message with a retry option
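The all-or-nothing requirement above is a classic compensation pattern: run each sub-step, and on any failure undo the completed steps in reverse so no partial state is ever visible. A generic sketch, not CarePulse's transaction layer; each step is modeled as a hypothetical (do, undo) pair.

```python
def atomic_swap(steps):
    """Run reassignment sub-steps in order; on any failure, undo the
    completed steps in reverse order so no partial update survives.
    `steps` is a list of (do, undo) callable pairs, e.g. reassign
    visit, update schedules, recalculate routes, enroll EVV."""
    done = []
    try:
        for do, undo in steps:
            do()
            done.append(undo)  # remember how to compensate this step
    except Exception:
        for undo in reversed(done):
            undo()  # roll back everything that succeeded
        return False  # caller surfaces a failure message with retry
    return True
```

Against systems that support it, a true database transaction is preferable; compensation is the fallback when the update spans multiple services (schedules, routes, EVV vendor).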
Live route recalculation after reassignment
Given optimized routes are active for both caregivers When the swap is accepted Then Caregiver B’s live route and ETAs are recalculated to include the visit and Caregiver A’s route is recalculated to exclude it And route versions increment and are returned by the Routes API within 5 seconds (p95) And the mobile map for both caregivers reflects the new routes on next sync (<5 seconds p95)
EVV enrollment and access control for reassigned visit
Given EVV is enabled for the organization When the swap is accepted Then Caregiver B is enrolled/authorized to check in/out for the visit and Caregiver A’s EVV access for that visit is revoked And the EVV vendor receives the updated assignment via integration within 5 seconds (p95) And Caregiver B can successfully initiate EVV check-in within 10 seconds of acceptance And no duplicate EVV records are created for the same visit
Compliance notes auto-update with rationale and references
Given credential validation and authorization checks are configured When the swap is accepted Then a compliance note is appended to the visit with substitution rationale, credential codes and expiration dates for Caregiver B, authorization ID/units, acceptance timestamp, actor, and correlation ID And the note is read-only and appears in audit-ready reports and the client record And if any required reference is missing, the swap fails atomically with no changes persisted
Idempotency and rollback guarantees
Given each swap accept request includes an Idempotency-Key When duplicate accept requests with the same key are received Then exactly one reassignment is committed and all responses reflect the same final state And if any part of the multi-system update fails (schedules, routes, EVV, compliance) Then all changes are rolled back and an error is returned with a correlation ID And no partial updates are visible in any downstream system
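The Idempotency-Key criterion means duplicate requests must not re-run the commit; they should replay the first response. A minimal sketch assuming an in-memory store; a real service would persist keys with a TTL and guard against concurrent first requests.

```python
class IdempotentStore:
    """Commit each swap at most once per Idempotency-Key; duplicate
    requests receive the original response, so all responses reflect
    the same final state."""

    def __init__(self):
        self._responses = {}

    def execute(self, key, commit):
        if key in self._responses:
            return self._responses[key]  # replay the first outcome
        result = commit()  # runs exactly once per key
        self._responses[key] = result
        return result
```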
Webhook emission and delivery guarantees
Given billing, payroll, and payor webhooks are configured When the swap is accepted and committed Then a signed at-least-once webhook event (event_type=swap.accepted) is sent to each configured endpoint with idempotency key, visit ID, caregiver IDs, timestamps, authorization details, and overtime flag And failed deliveries are retried with exponential backoff for up to 24 hours with dead-letter logging And duplicate deliveries carry the same event_id and are ignored by idempotent consumers And no webhooks are emitted when no endpoints are configured
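The retry policy above (exponential backoff bounded by a 24-hour window, then dead-letter) can be made concrete by computing the delay schedule up front. The base delay and factor here are illustrative defaults, not values from the spec.

```python
def backoff_schedule(base_seconds=30, factor=2.0, max_total_hours=24):
    """Exponential retry delays (30s, 60s, 120s, ...) accumulated until
    the next retry would push total elapsed delay past the 24-hour
    window; after the last delay the event is dead-lettered."""
    delays, total = [], 0.0
    delay = base_seconds
    while total + delay <= max_total_hours * 3600:
        delays.append(delay)
        total += delay
        delay *= factor
    return delays
```

Pairing each retry with the same `event_id` (as the criterion requires) is what lets idempotent consumers discard duplicates from the at-least-once delivery.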
Propagation latency and readiness for check-in
Given a caregiver may attempt to check in immediately after acceptance When the swap is accepted Then all downstream surfaces (mobile schedules, Visits API, EVV vendor) reflect the new assignment within 3 seconds (p95) and 6 seconds (p99) And EVV check-in by Caregiver B succeeds on first attempt and EVV by Caregiver A is rejected with a clear message And monitoring captures propagation SLOs with alerts when p95 > 3s for 5 minutes
Credential & Authorization Guardrails
"As a scheduler, I want the system to block unqualified swaps and surface missing requirements so that we remain compliant with client and payor rules."
Description

Validates that substitutes meet all hard constraints before proposals are sent: active license/credential status and expiry, skill/competency match, client-specific restrictions, payor authorization coverage, background check status, and union or policy rules. Blocks noncompliant proposals and highlights remediable gaps with guided actions (e.g., upload missing document). Integrates with credential vault, compliance rules engine, and authorization records. Supports real-time rechecks at accept time to prevent race conditions.

Acceptance Criteria
Pre‑Proposal Hard‑Constraint Gate
Given a coordinator selects a substitute for a specific visit When they attempt to send a proposal Then the system evaluates all hard constraints (license/credential status and expiry, skill/competency, client restrictions, payor authorization, background check, union/policy rules) before any message is sent And if any hard constraint fails or cannot be verified, the Send action is blocked and a list of blocking reasons is displayed with severity "Block" And for each blocking reason with a known remediation, a guided action (e.g., Upload Document, Request Authorization) is available and launches the correct workflow And no schedule, EVV, or compliance notes are created or updated until all checks pass
Active License & Credential Expiry Check
Given the visit requires specific credentials When validating a substitute Then each required credential must be present, active, and unexpired on the visit start datetime, verified via the credential vault And if any required credential is missing, inactive, or expired as of the visit, the candidate is blocked with reason "Credential not active/expired" And if credential verification cannot be retrieved, the candidate is blocked with reason "Unable to verify credential"
Skill/Competency Match Enforcement
Given the visit specifies required skills/competencies and levels When validating a substitute Then the candidate must have all required skills at or above the required level effective on the visit date And if any required skill is missing or below the required level, block with reason "Skill/competency mismatch" And if skill data cannot be fetched or evaluated, block with reason "Unable to verify skill"
Payor Authorization Coverage Validation
Given the visit has a payor, service type, date/time window, and remaining authorized units When validating a substitute Then confirm the substitute is covered for the service type and time window with sufficient remaining units and valid date range via authorization records And if coverage is partial, exhausted, or out of date range, block with reason "Authorization not covered" and show CTA "Request extension/authorization" And if authorization data cannot be retrieved, block with reason "Unable to verify authorization"
Client‑Specific Restrictions Compliance
Given client hard restrictions (e.g., do‑not‑send list, gender requirement, language requirement, home access constraints) When validating a substitute Then any violation blocks the candidate with a specific reason matching the violated rule And client preferences marked as hard rules must be enforced; soft preferences do not affect blocking And if restriction data cannot be retrieved, block with reason "Unable to verify client restrictions"
Background Check & Union/Policy Rule Compliance
Given organizational policy and union contract rules apply When validating a substitute Then confirm a valid background check exists through the visit date and that all applicable policy/union constraints (e.g., max hours, overtime bans, seniority requirements) are satisfied And any violation blocks with a specific reason (e.g., "Background check expired", "Overtime rule violation") And if policy evaluation cannot be completed, block with reason "Unable to verify policy compliance"
Real‑Time Recheck at Accept with Race‑Condition Guard
Given a proposal previously passed checks and was sent When the substitute taps Accept Then re‑run all hard‑constraint validations against the latest data before confirming acceptance And if any check now fails or cannot be verified, block acceptance, withdraw the proposal, notify both parties, and do not update schedule/EVV/compliance notes And if all checks pass, confirm acceptance and update schedule, EVV, and compliance notes atomically
Cost & Compliance Impact Preview
"As an operations manager, I want to see the cost and compliance impact of a potential swap so that I can make cost-smart decisions without risking violations."
Description

Before sending proposals, displays estimated cost deltas (overtime risk, mileage, travel time) and compliance indicators (on-time likelihood, rest-period rules, max-hours thresholds). Provides policy-driven recommendations (e.g., “lowest cost within on-time SLA”) and allows admins to set guardrails that prevent sending high-risk proposals without override. Integrates with payroll rates, mileage rules, labor law constraints, and the ranking engine for a holistic decision view.

Acceptance Criteria
Itemized Cost Delta Calculation and Display
Given caregiver-specific base rate, overtime rules, mileage rate, travel time pay rules, visit duration, and origin/destination are known When the Cost & Compliance Impact Preview is opened for a candidate Then the system computes and displays: base wage cost, OT premium (if weekly hours including this visit exceed the configured threshold), travel time cost (if enabled), mileage reimbursement at the configured rate, applicable differentials, and the total estimated cost And then the system calculates and displays the total cost delta versus the currently assigned caregiver for the same visit And then all monetary values are rounded to 2 decimals and labeled with the configured currency And then each cost component shows the inputs used and a reference to the applicable rule/policy
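The cost-delta arithmetic above is mostly an overtime split: hours pushed past the weekly threshold earn the premium multiplier, the rest earn the base rate, and mileage is added on top. A deliberately simplified sketch: travel-time pay and differentials are omitted, and the 40-hour/1.5x defaults are placeholders, not agency policy.

```python
def visit_cost(base_rate, visit_hours, weekly_hours_before,
               miles, mileage_rate, ot_threshold=40.0, ot_multiplier=1.5):
    """Estimate one candidate's cost for a single visit: base wage,
    OT premium on hours past the weekly threshold, and mileage
    reimbursement, rounded to 2 decimals as displayed in the preview."""
    projected = weekly_hours_before + visit_hours
    # only the portion of THIS visit that crosses the threshold is OT
    ot_hours = max(0.0, projected - max(ot_threshold, weekly_hours_before))
    regular_hours = visit_hours - ot_hours
    wage = regular_hours * base_rate
    ot_premium = ot_hours * base_rate * ot_multiplier
    mileage = miles * mileage_rate
    return round(wage + ot_premium + mileage, 2)
```

The displayed delta is then simply `visit_cost(candidate...) - visit_cost(current...)` for the same visit.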
Compliance Indicators: On-Time, Rest, Max-Hours
Given the caregiver’s schedule, planned route, EVV readiness, agency labor-law settings (rest period, max-hours), client authorization units, and on-time SLA are available When the preview is opened Then the system displays indicators for: On-time likelihood (High/Medium/Low with numeric probability), Rest-period compliance (pass/fail with required gap), Max-hours threshold (projected total vs limit), Authorization fit (in/out of auth units), and EVV enablement (ready/not ready) And then any failed or at-risk indicator is highlighted and includes the triggering rule and projected variance (e.g., -15 min rest gap, +2 hours over limit) And then if any mandatory compliance rule is failed, the proposal is marked High Risk
Policy-Driven Recommendation and Rationale
Given at least two eligible candidates and a selected policy (e.g., "Lowest cost within on-time SLA" or "Minimize overtime risk") When the preview loads or the policy is changed Then the recommended candidate is flagged, sorted first, and labeled with the policy name And then the rationale shows key metrics used (e.g., cost delta, on-time %, OT risk) and any trade-offs (e.g., +$3 vs cheapest but 98% on-time) And then changing the policy updates the recommendation and rationale within 1 second
Guardrails: Block High-Risk Proposals
Given configured guardrails (e.g., block if on-time likelihood < configured threshold, rest gap < configured hours, projected weekly hours > configured max, OT premium > configured amount) When the user attempts to send proposals and a candidate violates any guardrail Then the Send action is disabled for the violating candidate and a clear message lists the specific guardrail(s) failed And then candidates that pass all guardrails remain eligible to send And then guardrail definitions and thresholds are viewable via tooltip or link from the message
Admin Override with Justification and Audit Log
Given the user has Override Proposals permission and a candidate violates one or more guardrails When the user selects Override Then the system requires a justification (minimum 15 characters) before enabling Send And then upon sending, the system records user ID, timestamp, violated rules, preview snapshot (costs, indicators, policy), and resulting action in an immutable audit log And then the override is linked to the visit, caregiver, and client and appears in compliance reports filterable by date, user, and rule
Data Integration and Real-Time Recalculation
Given payroll rates, mileage rules, and labor-law constraints have last-updated timestamps and are retrievable When any relevant input changes (rate, schedule, distance, policy, guardrail, candidate selection) Then the preview recalculates all cost and compliance indicators within 2 seconds and refreshes displayed values And then the preview shows data freshness timestamps for each data source And then if any required data source is unavailable, the preview flags missing data and prevents Send until the data is restored or an admin override is provided
Ranking Engine Consistency and Proposal Eligibility
Given the ranking engine returns candidates with scores and attributes (proximity, credentials, client preferences, authorization fit, overtime impact) When the preview is opened Then the candidate order and scores match the ranking engine output for the same inputs And then applying a policy filter or guardrail reorders or filters candidates deterministically while preserving original ranking metadata And then only candidates passing guardrails are eligible for one-tap propose; attempts to send to ineligible candidates are blocked with the specific reason
Swap Audit Trail & Reporting
"As a compliance officer, I want a complete, exportable audit trail of each credential swap so that I can satisfy internal reviews and regulatory audits."
Description

Captures a tamper-evident trail of all swap events: risk detections, ranking snapshots, proposals sent, responses, approvals, assignments, and schedule/EVV updates with timestamps and user/system actors. Links each event to evidence (credentials, authorizations) and rationale. Provides one-click, audit-ready reports exportable to PDF/CSV and embeddable in CarePulse’s compliance reporting. Applies retention policies and access controls to protect PHI and meet HIPAA requirements.

Acceptance Criteria
Tamper-Evident Audit Log Integrity
Given a swap progresses through risk detection, ranking, proposal, response, approval, assignment, schedule update, and EVV update When each event is written to the audit log Then each event is stored as an append-only entry chained via a cryptographic hash to the previous event for the same swap_id And the event includes ISO 8601 UTC timestamp with millisecond precision, event_id, swap_id, event_type, actor_id, actor_type (user/system), and affected entity ids And invoking the integrity check for the swap_id returns status "OK" and the latest root hash when no tampering has occurred And if any prior event is altered or deleted, the integrity check returns status "FAIL" with the first invalid event_id and a security alert is emitted within 60 seconds
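The tamper-evidence requirement above is a hash chain: each entry's digest covers both its own payload and the previous entry's digest, so altering any earlier event invalidates every digest after it. A minimal sketch using SHA-256 over canonical JSON; the real event fields and alerting are out of scope here.

```python
import hashlib
import json

class AuditChain:
    """Append-only log where each entry's hash covers the previous
    entry's hash; verification walks the chain and reports the first
    invalid entry, mirroring the integrity-check behavior above."""

    def __init__(self):
        self.entries = []  # list of (event_dict, hex_hash)

    def append(self, event):
        prev_hash = self.entries[-1][1] if self.entries else "0" * 64
        payload = json.dumps(event, sort_keys=True) + prev_hash
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((event, digest))
        return digest

    def verify(self):
        prev_hash = "0" * 64
        for i, (event, stored) in enumerate(self.entries):
            payload = json.dumps(event, sort_keys=True) + prev_hash
            if hashlib.sha256(payload.encode()).hexdigest() != stored:
                return ("FAIL", i)  # index of first invalid event
            prev_hash = stored
        return ("OK", self.entries[-1][1] if self.entries else None)
```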
Complete Swap Event Capture
Given a swap is completed or canceled When querying the audit trail by swap_id Then the trail contains at least one event of each required type: RISK_DETECTED, RANKING_SNAPSHOT, PROPOSAL_SENT, RESPONSE_RECEIVED, APPROVAL_RECORDED, ASSIGNMENT_APPLIED, SCHEDULE_UPDATED, EVV_UPDATED And each event includes: timestamp (UTC), actor_id, actor_type, client_id, primary_caregiver_id, backup_caregiver_id (if applicable), rationale_id, and correlation ids And EVV_UPDATED events include vendor_ack_id, pre_value, post_value, and external_timestamp And events are ordered by timestamp and strictly increasing sequence number without gaps for the swap_id
Evidence and Rationale Snapshot Linking
Given a RANKING_SNAPSHOT or APPROVAL_RECORDED event exists When viewing the event details Then the event links to immutable evidence snapshots including caregiver credentials (issuer, credential_id, type, expiry), payer authorization (auth_id, coverage window, units remaining), and client preferences that influenced ranking And each link resolves to a versioned snapshot as of event time, even if the live source has changed And each snapshot hash recalculates to the stored hash value And attempts to delete or edit a referenced snapshot are blocked or result in a new version without altering the original
One-Click Audit Report Export (PDF/CSV)
Given a user selects a swap_id or date range and clicks Export When the export is initiated Then PDF and CSV files are generated within 5 seconds for up to 10,000 events and progress is displayed And the files contain all audit fields, maintain chronological order, and include a signature section with root hash, generation timestamp, requester id, and page count And CSV is UTF-8 with headers and RFC 4180 quoting; PDF text is searchable and passes PDF/A-2b validation And filenames include report type, swap_id or date range, and generation timestamp And AUDIT_EXPORT_REQUESTED and AUDIT_EXPORT_COMPLETED events are recorded with outcomes
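For the CSV half of the export, UTF-8 encoding, headers, chronological order, and RFC 4180 quoting fall out of Python's standard `csv` module, which quotes fields containing commas, quotes, or newlines automatically. A small sketch; the field list is illustrative, not the full audit schema.

```python
import csv
import io

def export_audit_csv(events, fieldnames):
    """Serialize audit events to UTF-8 CSV with a header row,
    chronological ordering, and RFC 4180-compliant quoting."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, extrasaction="ignore")
    writer.writeheader()
    # ISO 8601 UTC timestamps sort correctly as strings
    for event in sorted(events, key=lambda e: e["timestamp"]):
        writer.writerow(event)
    return buf.getvalue().encode("utf-8")
```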
Embedded Swap Audit in Compliance Reports
Given a user generates a Compliance Report with "Include Swap Audit Section" enabled When the report is rendered Then the embedded section shows swap counts, fill rate, mean time to fill, and compliance exceptions for the selected filters And each summary metric links to the detailed audit trail view for drill-down And totals and counts match the standalone audit export for the same filters And the section renders correctly in web and print outputs without truncation or overflow
Retention and Purge Policy (HIPAA-Aligned)
Given audit retention is configurable from 3 to 10 years with a default of 6 years When the scheduled purge job runs Then detailed audit entries older than the retention window are cryptographically tombstoned and removed from hot storage while preserving a non-PHI ledger of root hashes and counts And a PURGE_COMPLETED event is recorded with time range purged, item counts, and new root hash And requests for purged detailed entries return 410 Gone and are logged And integrity checks over remaining chains return status "OK"
Access Control and PHI Protections
Given a user attempts to access audit data or exports When the user has role Compliance Auditor, Admin, or Ops Manager with appropriate org and location scopes Then access is granted only to in-scope records; otherwise a 403 is returned and the attempt is logged And all views and exports show only minimum necessary PHI fields based on role And all accesses, denials, and exports are logged with user_id, purpose_of_use, IP, and timestamp And downloads are watermarked with requester identity and expire within 15 minutes And transport uses TLS 1.2+ and export files at rest are encrypted with AES-256 server-side keys

Geofence Snap

Builds smart geofences around entrances and client‑specific access points (gates, lobby doors, units) to counter GPS jitter. Automatically snaps arrival/leave events to the correct zone, preserving EVV accuracy on campuses and high‑rises. Cuts manual corrections and lowers audit risk.

Requirements

Snap-to-Zone Event Engine
"As a caregiver, I want my arrival and leave events automatically snapped to the correct access point zone despite GPS jitter so that my EVV logs are accurate without manual fixes."
Description

Implements the core snapping logic that converts noisy, time-sequenced location readings into accurate arrival and departure events aligned to the correct client-specific access point. The engine uses dwell-time thresholds, hysteresis, and conflict-resolution rules to prevent rapid in/out toggling and to select the best-matched zone when multiple geofences overlap. It runs on-device for low-latency decisions with a server-side validator for edge cases, ensuring EVV timestamps are precise and consistent across devices. It integrates with CarePulse schedules to bias snaps toward the assigned client during the scheduled window, and emits structured events that downstream documentation and compliance reporting consume. Expected outcomes include fewer manual corrections, higher EVV accuracy on campuses and high-rises, and reduced audit exposure.

Acceptance Criteria
Arrival and Departure Snap with Dwell and Hysteresis
Given device location readings at 1 Hz and a configured geofence zone When the device remains inside the zone boundary for ≥30 consecutive seconds with median horizontal accuracy ≤25 m Then emit exactly one arrival event with timestamp equal to the first instant the 30 s dwell condition becomes true And suppress additional arrival events while remaining in-zone
And when subsequent readings are outside the zone boundary for ≥20 consecutive seconds with median accuracy ≤25 m and the last point is ≥15 m beyond the boundary Then emit exactly one departure event with timestamp equal to the first instant the 20 s exit dwell condition becomes true And do not emit a new arrival for the same zone until either ≥60 seconds have elapsed since the last departure or displacement ≥25 m from the boundary is observed (hysteresis)
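The dwell logic above is a two-state machine: a run of consecutive in-zone samples must reach the enter threshold before an arrival fires, and a run of out-of-zone samples must reach the exit threshold before a departure fires, so brief jitter excursions never toggle state. A simplified sketch: accuracy filtering, the 15 m boundary margin, and the 60 s re-arm hysteresis are omitted, and `zone_contains` is a stand-in for the real containment test.

```python
def snap_events(samples, zone_contains, enter_dwell=30, exit_dwell=20):
    """Convert 1 Hz (timestamp, position) samples into arrival/departure
    events for one zone. `zone_contains(pos)` decides in/out; dwell
    thresholds (30 s enter, 20 s exit, per the criteria above) suppress
    jitter-induced toggling. Timestamps mark when the dwell condition
    becomes true."""
    events, state, run_start = [], "outside", None
    for t, pos in samples:
        inside = zone_contains(pos)
        if state == "outside":
            if inside:
                run_start = run_start if run_start is not None else t
                if t - run_start >= enter_dwell:
                    events.append(("arrival", t))
                    state, run_start = "inside", None
            else:
                run_start = None  # consecutive-sample run broken
        else:  # state == "inside"
            if not inside:
                run_start = run_start if run_start is not None else t
                if t - run_start >= exit_dwell:
                    events.append(("departure", t))
                    state, run_start = "outside", None
            else:
                run_start = None
    return events
```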
Overlapping Geofences with Schedule Bias
Given two or more client access-point geofences overlap at the device position And there is an active schedule window for client A within [start−15 min, end+15 min] When the arrival dwell condition is satisfied within the overlapping area Then select client A's zone unless another candidate's centroid is >15 m closer than A's centroid and A's zone is not entered (no majority of in-zone samples) And if multiple candidates remain within 5 m centroid distance, select the smallest-area zone And if still tied, select the zone with highest configured priority, else the earliest scheduled start And record bias_applied=true/false and confidence in the emitted event
On-Device Decisioning and Server-Side Validation
Given a candidate event with a single qualifying zone and confidence ≥0.70 When the dwell threshold is satisfied Then the on-device engine emits the final event within ≤2 seconds
Given a candidate event where the top two zones are within 8 m centroid distance and confidence difference <0.10 or GPS accuracy >50 m When evaluated Then the on-device engine marks the event pending and forwards for server-side validation And the server resolves the event (final zone and timestamp) within ≤10 seconds of receipt And in production telemetry over a rolling 7-day window, ≤1% of total events require server-side validation
Structured EVV Event Payload and Idempotent Delivery
Given an arrival or departure is emitted Then the payload contains non-null fields: event_id (UUIDv4), device_id, caregiver_id, schedule_id (if applicable), client_id, zone_id, event_type ∈ {arrival, departure}, occurred_at_utc (ISO 8601), emitted_by ∈ {on_device, server_validator}, horiz_accuracy_m, confidence ∈ [0,1], bias_applied (bool), method ∈ {snap, hard_geofence} And the payload validates against the published JSON Schema with no additional properties And duplicate submissions with the same event_id are ignored (idempotent) and do not create multiple records And per device per schedule, events are persisted in chronological order by occurred_at_utc And when offline, events are queued locally and transmitted within ≤60 seconds of network restoration, preserving occurred_at_utc for EVV
Cross-Device Timestamp and Zone Consistency
Given an identical labeled track is replayed on two different supported devices using the same configuration and clock source When the engine processes the track Then the selected zone_id for each event is identical across devices And the occurred_at_utc timestamps for corresponding events differ by ≤2 seconds And the confidence values differ by ≤0.05
Campus/High-Rise Jitter and Altitude Noise Robustness
Given a labeled test set of ≥1000 arrivals/departures across campuses and high-rises with 2D GPS jitter σ ≤30 m and vertical noise up to ±50 m When processed by the engine Then altitude is ignored in zone determination (2D snapping) And the correct zone is selected with ≥99.0% accuracy And rapid in/out toggling (false flaps) occurs in ≤0.5% of event windows And median absolute timestamp error relative to ground-truth labels is ≤5 seconds
Smart Geofence Builder
"As an operations manager, I want to quickly create and calibrate geofences around each client’s entrances and units so that visit events snap accurately in complex sites."
Description

Provides tools and APIs to create precise geofences around entrances, gates, lobby doors, and unit thresholds for each client location. Supports circles and polygons, configurable radii, anchor offsets, and metadata (access type, hours, door ID). Allows bulk creation from client addresses, imports from KML/GeoJSON, and calibration using historical caregiver traces and IoT sensor pings. Integrates with the location catalog in CarePulse so that schedules and EVV rules reference the same canonical geofences. Output geofences are versioned with change logs and safe-rollbacks to ensure continuity during audits.

Acceptance Criteria
Bulk Geofence Creation from Client Addresses
- Given a CSV with 1–5,000 client rows containing address, location_id, and access_type, When uploaded via UI or API, Then the system geocodes and creates circular geofences with a default 15 m radius for ≥98% of valid rows within 5 minutes and returns a summary of created/skipped/failed counts. - Given duplicate location_id rows or re-uploads with the same external_id, When processed, Then creation is idempotent and no duplicate geofences are created; existing geofences are left unchanged unless update=true is specified. - Given rows with invalid or ambiguous addresses, When processed, Then they fail with machine-readable error codes (e.g., GEOCODE_FAILED, AMBIGUOUS_ADDRESS) and a downloadable error report including row numbers. - Given a successfully created geofence, When saved, Then it is linked to the canonical location_id in the CarePulse location catalog and is retrievable via query by location_id within 60 seconds p95.
Manual Geofence Authoring with Circles, Polygons, and Anchor Offsets
- Given the map editor, When a user draws a circle or polygon with 3–50 vertices, Then validation prevents self-intersections and vertex snapping tolerance is ≤2 m. - Given a circle, When the radius is set between 5 m and 100 m, Then the persisted radius equals the input ±0.5 m, and the rendered area matches within 1 m Hausdorff distance. - Given an anchor offset (bearing in degrees and distance in meters), When configured, Then the snap point marker updates in real time and persists; subsequent reads return the exact bearing±1° and distance±0.5 m. - When saving a geofence, Then metadata fields access_type, hours (TZ-aware), and door_id are required and validated; save latency completes in <1.5 s p95 and returns the geofence_id and version.
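Both geofence shapes reduce to standard containment tests: a circle check is a haversine distance against the radius, and a small polygon can use 2D ray casting (curvature is negligible at geofence scale, consistent with the 2D snapping rule above). A sketch with standard formulas; coordinates are (lat, lon) in WGS84 degrees.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_circle(point, center, radius_m):
    """Circular geofence containment, e.g. the default 15 m radius."""
    return haversine_m(point[0], point[1], center[0], center[1]) <= radius_m

def in_polygon(point, vertices):
    """Ray-casting point-in-polygon test; adequate for small geofence
    polygons (3-50 vertices) where Earth curvature is negligible."""
    lat, lon = point
    inside = False
    n = len(vertices)
    for i in range(n):
        (lat1, lon1), (lat2, lon2) = vertices[i], vertices[(i + 1) % n]
        if (lon1 > lon) != (lon2 > lon):  # edge crosses the ray's longitude
            t = (lon - lon1) / (lon2 - lon1)
            if lat < lat1 + t * (lat2 - lat1):
                inside = not inside
    return inside
```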
KML/GeoJSON Import with Metadata Mapping
- Given a .kml or .geojson file up to 5 MB containing polygons and properties {door_id, access_type, hours, external_id}, When imported, Then all features are reprojected to WGS84 and created with geometry fidelity ≤1 m Hausdorff distance from source. - Given invalid geometries (self-intersecting polygons or malformed rings), When encountered, Then those features are rejected with error codes and line numbers while valid features continue; the import summary reports counts for created, updated, skipped, and failed. - Given features include external_id and update=true, When imported, Then existing geofences with matching external_id are updated in place with a new version; when update=false, they are skipped and reported as duplicates. - When the import completes, Then the operation is auditable with an import_id, actor, timestamp, and a downloadable detailed log.
Calibration from Historical Traces and IoT Sensor Pings
- Given ≥50 on-site historical GPS samples and ≥5 IoT door/sensor pings within the last 90 days for a location, When calibration is run, Then the suggested geofence encloses ≥95% of on-site samples and ≤2% of off-site samples from a 50 m buffer test set.
- When the user previews the suggestion, Then the UI displays before/after geometry, precision/recall metrics, and sample counts; accepting creates a new active version, rejecting makes no change.
- Given a calibration is accepted, When saved, Then a changelog entry records dataset IDs, metrics, actor, timestamp, and rationale; the new version becomes active within 10 seconds p95.
- Given insufficient data, When calibration is attempted, Then the system aborts with CALIBRATION_INSUFFICIENT_DATA and provides minimum thresholds required.
Versioning, Change Logs, and Safe Rollback
- Given any saved edit, When applied, Then version increments (vMAJOR.MINOR.PATCH) and records actor, timestamp, change summary, and diffs for geometry and metadata in an immutable changelog.
- Given an active version referenced by ongoing visits, When a newer version is activated, Then ongoing visits continue to resolve the prior active version until they end; new visits resolve the new version.
- When a rollback to a prior version is initiated, Then the selected version becomes active within 10 seconds and is logged with actor, timestamp, and reason; no data loss occurs.
- When exporting for audit, Then the system produces a report including full version history, diffs, and activation windows in CSV and JSON formats.
Location Catalog Integration and EVV Reference Consistency
- Given a schedule referencing a location_id, When EVV rules query for a geofence, Then the current active geofence version for that location_id is returned consistently via API and used for arrival/leave determinations.
- Given a geofence is archived or pending deletion, When referenced by any active or future schedules, Then deletion is blocked or requires selecting a replacement geofence to maintain referential integrity.
- When retrieving a location via API, Then the response includes geofence_version_id, metadata (access_type, hours, door_id), and ETag for caching; changes propagate to consumers within 60 seconds p95.
- Given a stale client cache, When ETag mismatch is detected, Then clients obtain the latest geofence and version without manual intervention.
Signal Fusion Positioning
"As a caregiver working in high‑rises and campuses, I want the app to use building and nearby signals in addition to GPS so that my arrivals and departures are recognized even when GPS is unreliable."
Description

Enhances position reliability by fusing GPS with Wi‑Fi fingerprints, BLE beacons, cell-tower triangulation, motion sensors, and optional IoT door or gate sensors. Implements a scoring model to select the most reliable signal set per environment (e.g., high‑rise, underground garage) and produces a confidence value passed to the snapping engine. Degrades gracefully when some signals are unavailable and respects user privacy by processing fingerprints on-device and minimizing raw location retention. Integrates with CarePulse’s device SDK and optional IoT gateways to improve snap accuracy, especially indoors.

Acceptance Criteria
High-rise lobby: Wi-Fi/BLE preferred over weak GPS
Given the device is in a high-rise lobby with GPS accuracy worse than 25 m or HDOP > 5 and at least 3 known Wi-Fi fingerprints or 2 calibrated BLE beacons are detected
When a fused position fix is requested
Then the scoring model prioritizes Wi-Fi/BLE over GPS in the selected sourceSet
And the reported accuracyMeters is <= 10 m against ground truth
And confidence is >= 0.8
And sourceSet includes ["wifi","ble"] and does not mark "gps" as primary
Underground garage: cell + IMU fallback
Given GPS and Wi-Fi are unavailable and at least 3 cell towers are detected and motion sensors are enabled
When the user moves at walking speed
Then the system emits a position fix at least every 5 seconds
And accuracyMeters is <= 50 m (median over 2 minutes)
And confidence is between 0.3 and 0.6
And sourceSet includes ["cell","imu"]
And no empty or null fixes are emitted
Campus entry: fuse IoT gate event with proximity
Given an IoT gate sensor emits an open event at timestamp T and the device is within 15 m of the gate beacon within ±10 s of T
When the fused position fix is computed
Then the scorer applies an IoT-based boost to the entrance zone and includes "iot" in sourceSet
And the position is within 10 m of the gate location
And confidence is >= 0.9
And the confidence value is available to the snapping engine within 200 ms via the SDK
Signal dropout: graceful degradation and continuity
Given active tracking and a progressive loss of signals (GPS off, Wi-Fi disabled, BLE out of range)
When only the last-known fix remains
Then the system retains the last-known fix for up to 60 seconds while decaying confidence by at least 0.1 every 15 seconds down to a floor of 0.2
And it will not switch zones across geofences more than 30 m apart without corroborating signals
And after 60 seconds without any signals, a FixUnavailable state is emitted with an error code
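The dropout decay rule above is small enough to sketch directly. This is an illustrative function, not the CarePulse SDK; the names are assumptions, and the constants mirror the criterion as written.

```python
FLOOR = 0.2          # minimum confidence while coasting on a stale fix
DECAY_STEP = 0.1     # confidence lost per decay interval
DECAY_INTERVAL_S = 15
MAX_HOLD_S = 60

def decayed_confidence(initial: float, seconds_since_fix: float):
    """Return decayed confidence for the last-known fix, or None once the
    hold window has elapsed and a FixUnavailable state should be emitted."""
    if seconds_since_fix > MAX_HOLD_S:
        return None
    steps = int(seconds_since_fix // DECAY_INTERVAL_S)
    return max(FLOOR, initial - DECAY_STEP * steps)
```

A fix that started at 0.9 confidence reads 0.7 after 30 s, bottoms out at the 0.2 floor, and is dropped entirely after 60 s.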
Privacy: on-device fingerprinting and minimal retention
Given the app performs Wi-Fi and BLE scans
When fingerprints are processed
Then raw BSSIDs/MACs are processed on-device and are not transmitted off-device
And raw scan records are retained no longer than 15 minutes or until aggregated fingerprints are computed, whichever is sooner
And only hashed fingerprint IDs and model weights are persisted or synced
And a privacy audit log shows zero outbound transmissions of raw identifiers during tests
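One way to derive a hashed fingerprint ID, as a sketch only: a keyed hash over the canonicalized scan so raw identifiers never need to leave the device. The per-device salt handling and the 32-character truncation are assumptions, not the shipped scheme.

```python
import hashlib
import hmac

def fingerprint_id(bssids: list[str], device_salt: bytes) -> str:
    """Derive a stable, non-reversible fingerprint ID from a Wi-Fi scan.
    Only this hash would be persisted or synced; raw BSSIDs stay on-device."""
    # Canonicalize so scan ordering and MAC casing do not change the ID.
    canonical = ",".join(sorted(b.lower() for b in bssids)).encode()
    # Keyed hash: the same venue yields different IDs on different devices,
    # preventing cross-device correlation of raw identifiers.
    return hmac.new(device_salt, canonical, hashlib.sha256).hexdigest()[:32]
```

The same scan always maps to the same ID on one device, while the salt keeps IDs uncorrelated across devices.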
Scoring model: environment-based selection and accuracy
Given environment profiles outdoor-open, high-rise-indoor, and underground with representative replay datasets
When the datasets are processed by the fusion engine
Then the selected sourceSet per profile is outdoor: ["gps","wifi"], high-rise: ["wifi","ble"], underground: ["cell","imu"]
And median horizontal error is <= 12 m (outdoor), <= 8 m (high-rise), <= 40 m (underground)
And Spearman correlation between (1 - confidence) and absolute error is >= 0.6
And profile thresholds are configurable via remote config and take effect within 10 minutes of update
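A minimal sketch of environment-based source selection: weight each available signal by a per-profile table and keep the top candidates. The weight values and the scoring rule here are entirely assumed for illustration; the spec only fixes the expected winners per profile, not the model.

```python
# Assumed weight table -- illustrative values only, not the shipped model.
PROFILE_WEIGHTS = {
    "outdoor-open":     {"gps": 1.0, "wifi": 0.6, "ble": 0.3, "cell": 0.2, "imu": 0.1},
    "high-rise-indoor": {"wifi": 1.0, "ble": 0.9, "gps": 0.3, "cell": 0.4, "imu": 0.2},
    "underground":      {"cell": 1.0, "imu": 0.8, "gps": 0.0, "wifi": 0.1, "ble": 0.1},
}

def select_source_set(profile: str, available: dict[str, float], k: int = 2):
    """Pick the k best sources; `available` maps source -> signal quality 0..1."""
    weights = PROFILE_WEIGHTS[profile]
    scored = sorted(
        ((weights.get(src, 0.0) * quality, src) for src, quality in available.items()),
        reverse=True,
    )
    return [src for score, src in scored[:k] if score > 0]
```

With weak GPS in a high-rise, Wi-Fi and BLE win even at moderate quality, matching the expected sourceSet above.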
SDK contract: fix schema and confidence delivery
Given a client app calls SDK method getPositionFix()
When a fix is returned
Then the payload includes latitude, longitude, accuracyMeters, confidence (0..1), timestamp (ISO-8601), sourceSet (array), and privacyFlags
And the confidence value is forwarded to the snapping engine within 200 ms of fix time
And p99 time-to-first-fix is <= 8 seconds with at least one signal source available
And a documented error code is returned when no signals are available
Hierarchical Geofence Support
"As an operations manager, I want hierarchical geofences with clear precedence so that events snap to the correct unit within a campus or high‑rise."
Description

Supports nested and overlapping geofences with precedence rules (campus → building → floor → unit) and vertical awareness using barometer and Wi‑Fi cues. Defines tie-breakers such as smallest-area-wins, proximity-to-anchor, and schedule affinity to ensure snaps resolve to the correct unit within dense complexes. Exposes hierarchy in both mobile SDK and admin tools so downstream EVV and reporting can attribute time to the correct client and sub-location. Includes safeguards to prevent cross-client contamination when adjacent tenants share walls or entrances.

Acceptance Criteria
High-Rise Arrival: Snap to Correct Unit Using Vertical Signals and Schedule Affinity
Given hierarchical geofences are configured for campus C > building B > floor F7 > unit U712 with doorway anchors and vertical signals
And a visit is scheduled for client X at U712 within the next 30 minutes
And the device has barometer and Wi‑Fi enabled while GPS accuracy is worse than 50 m
When the caregiver enters building B and crosses the doorway anchor of unit U712
Then the system snaps the arrival to unit U712 (not campus/building/floor)
And the event timestamp is within 10 seconds of the doorway crossing
And the EVV record includes hierarchyPath=C>B>F7>U712, clientId=X, and confidence ≥ 0.80
And no manual correction flag is set
Overlapping Geofences: Deterministic Precedence and Tie-Breaker Ordering
Rule order:
1) Schedule affinity: If within ±45 minutes of a scheduled visit and the scheduled unit’s geofence contains the point or is within 50 m, prefer the scheduled unit
2) Smallest-area-wins: If multiple candidates remain, select the smallest-area geofence
3) Proximity-to-anchor: If still tied, select the unit whose nearest anchor is closest (≤ 8 m)
4) Vertical likelihood: Prefer the unit whose floor estimate matches barometer/Wi‑Fi-derived floor within ±1 floor
5) Hysteresis: Maintain the current selection unless a different unit is favored for ≥ 15 s
Determinism:
- Identical inputs yield identical selections; each decision includes a reason code reflecting the top winning rule
Stability:
- Selection does not oscillate under GPS jitter up to 30 m HDOP during a 60 s window
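The precedence cascade can be sketched as successive filters that stop at the first rule leaving one candidate. This is a hypothetical illustration: the Candidate shape, field names, and the lexical fallback are assumptions, and rule 5 (hysteresis) is omitted here because it is stateful.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    unit_id: str
    area_m2: float
    anchor_distance_m: float
    floor_delta: int          # |estimated floor - unit floor|
    schedule_affinity: bool   # within ±45 min and point in/near scheduled unit

def resolve_unit(candidates: list) -> tuple:
    """Return (winning unit_id, reason code for the deciding rule)."""
    pool = list(candidates)
    if not pool:
        return None, "NO_CANDIDATES"
    # Rule 1: schedule affinity
    preferred = [c for c in pool if c.schedule_affinity]
    if len(preferred) == 1:
        return preferred[0].unit_id, "SCHEDULE_AFFINITY"
    if preferred:
        pool = preferred
    # Rule 2: smallest-area-wins
    min_area = min(c.area_m2 for c in pool)
    pool = [c for c in pool if c.area_m2 == min_area]
    if len(pool) == 1:
        return pool[0].unit_id, "SMALLEST_AREA"
    # Rule 3: proximity-to-anchor
    min_dist = min(c.anchor_distance_m for c in pool)
    pool = [c for c in pool if c.anchor_distance_m == min_dist]
    if len(pool) == 1:
        return pool[0].unit_id, "ANCHOR_PROXIMITY"
    # Rule 4: vertical likelihood (±1 floor)
    matching = [c for c in pool if c.floor_delta <= 1]
    if len(matching) == 1:
        return matching[0].unit_id, "VERTICAL_MATCH"
    # Assumed deterministic fallback so identical inputs always tie-break identically.
    pool = matching or pool
    return min(pool, key=lambda c: c.unit_id).unit_id, "LEXICAL_FALLBACK"
```

Each decision carries a reason code for the winning rule, matching the determinism requirement above.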
Shared Entrances: Prevent Cross-Client Contamination Between Adjacent Units
Given adjacent units U711 (client A) and U712 (client B) share a hallway and wall with anchors 3 m apart
And the caregiver is scheduled for client B at U712
When the device is within the shared hallway area
Then the system must not attribute to client A unless the U711 doorway anchor is crossed and schedule affinity for U712 is not active
And a 2 m guard zone around shared boundaries prevents attribution change unless dwell time ≥ 20 s inside the new unit
And Wi‑Fi whitelist and RSSI delta (≥ 6 dB stronger) must support any switch between adjacent units
And EVV records never contain clientId=A for this session unless the above conditions are met
And a contamination alert is logged if conflicting signals persist ≥ 60 s without changing attribution
SDK Payload: Emit Hierarchical Attribution and Decision Trace
When the SDK emits an arrival or leave event
Then the payload includes: eventId, timestampUTC, clientId, hierarchyPath [campusId, buildingId, floorId, unitId], geofenceIds, decisionTrace (ordered rules and outcomes), sensorsUsed, verticalEstimate {floor, method, accuracyMeters}, confidence [0..1], snapLatencyMs, appVersion, sdkVersion, deviceModel
And the payload validates against JSON schema version 1.2
And if vertical signals are unavailable, verticalEstimate.method="gps-only" and confidence ≤ 0.60
And the SDK emits within 5 s of the snap; the server receives within 30 s or marks the event as queuedOffline=true
Admin Tools: Visualize and Edit Hierarchy Without Breaking EVV Attribution
Given an admin edits hierarchy or anchors in the Geofence editor
When changes are saved
Then a new hierarchy version is created and published atomically
And historical EVV records remain attributed to the prior version; future snaps use the new version
And the admin can run a 15-minute playback with sensor breadcrumbs and a snaps overlay
And an audit CSV export for 5,000 records completes in ≤ 10 s and includes decisionTrace and hierarchyPath
And only users with role=Ops Admin can edit; others have read-only access
Offline Capture: Deferred Snapping with Sensor Trace and Idempotent Replay
Given the device is offline for up to 2 hours
When arrival and leave events occur
Then the SDK records sensor traces (barometer 1 Hz, Wi‑Fi scans 0.2 Hz, GPS as available) with timestamps
And on reconnection, the SDK computes snaps locally with the same rules and uploads them in order
And event timestamps reflect boundary crossings, not upload time
And duplicate uploads are ignored via idempotencyKey
And total storage used per 8-hour shift ≤ 25 MB; if exceeded, the oldest non-critical traces are pruned post-finalization
Performance and Battery: Low-Latency Snapping Under Resource Constraints
Latency:
- Snap decision P50 ≤ 2 s; P95 ≤ 5 s from boundary crossing
Resource usage:
- Average CPU per decision ≤ 50 ms on mid-tier Android; background energy overhead ≤ 3% per 8-hour shift
Scalability:
- Support evaluation of up to 10,000 geofences per campus with RAM < 50 MB and zero missed enter/exit events in test runs
Throughput:
- Server-side replay processes ≥ 100 decisions/s
Configurability:
- All thresholds are remotely configurable and applied within 10 minutes of change
Offline Capture & Sync
"As a caregiver with spotty coverage, I want arrival and leave events to be captured and snapped offline so that my EVV remains complete and compliant."
Description

Enables local caching of geofences and on-device snap decisions when the device is offline or experiencing poor connectivity. Buffers raw signal samples and snapped events securely on the device, resolves conflicts when connectivity returns, and reconciles with the server to maintain a single source of truth. Implements battery-aware sampling profiles and backoff strategies to preserve device battery while maintaining EVV compliance. Ensures no data loss and accurate timestamps that align with compliance reporting requirements.

Acceptance Criteria
Offline Geofence Cache Availability
- Given a caregiver has launched the app and completed a successful sync within the last 72 hours When the device is offline Then the app shall provide access to all geofences for the caregiver’s scheduled clients for the next 7 calendar days and any geofences visited in the last 30 days
- Given geofence definitions are updated on the server while the device is offline When the device reconnects Then the app shall refresh the local cache within 30 seconds and mark any superseded local geofences as stale and no longer used for snap decisions
- Given local cache storage constraints When the number of cached geofences exceeds limits Then the app shall evict least‑recently‑used geofences not scheduled in the next 7 days, without removing any geofences scheduled within the next 7 days
- Given a device has never synced When it is offline Then snap decisions shall be disabled and the UI shall display an offline‑cache‑missing state without blocking other app functionality
On-Device Snap Decisions Without Connectivity
- Given the device is offline and the caregiver enters a geofence boundary When the median of the last 5 location fixes (from GPS/BLE/Wi‑Fi) falls within the geofence polygon for at least 10 consecutive seconds Then an offline arrival event shall be created with snap_source=on_device and accuracy metadata (hdop, provider, samples_used)
- Given the device is offline and the caregiver leaves a geofence boundary When the median of the last 5 location fixes falls outside the polygon by at least 15 meters for 15 consecutive seconds Then an offline leave event shall be created
- Given GPS jitter near zone edges When conflicting in/out samples occur within 10 seconds Then hysteresis rules shall favor the prior state unless a 10‑second dwell threshold is met to transition
- Given multiple overlapping client access points (e.g., gate vs unit) When arrival is detected Then the event shall snap to the smallest‑area polygon that contains the median fix and is associated with the current visit’s client
Secure Buffering of Raw Signals and Events
- Given the device is offline When location and sensor samples are captured Then raw samples shall be buffered at a minimum of 1 Hz near active geofences and 0.1 Hz otherwise, up to at least 10,000 samples or 30 days, whichever comes first
- Given any offline arrival/leave event is created When it is stored locally Then it shall be encrypted at rest using platform keystore–protected AES‑256‑GCM with per‑record IVs and integrity‑checked via HMAC, and retrievable only by the app while the device is unlocked
- Given storage limits are approached (90% of allocated offline buffer) When additional samples/events are captured Then oldest raw samples (not events) shall be dropped first; no snapped events shall be dropped until successfully synced and acknowledged
- Given a user logs out When local data is cleared Then unsynced events and their raw samples shall not be deleted and shall remain queued for sync on next authorized login; the logout UI shall warn about pending unsynced data
Conflict Resolution on Reconnect
- Given offline arrival/leave events exist for a visit and the server also has events for the same visit When the device reconnects and syncs Then the resolver shall deduplicate events using event UUID and zone_id, keeping at most one arrival and one leave per visit per zone
- Given two candidate arrival events for the same visit and zone within a 2‑minute window When resolving Then the earliest timestamp (after clock‑offset correction) shall be kept and the other marked as duplicate
- Given arrival and leave events overlap (arrival after leave due to clock drift) When resolving Then server‑provided time offset shall be applied; if still overlapping, adjust the leave to be ≥ arrival + 1 second and flag as corrected
- Given events for overlapping zones (gate and unit) for the same visit When resolving Then precedence shall be unit > lobby > gate; lower‑precedence events shall be retained in audit trail but excluded from compliance summaries
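A reconnect-time resolver combining these rules might look as follows. This is a sketch under assumed event shapes and field names, not the CarePulse schema; timestamps are assumed to be offset-corrected already.

```python
ZONE_PRECEDENCE = {"unit": 0, "lobby": 1, "gate": 2}

def resolve_visit_events(events):
    """events: dicts with uuid, kind ('arrival'|'leave'), zone_type,
    ts (epoch seconds). Returns (arrival, leave, corrected_flag)."""
    # Deduplicate by UUID: idempotent uploads may repeat the same record.
    seen, unique = set(), []
    for e in events:
        if e["uuid"] not in seen:
            seen.add(e["uuid"])
            unique.append(e)
    # Keep only the highest-precedence zone type present (unit > lobby > gate).
    best = min(unique, key=lambda e: ZONE_PRECEDENCE[e["zone_type"]])["zone_type"]
    pool = [e for e in unique if e["zone_type"] == best]
    arrivals = sorted((e for e in pool if e["kind"] == "arrival"), key=lambda e: e["ts"])
    leaves = sorted((e for e in pool if e["kind"] == "leave"), key=lambda e: e["ts"])
    arrival = arrivals[0] if arrivals else None   # earliest arrival wins
    leave = leaves[-1] if leaves else None
    corrected = False
    if arrival and leave and leave["ts"] < arrival["ts"] + 1:
        # Clock drift: enforce leave >= arrival + 1 s and flag the correction.
        leave = {**leave, "ts": arrival["ts"] + 1}
        corrected = True
    return arrival, leave, corrected
```

Lower-precedence events (e.g., the gate arrival) would be retained separately for the audit trail rather than discarded.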
Server Reconciliation and Idempotent Sync
- Given the device has N unsynced events and M raw sample batches When connectivity is restored Then the client shall POST them in chronological order with idempotency keys (UUIDv4) and retry using exponential backoff from 1 s to 60 s until a 200/201 is received
- Given the server successfully persists an item When it returns an acknowledgment with the item’s UUID Then the client shall mark the item as synced and shall not resend it; resubmission attempts with the same UUID shall receive 200 and produce no duplicates
- Given sync completes When the client fetches the visit timeline Then exactly one canonical arrival and one leave per visit shall be returned, with offline_correction flags as applicable, within 30 seconds of connectivity restoration
- Given a sync attempt fails permanently (HTTP 4xx other than 409) for an item When the client retries the next backoff window Then the item shall not be retried more than 5 times and shall surface a user‑visible error with a recovery suggestion, while preserving the item for support export
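The retry policy above reduces to two small decisions: the delay schedule and whether a given response warrants another attempt. This is a sketch under assumed semantics (treating 409 and 5xx as retryable), not the shipped client.

```python
def backoff_delays(max_attempts: int = 5, base_s: float = 1.0, cap_s: float = 60.0):
    """Deterministic exponential backoff: 1, 2, 4, ... seconds, capped at cap_s.
    A real client would likely add jitter to spread reconnecting devices."""
    return [min(cap_s, base_s * (2 ** i)) for i in range(max_attempts)]

def should_retry(status: int, attempt: int, max_attempts: int = 5) -> bool:
    """Retry on network/5xx and 409 conflicts; give up on other 4xx or exhaustion."""
    if attempt >= max_attempts:
        return False
    if status in (200, 201):
        return False  # acknowledged: mark synced, never resend
    if 400 <= status < 500 and status != 409:
        return False  # permanent failure: surface error, keep item for support export
    return True
```

Pairing each attempt with the same UUIDv4 idempotency key is what makes the retries safe: a resubmission after a lost acknowledgment returns 200 with no duplicate row.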
Battery-Aware Sampling and Backoff
- Given the caregiver is more than 500 meters from any scheduled geofence and the device is stationary for 60 seconds When sampling policy is applied Then location sampling interval shall back off to ≥ 60 seconds and radio scans shall be limited to ≤ 1 per minute
- Given the caregiver is within 200 meters of an active visit geofence or motion is detected When sampling policy is applied Then sampling interval shall increase to ≤ 5 seconds and BLE/Wi‑Fi scans may occur up to 6 per minute
- Given device battery level falls below 15% When sampling policy is applied Then low‑power mode shall cap sampling at ≤ 15 seconds near geofences and ≤ 90 seconds otherwise, while still meeting snap decision thresholds
- Given a 60‑minute route with 2 visits under typical urban conditions When measured on a reference device Then additional battery drain attributable to Geofence Snap offline capture shall be ≤ 3% per hour
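As a sketch, the policy can be expressed as a single function mapping context to a (location interval, scans per minute) pair. The thresholds mirror the criteria; the mid-range default profile and the exact scan counts in low-power mode are invented for illustration.

```python
def sampling_policy(distance_m: float, moving: bool, battery_pct: float):
    """Return (location_interval_s, radio_scans_per_minute) for the current context."""
    near = distance_m <= 200
    if battery_pct < 15:
        # Low-power mode: cap sampling but keep snap decisions feasible.
        return (15, 2) if near else (90, 1)
    if near or moving:
        return (5, 6)      # high-rate mode near an active visit geofence
    if distance_m > 500:
        return (60, 1)     # far away and stationary: back off aggressively
    return (30, 2)         # assumed mid-range default, not specified above
```

Evaluating battery state before proximity ensures the low-power caps always win, which is what keeps the ≤3%/hour drain budget achievable.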
Timestamp Integrity and Compliance Alignment
- Given the device is offline When an event is created Then the event shall record device_time (UTC), device_timezone, and a monotonic clock reading
- Given the device reconnects When server time offset is obtained Then all offline event timestamps shall be corrected by the calculated offset; the corrected_time shall be within ±30 seconds of server time for compliance and the original device_time shall be preserved immutably
- Given daylight saving or timezone changes occur between event capture and sync When events are displayed or exported Then stored times shall be in UTC with local‑time rendering based on event‑location timezone; no duplicate or missing hour shall be introduced in reports
- Given any correction causes arrival to occur after leave for the same visit When validating sequence Then the system shall enforce arrival_time ≤ leave_time and emit a correction flag with reason=sequence_adjustment
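The offset correction and sequence validation above can be sketched together. Field names and the return shape are assumptions; the key properties are that device_time is never mutated and that any clamp emits a sequence_adjustment flag.

```python
def correct_events(events, server_offset_s: float):
    """events: dicts with kind ('arrival'|'leave') and device_time (epoch s).
    Returns (corrected events with corrected_time added, correction flags)."""
    corrected = [
        {**e, "corrected_time": e["device_time"] + server_offset_s}
        for e in events  # original device_time is preserved untouched
    ]
    flags = []
    arrivals = [e for e in corrected if e["kind"] == "arrival"]
    leaves = [e for e in corrected if e["kind"] == "leave"]
    if arrivals and leaves and leaves[0]["corrected_time"] < arrivals[0]["corrected_time"]:
        # Enforce arrival_time <= leave_time and flag the adjustment.
        leaves[0]["corrected_time"] = arrivals[0]["corrected_time"]
        flags.append({"reason": "sequence_adjustment"})
    return corrected, flags
```

Storing corrected_time alongside the immutable device_time is what lets later audits reconstruct exactly what the device recorded versus what compliance reports show.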
Admin Mapping & QA Console
"As an operations manager, I want a console to create, test, and audit geofences so that I can reduce corrections and lower audit risk."
Description

Provides a web-based console for authorized staff to visualize geofences on maps and building overlays, create and edit zones, simulate route playback, and run what-if tests against historical traces to validate snapping outcomes. Includes quality checks for geofence overlap, drift detection, and suggested calibrations. Offers role-based access, version history, and staged publishing to limit production risk. Integrates with CarePulse’s client/location directory and reporting so changes are auditable and reflected in EVV summaries and compliance exports.

Acceptance Criteria
Visualize and Draw Zones with Building Overlays
- Given a user with the Geofence Editor role is authenticated When they open the Admin Mapping & QA Console for a specific client location Then the map and building overlay for the selected location load within 2 seconds (95th percentile)
- Given the building overlay is visible When the user toggles floor/level controls Then the overlay switches floors and preserves geospatial alignment within ±3 meters of known anchor points
- Given the map is loaded When the user draws a circle or polygon zone and assigns an access point and floor/level Then the UI enforces a minimum radius of 5 m for circles and 3–50 vertices for polygons, and displays area in m²
- Given a valid zone is drawn When the user clicks Save Then a Draft zone is created with a unique ID, visible on the map with a “Draft” badge and default snap priority
Edit Zone Properties and Snap Priorities
- Given an existing Draft zone is selected When the user edits geometry, radius, floor, label, and snap priority weight Then the changes are validated client-side and server-side and saved persistently within 1 second
- Given two zones share the same snap priority weight within the same access point group When the user saves Then the system automatically resolves ordering deterministically (by weight, then createdAt) and displays the final order
- Given an edit would reduce a polygon below 3 vertices or create a self-intersection When the user attempts to save Then the save is blocked and the user is shown a specific validation message
Run What‑If Simulation on Historical Traces
- Given a user selects a caregiver, date range, and zone version (Draft/Staging/Production) When they run a What‑If simulation Then the system replays GPS traces and displays snapped arrival/leave events and a diff versus Production without modifying Production data
- Given the simulation has N ≤ 10,000 GPS points When executed Then the simulation completes within 8 seconds (95th percentile) and returns metrics: visits affected, snaps changed, mean snap error (m), and false positives/negatives
- Given the user selects “Apply suggested calibration in Draft” When the simulation indicates improved accuracy (mean snap error reduced by ≥10%) Then the system applies suggestions to the Draft and records the change set
Automated Quality Checks: Overlap and Drift Detection
- Given two zones overlap such that intersection area >10% of the smaller zone When running quality checks Then the system flags a Blocking Overlap with zone IDs and overlap area (m²), and prevents publishing until resolved or an override with justification is recorded by an Admin
- Given drift over the last 30 visits at a location has median offset >8 meters When quality checks run nightly Then the system creates a Suggested Calibration with recommended radius/center adjustment and confidence score
- Given a user accepts a Suggested Calibration When applied Then a new Draft version is created with a linked suggestion ID, and the suggestion is marked Applied with user, timestamp, and before/after values
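The nightly drift check can be sketched as a median haversine offset between recorded visit fixes and the zone center. The suggestion shape, function names, and the recenter-on-median heuristic are assumptions for illustration.

```python
from math import asin, cos, radians, sin, sqrt
from statistics import median

def haversine_m(a, b):
    """Great-circle distance in meters between (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))

def drift_suggestion(zone_center, visit_fixes, threshold_m=8.0, min_visits=30):
    """Flag a Suggested Calibration when the median offset exceeds the
    threshold over at least min_visits; otherwise return None."""
    if len(visit_fixes) < min_visits:
        return None
    offset = median(haversine_m(zone_center, f) for f in visit_fixes)
    if offset <= threshold_m:
        return None
    # Heuristic: suggest recentering on the component-wise median fix.
    return {
        "median_offset_m": offset,
        "suggested_center": (median(f[0] for f in visit_fixes),
                             median(f[1] for f in visit_fixes)),
    }
```

Using the median rather than the mean keeps a handful of wild GPS outliers from triggering or skewing a calibration suggestion.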
Role‑Based Access and Permissions
- Given a user with Read‑Only role accesses the console When viewing zones Then they can view maps, overlays, simulations, and reports but cannot create, edit, publish, or delete zones (controls disabled)
- Given a user without Publish permission attempts to publish When they invoke the Publish action via UI or API Then the action is denied with HTTP 403 and an inline message “Insufficient permissions: Publish”
- Given an Admin user performs any create/edit/delete/publish action When completed Then the action is logged with user ID, role, IP, timestamp, object ID, and before/after diff
Versioning, Draft Review, and Staged Publishing
- Given a Draft exists When a Reviewer opens Version History Then they can compare Draft vs. last Published with geometry, properties, and metrics diffs
- Given a user publishes to Staging When confirmed Then the Staging environment uses the new version for simulations within 1 minute, and Production remains unchanged
- Given a user schedules a Production publish for a future time When the scheduled time is reached Then the version is promoted to Production atomically, and rollback to the previous version is available within 1 click and completes within 1 minute
Audit Trail and EVV Recalculation in Reports
- Given a version is published to Production with effectiveFrom and optional effectiveTo When publishing completes Then the system recalculates EVV summaries for impacted visits within that window and marks updated visits with a recalculation tag
- Given compliance exports are generated after a publish When an export is requested Then the export includes the updated snapped events and version ID, and is available within 5 minutes for datasets ≤ 100k records
- Given an auditor requests change history for a location When the Audit API is called Then it returns a chronological log of zone changes with who/when/what and links to associated reports
Audit Trail with Confidence & Overrides
"As a compliance coordinator, I want transparent audit trails and safe overrides so that we can pass audits and resolve EVV disputes quickly."
Description

Captures a tamper-evident log of raw signals, snap decisions, confidence scores, and reason codes for each arrival/leave event. Presents a side-by-side timeline of raw vs. snapped events in CarePulse, with permissioned manual override that requires justification and preserves the original record for audit. Exposes exportable, audit-ready reports with redaction controls and retention policies aligned with EVV regulations. Integrates with dispute resolution workflows to quickly substantiate visit accuracy.

Acceptance Criteria
Tamper-Evident Capture of Raw and Snap Decisions
Given the geofence snap engine detects a caregiver arrival or leave event
When the event is persisted
Then the system stores raw signals within ±120 seconds of the event including signal type (GPS/BLE/Wi‑Fi/IoT), timestamp (UTC), coordinates, and accuracy metadata
And the system stores the snap decision including event type (arrival/leave), geofence ID/name, algorithm version, confidence score (0–100), reason code from the configured list, and processing latency in milliseconds
And entries are written to an immutable audit store where any change creates a new version and links to the prior via hash while preserving the original
And attempts to delete or alter audit entries by non-system actors are blocked and logged with actor, action, timestamp, and outcome
And each entry includes actor (system/user ID), source (device/app version or service), and createdAt (UTC)
And retrieving the audit trail by visit ID returns results in ≤ 500 ms at p95 for up to 200 events
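The hash-linked, tamper-evident property can be sketched as a simple chain in which each entry embeds the hash of its predecessor; altering any historical record then breaks verification. Entry fields here are placeholders, not the CarePulse audit schema.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Hash of a canonical (sorted-key) JSON serialization of the entry."""
    canonical = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def append_entry(chain: list, payload: dict) -> None:
    """Append a new entry linked to the hash of the previous one."""
    prev = entry_hash(chain[-1]) if chain else "0" * 64
    chain.append({"payload": payload, "prev_hash": prev})

def verify_chain(chain: list) -> bool:
    """Detect any in-place edit of a historical entry."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != entry_hash(chain[i - 1]):
            return False
    return True
```

In a real store the chain head would also be anchored externally (e.g., in a write-once log), so truncating the tail is detectable too.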
Side-by-Side Timeline View in CarePulse
Given a user with Compliance or Ops Manager role opens a Visit and navigates to the Audit tab
When the audit timeline loads
Then two synchronized tracks labeled Raw and Snapped display all arrival/leave events aligned by time
And each snapped event shows confidence score, reason code, geofence ID/name, and event source (auto/manual)
And discrepancies greater than 2 minutes or 30 meters between raw and snapped are visually highlighted
And selecting an event reveals a details panel with the raw sample list, decision inputs, algorithm version, and hashes
And users can filter by signal type (GPS/BLE/Wi‑Fi/IoT) and by confidence range
And timestamps render in agency local time with UTC on hover, and ordering is stable across refreshes
Permissioned Manual Override with Justification
Given a user with Compliance role views a snapped arrival/leave event
When the user initiates an Override
Then the system requires selection of a reason code (Correction, Client Confirmed, Device Issue, Other) and a free-text justification of at least 20 characters
And the original event remains immutable while a new override record is appended linking to the original with fields: new value, actor, timestamp, reason code, justification, and prior-hash
And operational views use the overridden value while the Audit tab shows both values clearly labeled
And overrides on visits marked Locked/Exported are blocked with an error explaining the lock condition
And the user can Revert Override, which appends a reversal record and restores the snapped value in operational views
And an alert is sent to the Ops Manager channel and the dispute (if present) is updated with the override entry
Exportable Audit-Ready Reports with Redaction Controls
Given a Compliance user selects visits and chooses Export Audit
When the user selects format (PDF/CSV/JSON) and redaction level (Full, Limited, None)
Then the exported file includes visit IDs, caregiver and client identifiers, raw signal summaries, snapped events, confidence scores, reason codes, override history, and retention status
And redaction levels behave as follows: Full removes exact coordinates and device IDs; Limited rounds coordinates to 3 decimals and masks device IDs except the last 4; None includes all fields
And the export footer includes generation timestamp (UTC), requesting user, and a checksum/hash of contents
And exports for up to 100 visits complete within 30 seconds or transition to background processing with user notification upon completion
And the export action is written to the audit trail with parameters, status, and download timestamp
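The three redaction levels map directly onto a per-record transform. The record shape below is an assumption; the level semantics follow the criterion as written.

```python
def redact(record: dict, level: str) -> dict:
    """Apply an export redaction level ('Full', 'Limited', or 'None')."""
    out = dict(record)  # never mutate the stored record
    if level == "None":
        return out
    if level == "Limited":
        # Round coordinates to 3 decimals (~110 m of latitude precision)
        # and mask device IDs except the last 4 characters.
        out["lat"] = round(out["lat"], 3)
        out["lon"] = round(out["lon"], 3)
        out["device_id"] = "*" * (len(out["device_id"]) - 4) + out["device_id"][-4:]
        return out
    if level == "Full":
        out.pop("lat", None)
        out.pop("lon", None)
        out.pop("device_id", None)
        return out
    raise ValueError(f"unknown redaction level: {level}")
```

Applying redaction before the export checksum is computed means the footer hash attests to exactly the bytes the recipient can see.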
Retention Policy Enforcement and Legal Hold
Given an Agency Admin configures an EVV data retention policy in Settings
When the policy is saved with a duration and effective date
Then each visit displays its calculated purge eligibility date based on the policy and agency timezone
And a daily purge job runs at 02:00 local time to delete or archive audit data older than the policy, creating a tombstone record with visit ID, purge time, and policy version
And users with Compliance role can place a Legal Hold on specific visits/cases with reason and optional expiry to suspend purge
And data under Legal Hold is excluded from purge and labeled in UI and exports
And retention policy changes are versioned, auditable, and applied only to new purge cycles (no retroactive deletion within a hold window)
Dispute Resolution Workflow Integration
Given a visit is marked Disputed in CarePulse or via API
When a Compliance user attaches the audit packet
Then the system generates a secure, time-limited link and/or posts the packet via webhook to the configured dispute system
And the packet includes the side-by-side timeline, confidence scores, reason codes, overrides, redaction per the selected level, and an export checksum
And dispute status is tracked as Received, Under Review, or Resolved with timestamps and actor
And evidence additions and status changes write entries to the visit audit trail
And packet delivery succeeds within ≤ 5 seconds at p95, with retries on failure using exponential backoff and a user-visible error on exhaustion

Last-Mile Guide

Injects door codes, parking notes, floor/room details, and prior visit photo pins into turn‑by‑turn directions. Provides hands‑free audio cues for the final 300 yards to reduce wandering. Saves minutes per visit and reduces off-route drift at unfamiliar or complex locations.

Requirements

Secure Access Detail Injection
"As a caregiver, I want door codes and entry notes to auto-appear as I arrive so that I can access the client’s home quickly and safely without searching through notes."
Description

Securely stores and injects location-specific access details such as door codes, gate/callbox instructions, parking restrictions, building/floor/room numbers, and special entry notes directly into the final segment of turn-by-turn navigation. Details are tied to the visit location and time-bound, encrypted at rest and in transit, with role-based access and full audit trail. On approach within a configurable radius (default 300 yards), the app surfaces relevant snippets via overlay and text-to-speech, with redaction on screenshots and notification previews. Supports per-client field templates, expiration and rotation rules, and prompts to capture missing details post-visit. Integrates with client profiles, scheduling, and route optimization; ingests from visit notes and syncs usage telemetry.

Acceptance Criteria
On-Approach Overlay and TTS at Final Segment
Given an assigned caregiver is actively navigating to the scheduled visit location during the visit window And the visit location has stored access details (e.g., door code, parking notes, floor/room) When the device enters the final-approach geofence for that stop Then the app surfaces an on-screen overlay with the relevant access details for that location And the app plays a text-to-speech summary of the relevant access details And injection only occurs when the navigation destination matches the scheduled visit’s geocoded location within the accepted threshold And a usage telemetry event is recorded for overlay display and TTS playback
Configurable Approach Radius with Default
Given the system default approach radius is 300 yards When an admin leaves the setting unchanged Then the overlay and TTS trigger at 300 yards on approach Given an admin configures a client- or agency-level approach radius override (e.g., 150 yards) When the caregiver approaches that client’s location while navigating to the scheduled visit Then the overlay and TTS trigger at the configured override distance And other clients without an override continue to use the 300-yard default
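The radius resolution and approach trigger described above can be sketched in Python. The precedence of a client override above an agency override is an assumption; distance uses the standard haversine formula.

```python
import math

YARDS_TO_METERS = 0.9144
DEFAULT_RADIUS_M = 300 * YARDS_TO_METERS  # 300-yard default ≈ 274.3 m

# Sketch of the radius-override lookup and approach trigger.
# Client-over-agency override precedence is an assumption.

def effective_radius_m(client_override_yd=None, agency_override_yd=None):
    yd = client_override_yd or agency_override_yd
    return yd * YARDS_TO_METERS if yd else DEFAULT_RADIUS_M

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_trigger(device, destination, radius_m):
    """device and destination are (lat, lon) tuples."""
    return haversine_m(*device, *destination) <= radius_m
```

With a 150-yard client override, the trigger fires at about 137 m instead of the 274 m default.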
Role-Based Access and Time-Bound Visibility with Audit Trail
Given role-based permissions are configured for caregivers and operations staff And access details are tied to a specific visit and client location When the assigned caregiver attempts to view access details within the scheduled visit window Then access is granted And the access event (user, role, visit, timestamp, action=read) is written to the audit trail Given a user without the required role attempts to view access details at any time When they request access in the app or via API Then access is denied And the denial (user, role, visit, timestamp, action=denied) is written to the audit trail Given the scheduled visit window has not started or has ended for the caregiver When the caregiver attempts to view access details Then access is denied with a time-bound message And the denial is recorded in the audit trail
Encryption and Secure Transport for Access Details
Given access details are stored in the platform When data is persisted at rest Then the records are encrypted using industry-standard strong encryption (e.g., AES-256) with managed keys Given access details are transmitted between client apps and backend services When API calls occur Then transport uses TLS 1.2+ with certificate pinning or equivalent verification And requests containing access details succeed only over secure channels Given backups and logs that may contain metadata about access details When those artifacts are generated Then no sensitive field values are stored in plaintext And keys and secrets are not logged
Screenshot and Notification Redaction of Sensitive Fields
Given the overlay with access details is visible When the user triggers an OS screenshot, app switcher preview, or screen recording Then sensitive fields (e.g., door codes, gate codes) are masked or omitted in captured images/previews Given the system sends a notification about access details being available When the notification appears on the lock screen or banner Then the notification contains no sensitive field values and uses a generic message And tapping the notification requires authentication before revealing details
Expiration and Rotation Rule Enforcement
Given an access detail field has an expiration date/time configured When the current time is past the expiration Then the expired field is not surfaced in overlays or TTS And the UI indicates the field is expired And the caregiver is prompted post-visit to request or capture an updated value Given a rotation rule is configured for a field (e.g., rotate after N days or uses) When the rotation condition is met Then the field is flagged for rotation and not surfaced until updated And a task or prompt is created for authorized staff to rotate the value And the events are captured in the audit trail
Per-Client Templates, Ingestion, and Post-Visit Capture
Given a client-specific access detail template defines required and optional fields When a visit is scheduled for that client Then the capture and display forms reflect the template’s fields and required/optional statuses Given prior visit notes contain access details that match template fields When the system ingests visit notes Then matching fields are parsed and populated into the client’s access details for review by authorized staff before activation Given required template fields are missing at the time of visit completion When the caregiver ends the visit Then the app prompts the caregiver to capture missing details (with role checks) and routes submissions for approval as configured And a telemetry event is recorded for missing-detail prompt and submission
Context-Aware Micro-Directions
"As a caregiver, I want precise cues for the final stretch so that I stop wandering and arrive at the correct entrance faster."
Description

Generates a geofenced last-300-yard guidance layer that provides adaptive stepwise audio cues, haptic prompts, and on-screen arrows based on approach direction, street side, parking lot aisles, common entrances, and floor transitions. When GPS accuracy degrades, switches to fused location (GNSS, Wi‑Fi, cellular) and optionally leverages indoor wayfinding beacons or QR markers where available. Provides offline fallback using pre-fetched map tiles and cached points of interest. Delivers precise prompts such as “Use garage entrance on the second driveway; keypad on left.” Cue cadence and verbosity are configurable, and the overlay works alongside third-party navigation via deep link or in-app map.

Acceptance Criteria
Auto-Activate Last-300-Yard Guidance Geofence
Given a caregiver is navigating to an active visit destination, When the device’s computed distance to the destination first becomes ≤ 275 meters, Then the last‑mile guidance layer activates within 2 seconds and starts providing cues. Given last‑mile guidance is active, When the caregiver completes the visit check-in or the device’s distance becomes > 350 meters for ≥ 10 seconds, Then the last‑mile guidance layer deactivates. Given multiple visits are on today’s route, When entering the 275 m radius of one destination, Then only the guidance for the nearest active destination activates (no duplicate sessions).
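The activate/deactivate thresholds above (activate at ≤275 m, deactivate after >350 m sustained for ≥10 s) form a hysteresis band that prevents flapping near the boundary. A Python sketch of that state machine:

```python
# Sketch of the last-mile geofence hysteresis: activate at <= 275 m,
# deactivate after > 350 m sustained for >= 10 s (or on visit check-in).

class LastMileGeofence:
    ACTIVATE_M, DEACTIVATE_M, SUSTAIN_S = 275.0, 350.0, 10.0

    def __init__(self):
        self.active = False
        self._outside_since = None  # time the device left the 350 m band

    def update(self, distance_m: float, now_s: float) -> bool:
        if not self.active:
            if distance_m <= self.ACTIVATE_M:
                self.active = True
        else:
            if distance_m > self.DEACTIVATE_M:
                if self._outside_since is None:
                    self._outside_since = now_s
                elif now_s - self._outside_since >= self.SUSTAIN_S:
                    self.active = False
                    self._outside_since = None
            else:
                self._outside_since = None  # back inside: reset the timer
        return self.active

    def check_in(self):
        """Visit check-in deactivates guidance immediately."""
        self.active = False
        self._outside_since = None
```

The 275–350 m gap means brief GPS jumps past 300 m never toggle the session off.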
Adaptive Cues by Approach Direction and Entrance Selection
Given the destination has mapped entrances, parking aisles, and street-side metadata (including prior-visit photo pins marked as “common entrance”), When the caregiver approaches from different bearings or lot aisles, Then the first two micro-prompts reference the correct entrance and aisle relative to the approach vector (e.g., “use north entrance from aisle C”). Given multiple entrances exist and one is flagged as common/preferred for this location, When within 275 m, Then cues route to the preferred entrance; if no preferred entrance is present, cues route to the primary/main entrance. Given the selected entrance is off the caregiver’s current side of the street, When safe cross-over points exist within 150 m, Then prompts include a cross-over instruction before entrance guidance.
Multi-Modal Prompts and Configurable Cadence/Verbosity
Given user settings for cue cadence (Dense, Normal, Sparse) and verbosity (Brief, Standard, Detailed), When last‑mile guidance activates, Then audio TTS, haptic pulses, and on‑screen arrows are delivered according to the selected cadence and verbosity. Given cadence is set to Sparse and verbosity to Brief, When within the last 275 m, Then cues occur no more than once every 30 meters or 10 seconds, whichever is greater. Given cadence is set to Dense and verbosity to Detailed, When within the last 275 m, Then cues occur at least once every 15 meters or 5 seconds, whichever is sooner, and include extended details. Given the device screen is locked or app is backgrounded, When last‑mile guidance is active, Then audio and haptic cues continue hands‑free, and on‑screen arrows resume when the app returns to foreground.
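The Sparse-cadence limit can be sketched as a throttle that permits a cue only after both the distance and time gaps have elapsed (one reading of "no more than once every 30 meters or 10 seconds, whichever is greater"):

```python
# Sketch of the Sparse-cadence cue throttle: a new cue is allowed only
# after BOTH 30 m traveled and 10 s elapsed since the last cue.

class CueThrottle:
    def __init__(self, min_gap_m=30.0, min_gap_s=10.0):
        self.min_gap_m, self.min_gap_s = min_gap_m, min_gap_s
        self._last_pos_m = None   # odometer reading at the last cue
        self._last_time_s = None

    def allow(self, odometer_m: float, now_s: float) -> bool:
        if self._last_pos_m is None:  # first cue is always allowed
            self._last_pos_m, self._last_time_s = odometer_m, now_s
            return True
        if (odometer_m - self._last_pos_m >= self.min_gap_m
                and now_s - self._last_time_s >= self.min_gap_s):
            self._last_pos_m, self._last_time_s = odometer_m, now_s
            return True
        return False
```

The Dense setting would instead use a minimum-frequency scheduler (emit when either 15 m or 5 s has passed), which is the inverse gate.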
Location Accuracy Fallback and Indoor Aids
Given GNSS horizontal accuracy reported by the OS is > 30 meters for ≥ 5 seconds while within the last 275 m, When this condition is met, Then the system switches to fused location (GNSS + Wi‑Fi + cellular) and indicates the mode change in the UI within 1 second. Given BLE indoor beacons or a QR wayfinding marker is detected within 20 meters while accuracy > 20 meters, When detected, Then the system enters indoor mode and updates cues using indoor anchors (e.g., lobby, elevator bank), with visual confirmation. Given fused/indoor mode is active, When GNSS accuracy improves to ≤ 15 meters for 10 seconds, Then the system reverts to GNSS‑first mode and updates the indicator accordingly.
Offline Guidance with Prefetched Tiles and Cached POIs
Given a visit is added to Today’s route or the user taps Start for that visit, When the device is online, Then the app prefetches map tiles covering a 1 km x 1 km area centered on the destination and caches POIs (entrances, aisles, floor transitions) within 500 meters. Given connectivity is lost (no internet) within the last 275 m, When last‑mile guidance is active, Then on‑screen arrows, audio/haptic prompts, and POI references continue using cached data, and an Offline badge appears. Given offline mode is active, When guidance requires a POI that was not cached, Then the system omits that reference without crashing and continues with available cues.
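The 1 km x 1 km prefetch area can be expressed as a latitude/longitude bounding box for tile requests; a small Python sketch using the standard meters-per-degree approximation (longitude degrees shrink with latitude):

```python
import math

# Sketch of the prefetch bounding box: a 1 km x 1 km area centered on
# the destination, as (min_lat, min_lon, max_lat, max_lon).

def prefetch_bbox(lat: float, lon: float, side_m: float = 1000.0):
    half = side_m / 2
    dlat = half / 111_320.0  # meters per degree of latitude
    dlon = half / (111_320.0 * math.cos(math.radians(lat)))  # widens with latitude
    return (lat - dlat, lon - dlon, lat + dlat, lon + dlon)
```

The tile layer would then fetch every tile intersecting this box at the zoom levels needed for final-approach rendering; the 500 m POI cache radius fits inside the same box.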
Third‑Party Navigation Deep Link with In‑App Overlay
Given a caregiver selects Open in Maps from the route screen, When the app deep‑links to the chosen provider (e.g., Google Maps or Apple Maps), Then macro navigation starts in the third‑party app and CarePulse continues last‑mile cues in the final 275 m via audio and haptic notifications. Given the caregiver remains in the CarePulse in‑app map instead of deep‑linking, When within the last 275 m, Then the micro‑direction overlay (arrows and prompts) renders atop the in‑app map without obstructing the next turn indicator. Given third‑party navigation is active and CarePulse plays a cue, When both apps produce audio, Then CarePulse ducks its TTS to avoid masking third‑party instructions.
Precise Micro‑Prompt Content and Contextual Details
Given entrance metadata includes entrance type, driveway ordinal, and keypad side, When generating the initial decisive micro‑prompt within 275 m, Then the prompt string includes at least entrance type and keypad side, and includes driveway ordinal if present (e.g., “Use garage entrance on the second driveway; keypad on left.”). Given TTS is enabled, When a decisive prompt is generated, Then the TTS is spoken within 1.5 seconds of trigger and matches the on‑screen text. Given the user is within 10 meters of the decision point, When a decisive action is required (e.g., select garage entrance), Then a distinct double‑pulse haptic pattern is emitted once, not more than once every 8 seconds.
Prior Visit Photo Pins
"As a caregiver, I want to see annotated photos from prior visits so that I can recognize landmarks and find the correct entrance quickly."
Description

Enables creation and reuse of geo-anchored, annotated photo pins that highlight entrances, parking spots, callboxes, elevators, and other landmarks. Photos are automatically de-identified to avoid PHI, with optional face and license plate blurring, and stored with accuracy metadata. Pins include captions and markup, support versioning, and can be flagged as outdated. Ops managers can review and approve pins before publishing and set retention policies per client. On approach, relevant pins surface contextually in the last-mile overlay and are available offline via cached assets.

Acceptance Criteria
Create Geo‑Anchored Photo Pin with Auto De‑Identification
Given the caregiver is within 15 meters horizontal GPS accuracy and online When they capture a new photo pin via the app camera Then the pin record stores latitude, longitude, timestamp, and accuracy_in_meters And the persisted image asset has all EXIF metadata removed (including GPS, device ID, and timestamps) And the image is stored server-side encrypted at rest and linked to the pin via an ID (no coordinates embedded in the image file) And the pin is saved within 2 seconds on a 4G connection
Optional Face and License Plate Blurring Toggle
Given the Face/Plate Blur setting is enabled for the pin When the image is captured or uploaded Then all detected faces and license plates are blurred in the preview And the user can adjust blur areas before saving And the saved image contains the applied blurs Given the Face/Plate Blur setting is disabled When the image is captured or uploaded Then no automatic blurring is applied
Pin Metadata: Captions, Markup, and Accuracy Recorded
Given a new photo pin draft When the user enters a caption up to 200 characters and draws markup (arrows, boxes, text) Then the caption and vector markup are saved with the pin and render identically in the last‑mile overlay Given device GPS accuracy is worse than 15 meters When attempting to save the pin Then the user is warned "Low GPS accuracy" and can Retry or Save Anyway And if saved anyway, the pin is flagged accuracy_low=true
Versioning and Outdated Pin Management
Given a published pin exists When a caregiver submits an update with a new photo or markup Then a new version is created with version incremented by 1 and the prior version retained read‑only And the latest approved version is marked current=true Given a pin version is marked outdated When caregivers view pins in the last‑mile overlay Then outdated versions are hidden from default view but remain in history for admins
Ops Manager Review and Approval Workflow
Given a pin is pending review When an ops manager opens the review screen Then they can Approve or Reject with an optional comment And the action is recorded with reviewer ID, timestamp, and version Given a pin is not approved When a caregiver views the last‑mile overlay Then unapproved pins are not visible Given a pin is approved When the route is synced Then the pin becomes visible to assigned caregivers within 1 minute
Retention Policies per Client
Given a client retention policy of N days is configured When a pin reaches N days since approval Then the pin and its media are deleted from active storage within 24 hours And the deletion is logged with pin ID, client, timestamp, and actor=system Given a mobile device has a cached copy of a deleted pin When the device next syncs Then the cached asset is purged and removed from offline access
Contextual Surfacing and Offline Availability in Last‑Mile Overlay
Given a caregiver is within 300 yards of a scheduled visit location When the last‑mile overlay activates Then the top 3 relevant pins (by proximity and tag relevance) are shown with captions and distance badges And hands‑free audio cues reference the caption of the nearest entrance pin, if present Given the device is offline and pins are cached When the overlay activates Then cached pin images and markup render within 1 second Given the device is offline and required pins are not cached When the overlay activates Then placeholder cards are shown and assets are fetched automatically upon reconnection
Hands-Free Voice UX
"As a caregiver, I want to control last-mile guidance by voice so that I can keep my hands on the wheel and eyes on the road."
Description

Provides a dedicated hands-free mode that reads last-mile cues aloud, supports voice commands such as “repeat code,” “next hint,” “call client,” and “arrived,” and locks the UI into a glance-safe state to meet hands-free regulations. Uses on-device TTS and ASR where available for low latency and offline operation, with configurable voice and speed. Coordinates with OS audio focus to duck media playback and supports wearables for haptic taps. Limits microphone access to active hands-free sessions and logs command intents for audit without capturing sensitive content.

Acceptance Criteria
Activate Hands‑Free and Play Last‑Mile Cues
Given an active visit route and the caregiver is within 300 yards of the destination When Hands‑Free Mode is activated by voice command "hands-free on" or via the on‑screen toggle Then the system announces "Hands‑Free started" and begins reading the current last‑mile cue within 1 second Given last‑mile cues exist (door codes, parking notes, floor/room details, prior photo pins) When cues are read Then they are delivered in the configured order and each cue is preceded by a short chime and followed by a 500 ms pause Given the caregiver reaches the final cue When the last cue finishes or the "arrived" command is received Then the session state advances to Arrived and cue playback stops
Execute Core Voice Commands
Given Hands‑Free Mode is active When the caregiver says "repeat code" Then the last door code cue is re‑read verbatim within 1 second Given Hands‑Free Mode is active When the caregiver says "next hint" Then the next pending cue becomes active and is read aloud within 1 second and the previous cue is marked completed Given a client phone number is present for the visit When the caregiver says "call client" Then the OS dialer opens and initiates a call to the correct number within 2 seconds Given Hands‑Free Mode is active When the caregiver says "arrived" Then the visit is marked Arrived with a timestamp and the session ends, releasing audio and microphone resources Given typical street noise at ~70 dBA When any core command is spoken at ~30 cm from the microphone Then command recognition accuracy is ≥85% and median end‑of‑speech‑to‑action latency is <800 ms Given a quiet environment (<40 dBA) When any core command is spoken at ~30 cm Then command recognition accuracy is ≥95% and median end‑of‑speech‑to‑action latency is <500 ms
Glance‑Safe UI Lock in Hands‑Free Mode
Given Hands‑Free Mode is active When the UI is presented Then only the following controls are visible: Repeat, Next, Call, End; each tap target is ≥48x48 dp with ≥8 dp spacing; no scrolling is required to access any control Given Hands‑Free Mode is active When the user attempts any text input Then the on‑screen keyboard does not appear and text fields are disabled Given any in‑app notification or modal would normally appear When Hands‑Free Mode is active Then notifications are suppressed or shown as brief banners that do not block controls, and no modal dialogs interrupt cue playback Given cue playback is ongoing When the device would normally sleep Then the screen remains awake until playback ends, after which normal sleep behavior resumes
Offline TTS/ASR with Configurable Voice and Speed
Given on‑device TTS and ASR engines are available When the device is offline Then core commands (repeat code, next hint, call client, arrived) and cue playback operate without network connectivity Given a cue is triggered for playback When using on‑device TTS Then time from trigger to audible speech start is ≤500 ms Given a core command is spoken When using on‑device ASR Then median end‑of‑speech to action latency is ≤800 ms Given the caregiver changes voice or speech rate (0.75x–1.25x) in settings When the next cue plays Then it uses the selected voice and rate, and the preference persists across app restarts for the same user on the same device Given on‑device engines are not available When Hands‑Free Mode is used online Then the system falls back to network TTS/ASR and surfaces a one‑time notice of reduced offline capability; core commands remain functional while online
Audio Focus and Media Ducking Coordination
Given other media is playing on the device When a cue begins Then the app requests transient audio focus with may‑duck and other media volume is reduced by ≥50% during the cue and restored within 500 ms after the cue ends Given a higher‑priority audio focus event occurs during a cue (e.g., another app requests transient focus) When focus is lost Then cue playback pauses within 200 ms and automatically resumes from the start of the interrupted cue once focus is regained Given a Hands‑Free session ends When all cue audio is complete Then the app abandons audio focus immediately and no residual ducking persists
Wearable Haptic Taps for Key Cues
Given a compatible wearable (Apple Watch or Wear OS) is connected When the caregiver enters the last 300 yards of the route Then a double‑tap haptic is sent to the wearable within 1 second Given a door code cue is about to be read When Hands‑Free Mode is active Then a single‑tap haptic is sent to the wearable 300–500 ms before the spoken code Given the caregiver issues the "arrived" command or the system marks arrival When Hands‑Free Mode ends Then a triple‑tap haptic is sent to the wearable within 1 second Given the wearable is disconnected or DND is enabled on the wearable When a haptic would be sent Then the phone vibrates instead (if not in DND) or no haptic is sent (if DND)
Microphone Access Scoping and Audit‑Safe Command Logging
Given Hands‑Free Mode is not active When the app is in foreground or background Then the microphone is not accessed and no audio is captured Given Hands‑Free Mode starts When the session indicator is shown Then microphone capture begins and the OS mic indicator appears; when the session ends (user exits or says "arrived") Then capture stops and the mic is released within 300 ms Given a voice command is processed When an audit log entry is created Then it includes timestamp, user ID, visit ID, session ID, command name, and result (success/failure) and excludes raw audio, transcripts, door codes, client phone numbers, and addresses Given an audit reviewer opens the audit report When within 60 seconds of a command execution Then the corresponding command intent entry is visible with the correct metadata and no sensitive content
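The "excludes raw audio, transcripts, door codes..." requirement is easiest to enforce with an allowlist rather than a blocklist, so unexpected fields can never leak into the log. A Python sketch with illustrative field names:

```python
import time

# Sketch of an audit-safe command-intent entry: an allowlist of metadata
# fields guarantees raw audio, transcripts, door codes, phone numbers,
# and addresses can never reach the log. Field names are illustrative.

ALLOWED_FIELDS = {"user_id", "visit_id", "session_id", "command", "result"}

def audit_entry(**fields) -> dict:
    entry = {k: v for k, v in fields.items() if k in ALLOWED_FIELDS}
    entry["timestamp"] = time.time()
    return entry

e = audit_entry(user_id="u1", visit_id="v9", session_id="s3",
                command="repeat_code", result="success",
                transcript="the door code is 4321")  # silently dropped
```

Because only allowlisted keys survive, a future caller accidentally passing a transcript or code still produces a compliant entry.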
Data Prefetch and Offline Cache
"As a caregiver, I want last-mile details to work without signal so that I can complete visits in dead zones or garages."
Description

Prefetches last-mile assets for the next scheduled visits, including access notes, photo pins, micro-route segments, and map tiles, storing them in an encrypted cache with LRU eviction. Performs background sync on Wi‑Fi by default (cellular per policy), uses delta updates, and provides conflict detection for edited access notes. Verifies cache integrity via checksums, exposes an offline-readiness indicator, and prompts users when sync stalls. Admins can configure retention windows, cache size caps, and prefetch horizons.

Acceptance Criteria
Overnight Wi‑Fi Background Prefetch for Next Scheduled Visits
Given the device has one or more scheduled visits within the configured prefetch horizon and is connected to Wi‑Fi When the background sync job runs Then the app prefetches for each visit: access notes (door codes, parking notes, floor/room), prior‑visit photo pins, micro‑route segments for the final 300 yards, and map tiles covering at least a 500 m radius around the destination And only deltas since the last successful sync are downloaded (unchanged assets are not re‑downloaded) And if cellular sync is disabled by policy, no cellular data is used; if enabled, cellular is used only when Wi‑Fi is unavailable And the prefetch completion status and bytes downloaded are logged per visit
Offline Readiness Indicator and Cache Usage at Visit Start
Given a visit’s last‑mile assets were successfully prefetched and the device is offline at arrival time When the user opens Last‑Mile Guide for that visit Then the offline‑readiness indicator shows Ready prior to navigation start And all prefetched assets load from cache within 2 seconds And turn‑by‑turn for the final 300 yards renders using cached micro‑route segments and map tiles without network calls
Encrypted Cache and Integrity Verification via Checksums
Given last‑mile assets are stored in the offline cache When inspecting the device storage outside the app context Then cached content is unreadable (encrypted at rest using platform‑provided secure storage) And when the app reads a cached asset, its checksum is verified And on checksum mismatch, the asset is marked invalid, evicted, and queued for re‑fetch; if offline, the UI surfaces a degraded state and does not use the invalid asset
LRU Eviction Enforced with Configurable Cache Cap and Retention Window
Given the cache size exceeds the configured cap When new assets are added to the cache Then the least‑recently‑used assets are evicted first until total cache size is at or below the cap And assets belonging to visits within the configured retention window are not evicted And eviction and retention actions are logged with asset identifiers and timestamps
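The LRU-with-retention-protection rule above can be sketched with an ordered map: eviction walks from least- to most-recently-used, skipping assets whose visits are inside the retention window. A Python sketch (new asset IDs assumed; sizes in bytes):

```python
from collections import OrderedDict

# Sketch of LRU eviction under a cache cap, where assets for visits
# inside the retention window are protected from eviction.

class LastMileCache:
    def __init__(self, cap_bytes: int):
        self.cap = cap_bytes
        self._items = OrderedDict()  # asset_id -> (size, protected)
        self.size = 0

    def get(self, asset_id):
        if asset_id in self._items:
            self._items.move_to_end(asset_id)  # mark recently used
        return self._items.get(asset_id)

    def put(self, asset_id, size, protected=False):
        # Assumes asset_id is new; a real cache would handle updates.
        self._items[asset_id] = (size, protected)
        self._items.move_to_end(asset_id)
        self.size += size
        # Evict least-recently-used UNPROTECTED assets until under the cap.
        for aid in list(self._items):
            if self.size <= self.cap:
                break
            sz, prot = self._items[aid]
            if not prot:
                del self._items[aid]
                self.size -= sz
```

If everything over the cap is protected, the cache can transiently exceed the cap, which matches the rule that retention-window assets are never evicted.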
Conflict Detection on Edited Access Notes During Sync
Given access notes were edited locally and the server has a different version updated after the last local sync When a sync occurs Then a conflict is detected without silently overwriting either version And the user is notified of the conflict with timestamps and authors for each version And the visit is marked with a conflict state to allow resolution per policy
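Conflict detection of this kind is usually a three-way comparison against the version last synced to the device; only when both sides diverged from that base is the user prompted. A Python sketch using simple version counters (an assumption; timestamps or etags work the same way):

```python
# Sketch of three-way conflict detection for access notes: a conflict
# exists only when both the local copy and the server copy diverged
# from the version last synced to the device.

def detect_conflict(base_version: int, local_version: int, server_version: int) -> str:
    if local_version == base_version and server_version == base_version:
        return "in_sync"
    if local_version > base_version and server_version == base_version:
        return "push_local"    # only we edited: safe to upload
    if server_version > base_version and local_version == base_version:
        return "pull_server"   # only the server changed: safe to download
    return "conflict"          # both diverged: surface to the user
```

The "conflict" outcome is what drives the notification with timestamps and authors for each version.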
Sync Stall Prompting and Recovery Options
Given a background sync has made no progress for more than the configured stall threshold When the app detects the stall Then the user is prompted with options to Retry now, Continue offline, or Enable cellular (if allowed by policy) And if the user enables cellular and policy allows, the sync resumes on cellular; otherwise it waits for Wi‑Fi And stall detection and recovery outcomes are logged
Admin Configuration of Prefetch Horizon, Cache Cap, and Retention Window
Given an admin updates the prefetch horizon, cache size cap, or retention window When the device receives the updated policy Then the app applies the new settings within 15 minutes or upon next app foreground, whichever comes first And subsequent prefetches, eviction decisions, and offline readiness reflect the new settings And the applied policy version and timestamp are visible in in‑app diagnostics
Compliance and Audit Logging
"As an operations manager, I want verifiable logs of who viewed access details and when so that I can satisfy audits and detect misuse."
Description

Captures immutable logs for all access to sensitive last-mile data, recording user ID, timestamp, location, and action type, and produces exportable, audit-ready reports linked to specific visits. Sensitive values such as door codes are masked in logs, with secure reveal events recorded. Enforces least-privilege access, supports remote wipe of cached assets, and applies configurable retention policies aligned with HIPAA and state home-health regulations. Alerts notify admins of anomalies such as repeated code reveals or failed access attempts.

Acceptance Criteria
Append-Only Audit Log for Sensitive Last-Mile Access
Given an authenticated user performs any access to sensitive Last‑Mile Guide data via mobile, web, or API When the user views masked values, reveals a value, exports a report, or is denied access Then the system writes a single append‑only log entry capturing: eventId, userId, orgId, role, visitId, actionType, timestamp (UTC ISO‑8601), deviceId, appVersion, IP, geo (lat, long, accuracy), channel, result (success/denied) And the log entry is immutable (any update attempt returns 403 and no fields change) And the entry is queryable within 5 seconds of the action And clock skew is normalized to UTC with server time authoritative
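One common way to make an append-only log tamper-evident (the spec requires immutability but does not prescribe a mechanism, so this is a sketch of one option) is hash-chaining: each entry stores the hash of its predecessor, so any in-place edit breaks verification of every later entry.

```python
import hashlib
import json

# Sketch of tamper evidence via hash-chaining: each entry carries the
# hash of the previous entry, so an in-place edit breaks the chain.

def append_entry(chain: list, entry: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(entry, sort_keys=True)  # canonical serialization
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"entry": entry, "prev": prev_hash, "hash": digest})

def verify_chain(chain: list) -> bool:
    prev = "0" * 64
    for link in chain:
        payload = json.dumps(link["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if link["prev"] != prev or link["hash"] != expected:
            return False
        prev = link["hash"]
    return True
```

In production this would sit on top of write-once storage; the chain lets auditors independently verify that no entry was altered or removed.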
Masked Values and Secure Reveal Events
Given a user accesses a door code or other sensitive value in the Last‑Mile Guide When the value is displayed without explicit reveal Then the log stores only a masked placeholder and never the plaintext or any partial value And when the user performs an explicit reveal with a justification reason Then the system verifies permission, records a "reveal" event with userId, visitId, reason, and timestamp, without storing plaintext And the value is visible for a maximum of 60 seconds per reveal session and auto‑re‑masks thereafter And repeated reveals within 5 minutes are recorded as separate events, each requiring confirmation
Per-Visit Audit Report Export
Given a supervisor with reporting permissions selects a specific visit and a date/time range When they request an audit report export Then the system generates a CSV and a PDF within 30 seconds containing all relevant log entries linked to that visit And all sensitive values are masked in the report; reveal events list actor, time, and reason And the report includes metadata: reportId, generatedAt (UTC), filters, and total events And the export is accompanied by a SHA‑256 checksum displayed in the PDF footer and returned by the API And access to the export artifact is restricted to reporting roles and is itself logged as an "export" event
Least-Privilege Access Controls
Given role‑based permissions are configured in CarePulse When a user without the "Last‑Mile Sensitive Access" privilege attempts to view or reveal a sensitive value for an unassigned visit Then the request is denied with 403 and an "access_denied" log entry is created And when a user with the privilege accesses sensitive data only for assigned visits within their shift window Then the request succeeds and is logged as "view_masked" or "reveal" accordingly And any permission grant, revoke, or scope change is logged with actor, target user, timestamp, and change summary And break‑glass access requires justification, auto‑expires after 1 hour, and is logged with a distinct action type
Remote Wipe of Cached Last-Mile Data
Given an admin initiates a remote wipe for a device or user session When the wipe command is issued Then online devices receive the command and purge encrypted caches within 2 minutes, rotating local encryption keys And offline devices enforce the wipe on next check‑in, completing purge within 2 minutes of reconnect And after purge, the app cannot display previously cached sensitive data offline And a "remote_wipe" event with deviceId, userId, issuedAt, acknowledgedAt, completedAt, and result is recorded And failure to complete within the SLA generates an admin alert
Configurable Retention and Purge Enforcement
Given an org admin configures audit log retention policies by jurisdiction and data type When the policy is saved Then the system validates against organization minimums and regulatory templates and rejects values below the minimum And when the scheduled purge job runs Then log entries older than the effective retention are irreversibly purged, and a purge summary (counts by type and time window) is recorded as a log event And legal holds can be applied to visits or users to suspend purge until cleared, with all hold changes logged And backups and replicas observe the same effective retention policies
Anomaly Detection and Admin Alerts
Given anomaly thresholds are configured (e.g., ≥3 reveal events by the same user within 15 minutes or ≥5 denied attempts within 10 minutes) When an anomaly condition is met Then an alert is sent to org admins via in‑app notification and email within 2 minutes, including userId, visitId, counts, times, and device info And alerts are de‑duplicated per condition/user/15‑minute window and require acknowledgment And the triggering events and the alert lifecycle (created, delivered, acknowledged) are logged
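The reveal-count threshold can be sketched with a per-user sliding window. This is a minimal illustration (timestamps in minutes, names assumed), not the production detection logic; clearing the window after an alert approximates the per-condition/user/15-minute de-duplication:

```python
from collections import defaultdict, deque

WINDOW_MIN = 15      # sliding window size in minutes
REVEAL_LIMIT = 3     # >= 3 reveals by the same user triggers an alert

def detect_reveal_anomalies(events):
    """events: iterable of (user_id, minute_timestamp) sorted by time.
    Returns the (user_id, minute) pairs at which an alert fires."""
    windows = defaultdict(deque)
    alerts = []
    for user, t in events:
        w = windows[user]
        w.append(t)
        while w and t - w[0] > WINDOW_MIN:
            w.popleft()          # drop events outside the window
        if len(w) >= REVEAL_LIMIT:
            alerts.append((user, t))
            w.clear()            # de-duplicate: one alert per user per window
    return alerts
```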
Admin Configuration and QA Workflow
"As an operations manager, I want to configure and review last-mile content so that guidance is accurate, standardized, and compliant across my agency."
Description

Provides an admin console to configure required last-mile fields per client or payer, set default cue radius and prompt templates, define code expiration and rotation rules, and manage approval workflows for user-submitted photo pins and notes. Includes bulk import via CSV/API, validation rules for completeness and format, change notifications to assigned caregivers, and a sandbox mode to test guidance for a location without affecting live visits. Tracks approvals and rejections with reasons to improve data quality.

Acceptance Criteria
Client/Payer Required Field Matrix
Given I am an admin configuring last-mile fields for a client or payer, When I set fields to Required, Optional, or Hidden and save, Then the configuration persists and is retrievable for that entity. Given a visit is created for a client with an associated payer, When a user opens the last-mile data form, Then the form indicates which fields are required and shows the source of the rule (client or payer). Given at least one required field is empty or fails format rules, When the user attempts to save, Then the save is blocked and inline errors list each missing/invalid field. Given the admin updates required field settings, When closed visits are viewed, Then stored data is unchanged and no retroactive validation is enforced. Given the admin updates required field settings, When new visits are created or open visits are edited after the change, Then the new settings apply to those records.
Default Cue Radius and Prompt Template
Given org-level defaults for cue radius (in meters) and audio prompt template are set, When a new client is created, Then these defaults pre-populate the client's settings. Given I set a cue radius, When the value is outside the allowed range of 25–500 meters, Then a validation error prevents saving. Given a prompt template contains placeholders like {door_code}, {parking_note}, {floor_room}, When guidance is generated, Then placeholders are replaced with available values and missing values are omitted without rendering braces or nulls. Given no client override exists, When a visit runs guidance, Then the org default cue radius and prompt template are used; Given a client override exists, Then the client override is used. Given GPS accuracy is ±10 meters, When the caregiver enters the configured cue radius, Then the hands-free audio cue triggers once and does not repeat within 60 seconds unless the caregiver exits and re-enters the radius.
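The placeholder-substitution rule (replace available values, omit missing ones without rendering braces or nulls) might look like the following sketch; `render_prompt` is a hypothetical helper, and the whitespace cleanup is one possible way to avoid gaps left by omitted values:

```python
import re

def render_prompt(template: str, values: dict) -> str:
    """Replace {placeholders} with available values; drop missing or None
    values instead of rendering braces or the string 'None'."""
    def sub(match):
        v = values.get(match.group(1))
        return str(v) if v is not None else ""
    text = re.sub(r"\{(\w+)\}", sub, template)
    return re.sub(r"\s{2,}", " ", text).strip()  # collapse gaps from omissions

prompt = render_prompt(
    "Door code {door_code}. {parking_note} Go to {floor_room}.",
    {"door_code": "4821", "floor_room": "3B", "parking_note": None},
)
```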
Code Expiration and Rotation Rules
Given a door/access code has an expiration date/time, When the current time is past the expiration, Then the code is excluded from caregiver guidance and an "expired" indicator is shown to admins. Given a rotation rule is set (e.g., monthly), When the next rotation window is 7 days away, Then the admin receives a reminder to update the code. Given a new rotated code is saved with an effective date, When the effective date arrives, Then guidance uses the new code and archives the previous code. Given a code is expired or archived, When exporting or viewing audit data, Then the historical code value remains accessible to admins and is stored encrypted.
Photo Pins and Notes Approval Workflow and Audit Trail
Given a caregiver submits a photo pin or last-mile note, When the submission is saved, Then its status is Pending Review and it is excluded from live guidance. Given I am an approver, When I approve a pending submission, Then its status becomes Approved and it appears in live guidance within 2 minutes. Given I am an approver, When I reject a pending submission, Then I must provide a reason (minimum 10 characters), the status becomes Rejected, and the submitter is notified with the reason. Given a submission changes status (Approved or Rejected), When viewing its details, Then the audit log shows action, actor, timestamp, and (for rejections) the reason.
Data Validation for Import and Manual Entry
Given the CSV import template is used, When an admin uploads a CSV, Then the system validates headers, required columns (e.g., client_id, payer_id, door_code, parking_notes, floor_room, cue_radius, expiration_date), data types (ISO 8601 dates, integers for cue_radius), and per-field formats, rejecting invalid rows. Given a CSV contains both valid and invalid rows, When processing completes, Then valid rows are imported and invalid rows are skipped; a results report shows totals and row-level errors, and the admin can download the error report. Given data is submitted via API, When the payload contains validation errors, Then the response returns per-record statuses and error messages and no invalid records are created. Given a user enters data manually in the admin console, When validation fails, Then inline messages display and saving is disabled until all required fields are valid.
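A minimal sketch of the row-level validation described above, assuming the listed columns. The two type rules shown (integer cue_radius, ISO 8601 expiration_date) come from the criterion; the function shape and error format are illustrative:

```python
import csv
import io
from datetime import date

REQUIRED = ["client_id", "payer_id", "door_code", "parking_notes",
            "floor_room", "cue_radius", "expiration_date"]

def validate_csv(text: str):
    """Return (valid_rows, errors); errors carry the 1-based row number so
    the admin's downloadable error report can point at exact lines."""
    reader = csv.DictReader(io.StringIO(text))
    missing = [c for c in REQUIRED if c not in (reader.fieldnames or [])]
    if missing:
        return [], [f"missing columns: {', '.join(missing)}"]
    valid, errors = [], []
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        try:
            row["cue_radius"] = int(row["cue_radius"])
            date.fromisoformat(row["expiration_date"])
        except ValueError as e:
            errors.append(f"row {i}: {e}")     # skip invalid, keep valid
            continue
        valid.append(row)
    return valid, errors
```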
Change Notifications to Assigned Caregivers
Given caregivers are assigned to a client, When an admin changes last-mile fields, codes, cue radius, or prompt templates for that client, Then each assigned caregiver receives an in-app notification within 5 minutes listing the changed items and effective time. Given a caregiver opens the notification, When they follow the link, Then they are taken to the client's last-mile guidance details reflecting the updated configuration. Given a notification cannot be delivered (e.g., device offline), When the caregiver next opens the app, Then the queued notification is delivered at that time.
Sandbox Mode for Location Guidance Testing
Given sandbox mode is enabled for a location, When an admin runs a guidance test, Then the system uses sandbox data and does not affect live visits, caregiver guidance, or analytics. Given sandbox mode is active, When viewing the UI, Then a persistent Sandbox indicator is visible and any edits are stored only in the sandbox context. Given sandbox mode is active, When simulating approach within the configured cue radius, Then audio cues and prompts render using the current configuration as they would in production. Given sandbox mode is disabled, When returning to live mode, Then no sandbox data is promoted to production and production data remains unchanged.

Drift Insights

Surfaces historical drift patterns by zone, time of day, caregiver, and payer with heatmaps and trend lines. Suggests schedule buffers, route reshuffles, or caseload swaps based on real bottlenecks. Empowers managers to make data‑backed adjustments that stick.

Requirements

Drift Metric Framework
"As an operations manager, I want consistent, trustworthy drift metrics across caregivers, zones, and payers so that I can compare patterns and make decisions with confidence."
Description

Define and compute core drift KPIs (e.g., on-time rate, early/late variance, dwell time, route deviation, missed/short visits) across dimensions (zone, time-of-day, caregiver, payer). Establish baselines and thresholds, normalize by service type and visit length, and support time-bucketing (hourly/daily/weekly). Handle incomplete or conflicting signals (EVV, GPS, voice notes, IoT sensors) via reconciliation rules. Expose a consistent metrics API for visualizations, recommendations, and reporting. Ensure timezone-aware calculations, data lineage, and versioned metric definitions for auditability.

Acceptance Criteria
Compute Core Drift KPIs per Dimension
Given an agency with a defined timezone, a date range, and dimension filters (zone, time_of_day, caregiver, payer) When the Drift Metric computation is executed for the selection Then it returns for each dimension combination: on_time_rate, early_variance_min, late_variance_min, dwell_time_min, route_deviation_rate, missed_visit_count, short_visit_count using the following definitions And on_time_rate = round(on_time_visits / total_eligible_visits, 3) where on_time_visits have |actual_start - scheduled_start| <= 5 minutes And early_variance_min = round(avg(scheduled_start - actual_start for visits where actual_start < scheduled_start), 1) And late_variance_min = round(avg(actual_start - scheduled_start for visits where actual_start > scheduled_start), 1) And dwell_time_min = round(avg(actual_end - actual_start), 1) And missed_visit_count = count of scheduled visits with no actual_start within [scheduled_start - 15m, scheduled_end + 60m] And short_visit_count = count of visits where actual_duration < scheduled_duration * 0.8 And aggregated rates are weighted by their denominators and counts are additive across child dimensions
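A minimal sketch of these KPI definitions, assuming visits arrive as dicts with times and durations in minutes. The missed-visit check here is simplified to a null actual_start rather than the full scheduling-window rule:

```python
def drift_kpis(visits):
    """Compute a subset of the core drift KPIs for one dimension combination.
    Field names and the missed-visit simplification are assumptions."""
    eligible = [v for v in visits if v["actual_start"] is not None]
    on_time = [v for v in eligible
               if abs(v["actual_start"] - v["scheduled_start"]) <= 5]
    late = [v["actual_start"] - v["scheduled_start"] for v in eligible
            if v["actual_start"] > v["scheduled_start"]]
    short = [v for v in eligible
             if v["actual_duration"] < v["scheduled_duration"] * 0.8]
    return {
        "on_time_rate": round(len(on_time) / len(eligible), 3) if eligible else 0.0,
        "late_variance_min": round(sum(late) / len(late), 1) if late else 0.0,
        "missed_visit_count": sum(1 for v in visits if v["actual_start"] is None),
        "short_visit_count": len(short),
    }
```

Because rates carry their denominators (here, `len(eligible)`), rolling child dimensions up to a parent means weighting each rate by its denominator, as the criterion requires.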
Route Deviation Metric Definition
Given a planned route polyline per visit and GPS track points sampled at ≤60-second intervals When computing route deviation for a visit Then the visit is marked deviated if the 30s rolling-median perpendicular distance from track to planned polyline > 200 meters for ≥ 3 consecutive minutes or any instantaneous point is > 1000 meters from the polyline And route_deviation_rate = round(deviated_visits / total_eligible_visits, 3) for the selection And when a planned polyline is unavailable, deviation is computed against a straight-line corridor between scheduled locations with a 150-meter buffer And GPS noise is smoothed using a 30-second rolling median before distance calculations And visits without any GPS samples are excluded from the route_deviation_rate denominator and flagged no_gps=true in lineage
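The deviation rule could be sketched as follows, taking per-minute perpendicular distances from track to polyline as input (the geometric distance computation itself is out of scope here). The window sizes are adapted to per-minute samples rather than the 30-second cadence in the criterion, so treat this purely as an illustration of the smoothing-plus-sustain logic:

```python
from statistics import median

def is_deviated(distances_m, window=3, threshold_m=200, sustain=3, hard_limit_m=1000):
    """Flag a visit as deviated: rolling-median-smoothed distance above the
    threshold for `sustain` consecutive samples, or any raw point past the
    hard limit."""
    if any(d > hard_limit_m for d in distances_m):
        return True
    run = 0
    for i in range(len(distances_m)):
        lo = max(0, i - window + 1)
        smoothed = median(distances_m[lo:i + 1])  # rolling-median GPS smoothing
        run = run + 1 if smoothed > threshold_m else 0
        if run >= sustain:
            return True
    return False
```

Note how the rolling median suppresses single-sample GPS spikes: an isolated 250 m reading between clean samples does not start a sustained run, while a genuine detour does.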
Timezone-Aware Hourly/Daily/Weekly Bucketing
Given an agency timezone (e.g., America/Chicago) and a date range that includes DST transitions When requesting hourly, daily, or weekly metric buckets Then bucket boundaries are computed in the agency timezone; day = 00:00:00–23:59:59 local; week starts Monday 00:00:00 local And hourly buckets during fall-back yield 25 buckets and during spring-forward yield 23, with non-overlapping local-time labels And caregiver events recorded in other timezones are converted to the agency timezone prior to bucketing And all timestamps in responses include the timezone offset and IANA zone identifier And metrics aggregated across buckets equal the corresponding unbucketed totals within documented rounding tolerances
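The 23/25-bucket behavior around DST transitions falls out naturally when bucket boundaries are computed in the agency zone and then stepped in real hours; a sketch using the standard-library zoneinfo (the helper name is an assumption):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def hourly_buckets(day_start_local: datetime, tz: str):
    """Return the UTC start times of the hourly buckets covering one local
    day in `tz`; DST transitions yield 23 or 25 buckets instead of 24."""
    zone = ZoneInfo(tz)
    start = day_start_local.replace(tzinfo=zone).astimezone(timezone.utc)
    next_day = day_start_local + timedelta(days=1)
    end = next_day.replace(tzinfo=zone).astimezone(timezone.utc)
    buckets = []
    t = start
    while t < end:
        buckets.append(t)
        t += timedelta(hours=1)
    return buckets

# 2024-11-03 is the US fall-back date: the local day spans 25 real hours.
fall_back = hourly_buckets(datetime(2024, 11, 3), "America/Chicago")
```

Converting all events to the agency zone before bucketing, as the criterion requires, guarantees the bucketed totals reconcile with the unbucketed ones regardless of where a caregiver's device recorded the event.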
Normalization by Service Type and Visit Length
Given visits of varying scheduled durations and service types with standard_duration_minutes defined When computing normalized drift metrics Then for each visit v: early_pct_v = (max(0, scheduled_start - actual_start)_minutes / scheduled_duration_minutes_v) * 100; late_pct_v = (max(0, actual_start - scheduled_start)_minutes / scheduled_duration_minutes_v) * 100; utilization_pct_v = (actual_duration_minutes_v / scheduled_duration_minutes_v) * 100 And early_variance_pct = round(avg(early_pct_v), 1); late_variance_pct = round(avg(late_pct_v), 1); dwell_utilization_pct = round(avg(utilization_pct_v), 1) for the grouping And cross–service-type aggregations use these normalized percentages for averages; raw minute-based metrics remain available with the _min suffix And service types lacking standard_duration_minutes default to scheduled_duration_minutes for normalization and are flagged defaulted_standard_duration=true in lineage
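The per-visit normalization formulas in sketch form; field names are assumptions, and rounding follows the definitions above:

```python
def normalized_drift(visit):
    """Normalize early/late minutes and utilization by scheduled duration so
    visits of different lengths and service types are comparable."""
    dur = visit["scheduled_duration_minutes"]
    early = max(0, visit["scheduled_start"] - visit["actual_start"])
    late = max(0, visit["actual_start"] - visit["scheduled_start"])
    return {
        "early_pct": round(early / dur * 100, 1),
        "late_pct": round(late / dur * 100, 1),
        "utilization_pct": round(visit["actual_duration_minutes"] / dur * 100, 1),
    }
```

For example, a 15-minute-late start counts as 25% drift on a one-hour visit but only 12.5% on a two-hour visit, which is exactly why cross-service-type aggregation uses the percentages rather than raw minutes.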
Signal Reconciliation and Confidence Scoring
Given EVV, GPS, IoT sensor, and voice note signals that may be incomplete or conflicting When deriving actual_start and actual_end per visit Then source precedence is EVV > GPS > IoT > voice for both start and end times And if EVV start exists but EVV end is missing and a non-EVV end candidate exists within 10 minutes after the last EVV/GPS event, set actual_end to that candidate and set end_inferred=true; otherwise set actual_end=null and exclude the visit from duration-based metrics And if the difference between the selected (highest-precedence) time and the next available signal exceeds 10 minutes, set confidence="low" and confidence_score according to source (EVV=1.0, GPS=0.8, IoT=0.7, voice=0.6) And if no signals exist within [scheduled_start - 60m, scheduled_end + 120m], count the visit as missed and exclude it from other denominators And all derived fields include source_attribution and inference flags in lineage
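The precedence and confidence rules could be sketched as follows for the start time. Signal shapes are assumptions, and the "normal" confidence label is illustrative (the criterion only specifies "low"):

```python
PRECEDENCE = ["EVV", "GPS", "IoT", "voice"]          # highest first
CONFIDENCE = {"EVV": 1.0, "GPS": 0.8, "IoT": 0.7, "voice": 0.6}

def reconcile_start(signals):
    """signals: {source: minute_timestamp}. Pick the highest-precedence time;
    flag low confidence when the next-best signal disagrees by > 10 minutes."""
    available = [s for s in PRECEDENCE if s in signals]
    if not available:
        return None                      # no signals: candidate for "missed"
    chosen = available[0]
    result = {"actual_start": signals[chosen], "source": chosen,
              "confidence_score": CONFIDENCE[chosen], "confidence": "normal"}
    if len(available) > 1 and abs(signals[chosen] - signals[available[1]]) > 10:
        result["confidence"] = "low"
    return result
```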
Baselines and Threshold Bands
Given at least 28 days of historical metrics prior to the selected range end When computing baselines and thresholds per dimension and bucket Then the baseline for a rate metric (e.g., on_time_rate) is the 28-day rolling median at the same local hour/day bucket, excluding the most recent 24 hours And the variability band is median ± 2*MAD (median absolute deviation) in the same units; for count metrics, use a Poisson 95% CI approximation And if fewer than 7 historical days exist for a bucket, fall back to agency-wide baselines and set baseline_insufficient=true And baseline values are timestamped (baseline_computed_at), carry metric_version/definition_id, and are retrievable via the Metrics API with include=baseline
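The median ± 2*MAD band in sketch form, where the input is one bucket's recent daily values (the function name and return shape are assumptions):

```python
from statistics import median

def baseline_band(history):
    """Baseline for one bucket: rolling median with a ±2*MAD variability band.
    MAD (median absolute deviation) is robust to outlier days."""
    m = median(history)
    mad = median(abs(x - m) for x in history)
    return {"baseline": m, "lower": m - 2 * mad, "upper": m + 2 * mad}

band = baseline_band([0.9, 0.92, 0.88, 0.91, 0.9])
```

MAD is preferred over standard deviation here because a single badly drifted day barely moves the band, so alerting thresholds stay stable.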
Metrics API Contract, Versioning, and Lineage
Given clients request metrics via GET /metrics/drift with query params (agency_id, start, end, tz, dimensions[], buckets, filters, include, version, page, page_size) When a valid request is made Then the response includes per row: dimension values, bucket window, metric fields, metric_version, definition_id, effective_from, timezone, lineage.trace_id, source_counts, and confidence fields, plus paging metadata And unknown or invalid params return HTTP 400 with machine-readable error codes; unauthorized returns 401; forbidden returns 403; page_size above max is clamped and indicated in the response And omitting version returns the latest stable metric_version; specifying an older version returns values computed under that definition without retroactive recomputation And schema-breaking changes increment metric_version; prior versions remain queryable for at least 12 months And numeric fields include explicit units in names or metadata and are rounded per metric definitions
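A sketch of the parameter-handling contract (machine-readable errors, clamped page_size, default-to-latest version). The maximum page size, error codes, and response shape are assumptions, not the real CarePulse API:

```python
MAX_PAGE_SIZE = 500  # assumed server maximum

def parse_metrics_query(params: dict):
    """Validate GET /metrics/drift query params: missing required params
    produce a 400 with machine-readable codes; oversized page_size is
    clamped and the clamping is indicated in the response."""
    errors = []
    if "agency_id" not in params:
        errors.append({"code": "missing_param", "param": "agency_id"})
    if errors:
        return {"status": 400, "errors": errors}
    clamped = False
    page_size = int(params.get("page_size", 100))
    if page_size > MAX_PAGE_SIZE:
        page_size, clamped = MAX_PAGE_SIZE, True
    return {"status": 200, "page_size": page_size,
            "page_size_clamped": clamped,
            "metric_version": params.get("version", "latest")}
```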
Segmented Heatmaps & Trends
"As a field operations lead, I want to visualize where and when drift occurs so that I can quickly identify hotspots and prioritize interventions."
Description

Provide interactive, mobile-first heatmaps and trend lines that surface drift patterns by zone, time-of-day, caregiver, and payer. Support filtering, multi-select comparisons, and drill-down from org-wide to caregiver-level views. Include tooltips with metric definitions, anomaly markers, and confidence indicators. Enable pinch/zoom on mobile, fast rendering for 12–24 months of data, and export as PNG/CSV. Respect role-based access, masking PHI where required.

Acceptance Criteria
Mobile Heatmap: Filter & Multi‑Select Compare by Zone and Time‑of‑Day
Given a Manager or Analyst on a mobile device with access to Drift Insights, When they open the Heatmap view, Then the default segmentation shows drift intensity by Zone and Time‑of‑Day for the last 30 days. Given the Filter panel, When the user multi‑selects 2–5 Zones and chooses one or more Time‑of‑Day buckets, Then the heatmap updates within 1.5 seconds to reflect only the selected segments and displays an on‑chart legend of active filters. Given Comparison mode is toggled on, When the user selects two segments (e.g., Zone A vs Zone B), Then the heatmap presents a side‑by‑side comparison with clear labels and a visible delta indicator per matched cell. Given a heatmap cell is tapped, When the tooltip appears, Then it shows metric name, definition, date range, segment keys, drift %, baseline, and sample size (n), and an anomaly icon if applicable.
Drill‑Down and Breadcrumb Navigation: Org → Zone → Caregiver
Given the org‑wide view with active filters, When a user taps a Zone label or cell, Then the view drills down to Zone‑level caregiver distribution within 1.0 second while preserving the active filters. Then a breadcrumb trail "Org > Zone > Caregiver" is visible; When the user taps a breadcrumb level, Then the view navigates back within 500 ms and restores the previous viewport and filter states. Given a caregiver is selected at Zone level, When drilling into caregiver‑level, Then identifiers and metrics are scoped to that caregiver and reflect the current date range and filters.
Trend Lines: Segmentation by Payer and Caregiver with Anomaly Markers and Confidence Bands
Given the Trends tab, When the user selects 1–5 Payers and optionally 1–5 Caregivers, Then distinct color‑coded lines render with a legend and a 0% baseline reference. Then anomaly markers appear at points flagged by the detection logic; When a marker is tapped, Then a tooltip shows anomaly type, magnitude, date/time, and contributing segments. Then shaded confidence bands render around each line; When the user toggles "Show confidence", Then bands show/hide within 300 ms without reloading the page. When the user expands the date range up to 24 months, Then the chart updates within 2.0 seconds and maintains correct legends and axis scales.
Role‑Based Access Control & PHI Masking Across Views and Exports
Given RBAC roles (Admin, Manager, Analyst, Caregiver), When a non‑Admin/Manager accesses heatmaps or trends, Then caregiver identifiers other than self are masked (e.g., "CG‑###") and no patient PHI is displayed anywhere in UI or tooltips. Given an API request for data outside the user's scope, When attempted, Then the backend returns HTTP 403 and the UI shows an authorization message without leaking restricted counts. Given Export actions (PNG/CSV), When executed by any role, Then exported content reflects the same masking and scope as on‑screen and excludes restricted dimensions. Then all access events (view, export, drill‑down) are audit‑logged with user ID, timestamp, resource, and scope.
Mobile Gestures & Interactions: Pinch/Zoom, Pan, and Touch Targets
Given a mobile device, When the user performs pinch/zoom on heatmaps or trends, Then the chart zooms smoothly at ≥60 FPS during the gesture and settles within 200 ms after gesture end. When the user pans a zoomed view, Then content moves without jitter and axes/legends update to match the visible range. When the user double‑taps, Then the zoom resets to fit the full chart. Then all interactive controls and legend chips have touch targets ≥44×44 pt and provide visual feedback on tap.
Performance at Scale: 12–24 Months Rendering and Interaction Latency
Given up to 24 months of data (≤200k visits total), When loading heatmaps or trends on a mid‑tier mobile device (e.g., iPhone 12/Pixel 5) over 4G, Then time‑to‑first‑render is ≤2.5 s for 12 months and ≤3.0 s for 24 months. Then subsequent interactions (filter change, segment toggle, drill‑down) complete and redraw in ≤600 ms for cached data. Then the page remains responsive (Interaction to Next Paint ≤200 ms p95) and memory usage stays ≤300 MB during typical interactions without browser crashes.
Exports: PNG and CSV with Active Filters, Context, and Anonymization
Given a visible chart (heatmap or trend) with active filters and drill‑down context, When the user taps Export PNG, Then a PNG is generated within 3 seconds at ≥2× device pixel ratio including chart title, legend, active filters, date range, and timestamp. Given the same context, When the user taps Export CSV, Then a CSV is generated within 3 seconds containing columns: date/time bucket, zone, time‑of‑day, caregiver identifier (masked as applicable), payer, metric value(s), anomaly flag, confidence bounds (lower/upper), and sample size (n), and only includes data in scope. Then on mobile, Export invokes the native share sheet; When the device is offline, Then export actions are disabled with a tooltip explaining connectivity is required.
Recommendation Engine for Buffers & Reassignments
"As a scheduling manager, I want actionable suggestions with clear impact so that I can implement improvements quickly without breaking compliance or overloading staff."
Description

Generate data-backed suggestions such as schedule buffers, route reshuffles, or caseload swaps to reduce drift. Optimize under operational constraints (caregiver qualifications, payer rules, visit windows, travel time, labor limits). Provide confidence scores, expected impact (on-time rate, drive time), and plain-language rationale. Offer one-click application to create draft schedule changes in CarePulse with undo/rollback and change logs.

Acceptance Criteria
Constraint-Compliant Suggestions for Weekday AM Zone Routes
Given a zone and time window are selected and current schedules are loaded And caregiver, payer, visit window, travel, and labor constraint catalogs are configured When the engine generates suggestions Then 100% of suggestions satisfy required visit qualifications for assigned caregivers And 100% of suggestions respect payer authorization rules (units, visit length, frequency) And 100% of suggestions keep visit start times within configured visit windows And 100% of suggestions meet labor limits (daily/weekly hours, breaks, overtime policies) And 100% of suggestions are feasible under travel time using the configured routing profile And any candidate that violates a constraint is excluded and logged with a reason code
Complete Suggestion Metadata with Confidence and Impact
Given suggestions are generated for the selected scope When suggestions are displayed Then each suggestion includes fields: suggestionId (UUID), type ∈ {buffer, route_reshuffle, caseload_swap}, affectedCaregivers, affectedVisits And each suggestion includes confidenceScore as integer 0–100 And each suggestion includes expectedImpact.onTimeRateDelta (percentage points, signed) and expectedImpact.driveTimeDelta (minutes, signed) And each suggestion includes a plain-language rationale ≥ 20 characters that references at least one observed pattern (zone, timeWindow, caregiver, or payer) And all numeric values show units and sign relative to baseline
One-Click Apply to Draft with Undo and Change Log
Given a user with Scheduler or Manager role views a suggestion When the user clicks Apply Then a Draft change set is created containing all proposed modifications And the Draft change set is visible in CarePulse scheduling with status "Draft" And a Change Log entry is created with {userId, timestamp, suggestionId, action: APPLY, preSnapshotId, postSnapshotId, rationale, confidenceScore, expectedImpact} And an Undo control is available that reverts the Draft change set in one click within 24 hours, creating a Change Log entry with action: UNDO And no Published schedules are modified until the user explicitly publishes the Draft
Do-No-Harm Thresholds and Override Flow
Given baseline metrics for the selected scope are known When the engine evaluates candidate suggestions Then it suppresses any suggestion whose projected on-time rate delta < -1.0 percentage point or projected drive time delta > +5.0% And such suppressed candidates are listed in diagnostics with reason codes And the user may override by enabling "Allow tradeoffs" and entering a free-text justification (≥ 10 characters), after which suppressed candidates can be applied and are flagged in the Change Log
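The do-no-harm filter in sketch form; the field names and the flag-on-override behavior are illustrative:

```python
ON_TIME_FLOOR_PP = -1.0  # suppress if projected on-time delta drops below this
DRIVE_CEIL_PCT = 5.0     # suppress if projected drive-time increase exceeds this

def filter_suggestions(candidates, allow_tradeoffs=False):
    """Split candidates into kept and suppressed per the thresholds above.
    With allow_tradeoffs=True, harmful candidates pass through but stay
    flagged so the Change Log can mark them."""
    kept, suppressed = [], []
    for c in candidates:
        harmful = (c["onTimeRateDelta"] < ON_TIME_FLOOR_PP
                   or c["driveTimeDeltaPct"] > DRIVE_CEIL_PCT)
        if harmful and not allow_tradeoffs:
            suppressed.append({**c, "reason": "do_no_harm_threshold"})
        else:
            kept.append({**c, "flagged": harmful})
    return kept, suppressed
```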
Backtested Impact Accuracy by Confidence Tier
Given a 4+ week historical backtest dataset with realized outcomes When the engine runs in simulation mode Then for suggestions with confidenceScore ≥ 80, the mean absolute error of onTimeRateDelta ≤ 2.0 percentage points and driveTimeDelta ≤ 3 minutes And for suggestions with confidenceScore 50–79, the mean absolute error of onTimeRateDelta ≤ 4.0 percentage points and driveTimeDelta ≤ 7 minutes And the direction of predicted impact matches actual ≥ 75% of the time overall
Performance and Responsiveness at Zone Scale
Given a zone with ≤ 200 visits/day and ≤ 40 caregivers When the engine generates suggestions Then the first suggestion set returns within 5 seconds at the 95th percentile And for 201–600 visits/day, results return within 12 seconds at the 95th percentile And the UI shows a non-blocking loading indicator during processing And timeouts (> 15 seconds) surface a retry option and do not freeze the UI
Graceful Handling of Insufficient Data or Conflicts
Given the selected scope has insufficient data (< 50 visits in the lookback) or constraints are mutually conflicting When the user requests suggestions Then the engine returns zero suggestions and displays "No safe suggestions available" with at least one reason code from the defined list And the message provides one actionable next step (e.g., expand date range, adjust filters) And an event is logged with {userId, timestamp, scope, reasonCodes}
What-if Drift Simulator
"As a regional supervisor, I want to simulate adjustments before applying them so that I can choose the most effective plan with minimal risk."
Description

Allow managers to test hypothetical changes (e.g., add 10-minute buffers to AM visits in Zone A, swap caregiver assignments on Tuesdays) and preview projected KPIs (on-time rate, overtime risk, mileage). Support side-by-side scenario comparison, save/share scenarios, and annotate decisions. Use the same constraints and cost models as the recommendation engine to ensure fidelity.

Acceptance Criteria
Add 10-Minute AM Buffers in Zone A
Given a manager selects Zone A and a 7-day horizon with AM window (06:00–12:00) When they apply a +10 minute buffer to all AM visits in Zone A and run the simulation Then projected KPIs display on-time rate (%), overtime risk (projected overtime hours), and total mileage (mi) And the UI shows absolute values and ±% deltas versus baseline for each KPI And only AM visits in Zone A are affected; other zones/times remain within ±0.1% of baseline And results are computed and rendered in ≤ 10 seconds for ≤ 1,000 visits in the selected horizon
Swap Tuesday Caregiver Assignments
Given a manager selects Tuesday and two caregivers to swap assignments When the swap is applied and the simulation is run Then hard constraints (licensure, max hours, payer rules, visit time windows, travel-time feasibility) are validated before computing results And if any hard constraint would be violated, the simulation is blocked with a clear error listing the violating visits and rules; no KPIs are shown And if only soft constraints (continuity, preference) are impacted, results are computed with visible warnings and penalty scores And KPIs update to reflect the swap with deltas versus baseline
Compare Up To 4 Scenarios Side-by-Side
Given a baseline and at least two computed scenarios exist When the comparison view is opened Then the user can select 2–4 scenarios to compare side-by-side including the baseline And the same date range and filters are applied across all compared scenarios And KPIs (on-time rate, overtime risk, total mileage) are shown in columns with best values highlighted and deltas versus baseline And sorting by any KPI reorders the scenario columns accordingly
Save, Share, and Reopen Scenarios
Given a computed scenario When the user saves it with a name and optional description Then the system stores author, timestamp, data snapshot ID/timestamp, and modelVersion ID When the scenario is shared with specified teammates with view or edit permissions Then recipients can access it according to the granted role and open the scenario to reproduce identical KPIs (±0.1%) against the same data snapshot And duplicating a scenario creates a new scenario with a new ID without altering the original
Annotate Scenarios with Decisions
Given a saved scenario When the user adds an annotation containing rationale and decision Then the annotation is attributed to the author, timestamped, and supports plain text up to 2,000 characters And annotations are visible to viewers with access and persist across scenario edits and duplicates And removing an annotation requires confirmation and is recorded in the scenario history
Fidelity to Recommendation Engine Models
Given the recommendation engine exposes a constraints/cost model version ID When the simulator runs any scenario Then it invokes the same engine and displays the modelVersion ID used And for a standard test dataset and change set, simulator KPIs match the engine’s direct outputs within ±0.1% for on-time rate and ±0.5% for mileage and overtime hours
Infeasible Scenario Handling and Messaging
Given a set of edits that renders the schedule infeasible under hard constraints When the simulation is executed Then the system returns a "No feasible solution" state in ≤ 10 seconds and lists the top binding constraints and affected visits And Save and Share actions are disabled until the scenario is modified to become feasible
Real-time Drift Data Pipeline
"As a compliance-focused operations manager, I want up-to-date drift data so that I can intervene during the same day rather than after issues escalate."
Description

Ingest, normalize, and sync data from scheduling, EVV/telephony, GPS, voice notes, and optional IoT sensors in near real time. Implement ID mapping, deduplication, late-arriving data handling, and backfill. Perform data quality checks with alerts for anomalies (e.g., missing clock-ins). Ensure scalable storage for 24 months, GDPR/HIPAA-aligned retention, and low-latency updates to metrics and visualizations.

Acceptance Criteria
Near-Real-Time Multi-Source Ingestion & Normalization
Given authorized connectors for scheduling, EVV/telephony, GPS, voice notes, and IoT sensors are configured When events are produced by any source Then 95th percentile end-to-end latency from event_time to availability in the analytics store is <= 60 seconds and max latency <= 120 seconds Given a sustained load of 2,000 events per minute with bursts of 10,000 events per minute for up to 5 minutes When the pipeline runs Then no data loss occurs and latency SLOs are met Given source clock skew up to ±2 minutes When normalizing records Then event_time, ingest_time, source_id, and canonical timezone are recorded and used consistently in downstream computations Given an unexpected or new field appears in a payload When parsing the message Then the pipeline does not fail, unknown fields are captured for later processing, and a non-critical alert is emitted
Deterministic ID Mapping & Deduplication Across Sources
Given caregiver, client, visit, and payer identifiers from multiple sources When records are ingested Then 99.5%+ of records are assigned a canonical ID via the mapping table and unmapped records are quarantined with an alert Given two or more events represent the same real-world action (same external_id or composite key within a 5-minute window) When processed Then only one canonical record exists in storage via idempotent upsert using a deterministic event_id Given a mapping change merges or splits entities When the mapping table is updated Then affected historical records are relinked and impacted aggregates are recomputed within 10 minutes without creating duplicates
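Deterministic event IDs are what make the upsert idempotent: re-delivered copies of the same source event hash to the same key and overwrite rather than duplicate. A sketch (the 5-minute bucketing shown is a coarse approximation of the composite-key window, and the in-memory dict stands in for the real store):

```python
import hashlib

def event_id(source: str, external_id: str, minute: int) -> str:
    """Deterministic key: the same source event always hashes to the same ID,
    so retries and duplicate deliveries collapse into one record."""
    key = f"{source}|{external_id}|{minute // 5}"  # coarse 5-minute bucket
    return hashlib.sha256(key.encode()).hexdigest()[:16]

store = {}

def upsert(record):
    """Idempotent upsert keyed by the deterministic event_id."""
    store[event_id(record["source"], record["external_id"], record["minute"])] = record
```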
Late-Arriving Events Handling & Backfill
Given events may arrive up to 7 days late When they are ingested Then affected aggregates and Drift Insights metrics are updated within 10 minutes and no double counting occurs Given a backfill of the last 30 days is initiated for an agency When the job runs Then throughput averages >= 5,000 events/second and real-time ingestion p95 latency does not exceed 120 seconds during the backfill Given watermarks are in place per data source When queries are executed Then the UI displays the most recent watermark time ("complete through") and excludes data beyond the watermark from finalized metrics
Automated Data Quality Checks & Anomaly Alerts
Given the schedule indicates a visit start time When 10 minutes have elapsed without an EVV clock-in for that visit Then a missing clock-in anomaly is created and an alert is sent to the configured channel within 2 minutes with visit, caregiver, client, and link to details
Given GPS pings are processed When calculated travel speed exceeds 150 km/h or GPS drift exceeds 1 km within 1 minute Then an outlier anomaly is recorded and visible in the QA dashboard
Given any source stops sending data When 5 minutes of inactivity are detected Then a high-priority pipeline health alert is sent and reflected in the status page
Given repeated anomalies for the same entity within a short window When alerts are generated Then they are deduplicated to at most one alert per entity per 15 minutes
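The per-entity alert deduplication in the last criterion can be sketched as a simple window check (times in seconds; the 15-minute window comes from the criteria, the class name is illustrative):

```python
class AlertDeduper:
    """Suppress repeat anomaly alerts: at most one alert per entity per window."""

    def __init__(self, window_s: int = 15 * 60):
        self.window_s = window_s
        self._last_sent = {}  # entity_id -> time of last alert sent

    def should_send(self, entity_id: str, now_s: float) -> bool:
        last = self._last_sent.get(entity_id)
        if last is not None and now_s - last < self.window_s:
            return False  # still inside the dedup window
        self._last_sent[entity_id] = now_s
        return True
```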
Low-Latency Updates to Drift Insights Visualizations
Given new events that affect drift metrics are ingested When the user views heatmaps or trend lines Then the metrics reflect those events within 60 seconds p95 and the UI shows a Last updated timestamp
Given a manual refresh is requested When the dashboard reloads Then data freshness is within 15 seconds of current watermark
Given recent late-arriving data changes prior aggregates When the dashboard is viewed Then deltas are reflected and a "metrics revised" badge is shown for impacted time buckets
Secure, Scalable Storage & Retention Compliance (24 Months, GDPR/HIPAA)
Given production operation When data is stored Then PHI is encrypted at rest (AES-256) and in transit (TLS 1.2+), access is role- and tenant-scoped, and access events are audit-logged for 24 months
Given retention policies When records exceed 24 months of age or a data subject deletion request is approved Then records are purged from raw, normalized, aggregates, indices, and backups within 30 days with an auditable deletion log
Given growth in tenant volume When total stored normalized event data reaches 2 TB Then query performance for 30-day metrics remains p90 < 3 seconds and p99 < 10 seconds
Given non-production environments When data is replicated Then direct identifiers are masked or tokenized and no PHI is exposed to non-production roles
Operational Observability, Retries, and Dead-Letter Handling
Given transient downstream failures When writes fail Then automatic retries with exponential backoff occur up to 3 attempts before sending the record to a dead-letter queue without data loss
Given records land in the DLQ When the operator triggers a replay with updated configs Then 99%+ of DLQ records are reprocessed successfully and removed from the DLQ with original ordering preserved where applicable
Given pipeline health SLIs are defined (throughput, lag, error rate) When viewed in the Ops dashboard Then SLOs are displayed and alert thresholds are configured with on-call routes
Given a deploy When a new pipeline version is released Then canarying is applied to at most 10% of traffic for 15 minutes with automatic rollback on error rate > 1%
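The retry-then-dead-letter behavior in the first criterion might look like this sketch: three attempts with doubling backoff, and on exhaustion the record is returned for the DLQ rather than dropped (the sleep hook is injectable so tests run instantly):

```python
def write_with_retries(write, record, max_attempts=3, base_delay_s=1.0,
                       sleep=lambda s: None):
    """Try a downstream write with exponential backoff; on exhaustion,
    return a dead-letter entry instead of losing the record."""
    delay = base_delay_s
    for attempt in range(1, max_attempts + 1):
        try:
            write(record)
            return None  # success: nothing for the DLQ
        except Exception as err:
            if attempt == max_attempts:
                return {"record": record, "error": str(err), "attempts": attempt}
            sleep(delay)
            delay *= 2  # 1s, 2s, ... exponential backoff between attempts
    return None
```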
Audit-Ready Drift Reports
"As an agency owner, I want audit-ready drift documentation so that I can satisfy payer reviews and demonstrate continuous improvement without manual compilation."
Description

Produce one-click, exportable reports (PDF/CSV) summarizing drift trends, root causes, and actions taken, segmented by payer and location. Include metric definitions, methodology notes, and change history for schedule modifications applied via Drift Insights. Support scheduled delivery, branding, and secure sharing with auditors and payer partners, respecting role-based permissions.

Acceptance Criteria
One-Click Export (PDF/CSV) of Audit-Ready Drift Report
Given a user with Reports:Export permission selects a date range ≤ 180 days and clicks Export, When PDF is selected, Then a PDF downloads within 10 seconds (p90) containing sections: Drift Trends, Root Causes, Actions Taken, and Segmentation Summary. Given the same selection, When CSV is selected, Then a CSV downloads within 10 seconds (p90) containing rows for drift metrics with segment columns (payer, location, zone, caregiver, time_of_day) and fields for root_cause and action_taken. Given any export, When the file is opened, Then the header shows report title, tenant name, report ID, generation timestamp with timezone, selected date range, and applied filters. Given the PDF export, When reviewed, Then it includes heatmaps and trend lines visualizing drift by zone, time of day, caregiver, and payer. Given no matching data for the selected filters, When export is requested, Then a valid file downloads that clearly states “No data for selected filters” without error.
Segmentation and Filtering by Payer and Location
Given multi-select filters for payer and location are available, When specific payers and locations are selected, Then all report metrics and visuals recalculate to include only the selected scope. Given user-defined filter presets are supported, When a preset is applied before export, Then the report uses that preset and displays the preset name in the header. Given conflicting filters produce zero results, When applied, Then the UI shows 0 results and export produces a valid file with empty-state sections. Given time-of-day buckets are defined in tenant settings, When exporting, Then the same bucket labels are used consistently across tables and charts.
Metric Definitions and Methodology Included in Exports
Given a PDF export is generated, When the “Definitions & Methodology” appendix is opened, Then it lists formulas and units for drift rate, on-time threshold, late window, root cause categories, and action types. Given a CSV export is generated, When the file is opened, Then the first commented lines (prefixed with “#”) include metric definitions, data sources, calculation windows, and configuration version. Given report filters and time zone are set, When exporting, Then the methodology section states the time zone used, data freshness timestamp, and inclusion/exclusion rules. Given metric configuration is updated in-app, When exporting afterward, Then the report displays the configuration version and effective date matching the current settings.
Change History of Schedule Modifications Included
Given schedule modifications made via Drift Insights fall within the selected date range, When exporting, Then the report includes a Change History listing change_id, timestamp, actor, affected entity, before_value, after_value, rationale, and status (applied/reverted). Given no modifications exist within range, When exporting, Then the Change History section appears with a clear “No changes in range” message. Given any change references caregiver or client, When exported, Then the report shows caregiver ID and anonymized client ID only (no PHI such as full names or DOB). Given a change_id link appears in the PDF, When clicked by an authenticated and authorized user, Then it opens the corresponding change record in-app.
Scheduled Delivery with Branding
Given an Admin schedules weekly delivery at 07:00 America/Chicago to specific recipients, When the schedule triggers, Then recipients receive the report within 15 minutes with the correct date range applied. Given tenant branding (logo, colors, header/footer) is configured, When exporting or delivering, Then the PDF reflects tenant branding and the CSV includes tenant name in header comments (no color styling). Given delivery settings allow attachment or secure link, When configured for attachment, Then the PDF is attached; When configured for link, Then only the secure link is included (no attachment). Given a scheduled run fails, When 3 retries are exhausted, Then the owner is notified and the run is recorded as Failed with error details. Given a “Send test now” action is used, When executed, Then only the requester receives a test email within 2 minutes and the PDF shows a TEST watermark.
Role-Based Permissions and Secure Sharing
Given a Manager with Reports:Export permission initiates a share, When scope is limited to selected payers/locations, Then recipients can only access data within that scope. Given an external auditor without an account must view a report, When a secure link is created, Then the system issues a tokenized link scoped to selected filters that expires after 7 days by default. Given a secure link is expired or revoked, When it is accessed, Then access is denied with an “expired or revoked” message and the attempt is logged. Given an unauthorized user without Reports:View permission obtains a link, When they attempt access, Then the system denies access regardless of possession of the link. Given any export or share action completes, When auditing the system, Then an audit log entry records actor, timestamp, scope (payers, locations, date range), recipients or delivery method, and outcome (success/failure).
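One plausible shape for the tokenized, scope-limited, expiring share link above is an HMAC-signed payload carrying the filter scope and an expiry. This is an illustrative sketch only: a production system would use a managed signing key, and revocation would require a server-side deny list in addition to the expiry check.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustrative; real deployments use a managed key


def make_share_token(scope: dict, ttl_s: int = 7 * 24 * 3600, now=None) -> str:
    """Issue a link token scoped to selected filters, expiring after ttl (7 days default)."""
    now = time.time() if now is None else now
    payload = json.dumps({"scope": scope, "exp": now + ttl_s},
                         sort_keys=True).encode("utf-8")
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode("ascii") + "." + sig


def verify_share_token(token: str, now=None):
    """Return the scope if the token is authentic and unexpired, else None."""
    now = time.time() if now is None else now
    try:
        b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(b64.encode("ascii"))
    except Exception:
        return None  # malformed token: deny
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered token: deny
    data = json.loads(payload)
    if now >= data["exp"]:
        return None  # expired: deny (and the attempt would be audit-logged)
    return data["scope"]
```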

Offline Sentinel

Maintains drift detection and ETA estimates when connectivity drops by caching routes, maps, and geofences on device. Queues actions and EVV stamps for automatic sync once online. Keeps rural, elevator, and basement visits compliant and on‑time even off‑grid.

Requirements

On-Device Route, Map, and Geofence Cache
"As a caregiver working in low-connectivity areas, I want my routes and geofences available offline so that I can navigate and verify visits without internet access."
Description

Prefetch and store route legs, turn-by-turn instructions, map tiles (with configurable radius buffer around planned routes), and geofence definitions on the device at shift start and on route updates. Cache must be encrypted at rest, support versioning, LRU eviction, TTLs, and a configurable storage cap. Provide graceful fallback to last-known good cache if refresh fails. Support background prefetch triggers upon assignment changes. Ensure compatibility with iOS/Android offline map capabilities and handle tile compression for low storage devices. Expose cache health metrics and per-visit cache readiness to the UI.

Acceptance Criteria
Shift Start Prefetch Success
Given org config tileBufferMeters=500 and prefetchTimeoutSeconds=120 and a caregiver starts a shift with 8 scheduled visits and network connectivity is available When the shift start event is received Then the app triggers prefetch within 5 seconds And for each visit caches route legs, turn-by-turn instructions, geofence definitions, and all map tiles covering the route polyline plus a 500m buffer And marks each visit cacheReady=true before the first ETA is computed And completes prefetch within 120 seconds And records metrics tilesFetched, bytesCached, durationMs, and errorsCount per visit
Background Prefetch on Assignment Change
Given a caregiver is mid-shift and receives a new or updated visit assignment While the app may be in background When the assignment change event is processed Then a background prefetch starts within 10 seconds And caches route legs, turn-by-turn instructions, geofence definitions, and buffered map tiles for the affected visit(s) And does not interrupt any active navigation session And sets cacheReady=true for the new/updated visit within prefetchTimeoutSeconds (<=120s) or reports a prefetch error if exceeded And emits a PrefetchStarted and PrefetchCompleted (or PrefetchFailed) telemetry event with timings
Fallback to Last-Known Good Cache on Refresh Failure
Given a previous cache version v1 exists for today’s visits And a refresh to version v2 is initiated When the device loses connectivity or the content server returns errors during refresh Then the app retains and uses v1 for ETAs, drift detection, and map rendering without crashing And flags the cache as stale in metrics (stale=true, lastRefreshError set) And displays a non-blocking UI indicator that offline cache is stale And schedules a retry with exponential backoff starting at 30 seconds up to 10 minutes
Encryption At Rest and Key Lifecycle
Given platform keystore is available (iOS Secure Enclave/Keychain, Android Hardware-Backed Keystore) When caching any route, instruction, geofence, or tile artifact Then data at rest is encrypted using AES-256-GCM with a per-installation key stored in the platform keystore And no cached file is readable in plaintext via filesystem inspection And on user logout or org key rotation the encryption key is invalidated and all cached artifacts are wiped within 5 seconds And subsequent cache reads without re-authentication fail and trigger a re-prefetch on next login
Storage Cap, LRU Eviction, TTL Expiry, and Tile Compression
Given org config storageCapMB=500 and ttlHours=24 and device free space may be limited When a prefetch would exceed the storage cap Then the cache evicts least-recently-used items not associated with the current shift until total size <= 500MB And if still over cap or device free space < 100MB, the app compresses map tiles to target quality 80 and retries And achieves >=30% average size reduction across tiles selected for compression And if size constraints cannot be met, sets cacheReady=false for affected visits and raises a LowStorage warning event And when TTL expires for any cached item Then the item is marked stale and a refresh is queued immediately if online or deferred until connectivity returns while remaining usable
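The LRU-with-protected-items eviction above can be sketched as follows. The cache is assumed to be ordered least-recently-used first (callers would move entries to the end on access); current-shift artifacts are never evicted even when the cap cannot otherwise be met:

```python
from collections import OrderedDict


def evict_to_cap(cache: OrderedDict, cap_bytes: int, protected: set) -> int:
    """Evict LRU items not associated with the current shift until the
    total size fits under the cap. Returns the number of bytes freed."""
    total = sum(item["bytes"] for item in cache.values())
    freed = 0
    for key in list(cache.keys()):  # iterates oldest (least recently used) first
        if total <= cap_bytes:
            break
        if key in protected:
            continue  # never evict current-shift artifacts
        size = cache.pop(key)["bytes"]
        total -= size
        freed += size
    return freed
```

If the cap still cannot be met after eviction, the criteria above fall back to tile compression and, failing that, a LowStorage warning with cacheReady=false.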
Cross-Platform Offline Cache Parity (iOS and Android)
Given supported devices on iOS 15+ and Android 11+ When prefetch completes and devices are switched to airplane mode Then both platforms render the same routes and turn-by-turn instructions and enforce geofences using only cached data And ETAs and drift detection operate using cached artifacts without network calls And parity tests show map coverage area difference <=5% and instruction step count difference <=1 step between platforms And no platform-specific crashes or background task violations occur during prefetch or use
Cache Health Metrics and UI Readiness Indicators
Given a caregiver opens the Today view When viewing each scheduled visit Then the UI displays cache readiness (Ready, Stale, Not Ready) with reason codes (e.g., TTLExpired, LowStorage, PrefetchFailed) And a cache health panel exposes lastPrefetchAt, cacheVersion, tilesCount, bytesCached, ttlExpiresAt, stale flag, and lastError And these metrics are also emitted to telemetry within 5 seconds of prefetch completion or failure And the per-visit cacheReady state updates in under 2 seconds after cache state changes
Offline ETA & Drift Detection
"As a caregiver, I want accurate ETAs and drift alerts while offline so that I stay on schedule and can correct my route quickly."
Description

Compute ETAs and on-route drift locally using GPS, cached maps, and schedule data when the device is offline. Detect deviations from planned route or schedule using configurable thresholds (distance/time). Continuously update ETA and provide in-app and local notifications for drift, including suggested corrective actions based on cached routing. Record drift events and ETA history for later sync. Support degraded positioning (e.g., last known fix, accelerometer dead-reckoning) when GPS is weak. Ensure CPU/battery use stays within mobile constraints and runs reliably in the background.

Acceptance Criteria
Offline Local ETA Calculation
Given the device is offline and today’s route and map tiles are cached When the caregiver is en route to the next scheduled visit and a new location update is received Then the app computes ETA locally within 2 seconds using cached routing and current/last-known position And updates ETA at least every 15 seconds while speed > 0.5 m/s and every 60 seconds when stationary And displays an "Offline ETA" indicator in the UI And timestamps and stores each ETA in local history And no network requests are attempted while offline (verified via network inspector)
Drift Detection Thresholds & Alerting
Given drift thresholds are configured as off-route distance ≥ 150 m or behind-schedule ≥ 5 min and hysteresis = 10 s And the device is offline with a cached route loaded When the user’s trajectory violates any threshold continuously for ≥ 10 s Then a drift event is recorded with type (distance/time), threshold exceeded, current location, planned waypoint, and timestamp And a local notification and in-app banner are shown within 5 seconds of detection And the UI shows delta distance/time and the next planned waypoint And repeat alerts are suppressed until the user returns within threshold or 3 minutes elapse (whichever comes first)
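The threshold-plus-hysteresis rule above (violation must persist continuously for 10 s before an event fires, and returning within threshold resets the timer) can be sketched as a small state machine. Defaults mirror the criteria; the class and field names are illustrative:

```python
class DriftDetector:
    """Raise a drift event only after a threshold is violated continuously
    for the hysteresis period."""

    def __init__(self, off_route_m=150.0, behind_s=300.0, hysteresis_s=10.0):
        self.off_route_m = off_route_m
        self.behind_s = behind_s
        self.hysteresis_s = hysteresis_s
        self._violating_since = None  # time the current violation run started

    def update(self, t_s, off_route_m, behind_schedule_s):
        violating = (off_route_m >= self.off_route_m
                     or behind_schedule_s >= self.behind_s)
        if not violating:
            self._violating_since = None  # back within threshold: reset timer
            return None
        if self._violating_since is None:
            self._violating_since = t_s
        if t_s - self._violating_since >= self.hysteresis_s:
            kind = "distance" if off_route_m >= self.off_route_m else "time"
            return {"type": kind, "at": t_s}
        return None
```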
Offline Corrective Route Suggestions
Given cached map graph and planned route are available And a drift event has been detected while offline When corrective routing is requested automatically upon drift detection Then the app computes a corrective path to the next scheduled waypoint within 3 seconds using cached data And presents the top suggestion with estimated added time (ΔETA) and the first two maneuvers And if the user taps "Resume route", the ETA is recalculated immediately and drift state clears once within threshold And the suggestion and user action are logged locally for later sync
Degraded Positioning Fallback Logic
Given GPS is weak (no valid fix for 30 s or HDOP > 5) while offline When movement is detected via accelerometer/gyroscope Then the app switches to dead-reckoning using last-known position/heading and step/distance estimation And marks location confidence as Low and displays a "Low GPS" indicator And continues ETA updates at least every 30 seconds And in a 5-minute walking test, cumulative distance error is ≤ 15% versus ground truth And upon GPS recovery, position is reconciled with smoothing so visual jump ≤ 50 m
Background Operation & Reliability
Given the app is backgrounded with an active route while offline When the caregiver travels for 3 hours with the screen off and occasional app switches Then ETA computation and drift detection continue without user interaction And local notifications for drift are delivered within 5 seconds of detection And drift events and ETA history persist across app restarts and OS process kills And after an OS kill, services resume within 10 seconds of relaunch or the next location event
Battery and CPU Resource Constraints
Given a 60-minute offline navigation session on a mid-tier device (e.g., Pixel 5 or iPhone 12) When CPU and battery are profiled for the Offline Sentinel processes Then average CPU utilization for the background service is ≤ 20% with bursts ≤ 50% lasting < 2 s And additional battery drain attributable to the app is ≤ 3% per hour while moving and ≤ 1% per hour while stationary And location/routing sampling adapts (e.g., lower frequency when stationary) to maintain these limits
Offline Event and EVV Queueing & Sync
Given the device is offline during visit start/stop, drift detections, and ETA updates When EVV stamps, drift events, and ETA history are generated Then each item is written to a durable FIFO queue with monotonic timestamps and unique IDs And the queue survives app restarts and device reboots And upon connectivity restoration, all queued items sync within 10 seconds in original order And retries use exponential backoff up to 5 attempts with idempotent semantics (no duplicates server-side) And any unsynced items surface a "Sync Failed" badge with a manual retry action
Offline EVV Capture & Tamper-Evident Queue
"As a compliance officer, I want EVV records captured and secured offline so that off-grid visits remain audit-ready and legally compliant."
Description

Capture EVV artifacts offline, including visit start/stop timestamps (using a monotonic clock), GPS coordinates, geofence enter/exit, client signature, photos, voice notes, and optional IoT sensor readings. Store events in an append-only, encrypted queue with hash chaining and device-bound keys to ensure tamper evidence. Prevent edits to signed EVV entries; allow addenda with linkage. On reconnect, auto-sync in order with idempotency keys and duplicate detection. Validate minimum data required for state EVV compliance and flag deficiencies for user remediation. Handle partial sync failures with per-item retries and clear status.

Acceptance Criteria
Offline EVV Start/Stop Capture Without Connectivity
Given the device has no internet connectivity, when a caregiver taps Start Visit inside the assigned geofence, then the visit-start EVV event is recorded with a monotonic timestamp Tstart and GPS coordinates with accuracy ≤ 50 meters or a "location_unavailable" flag if a fix is not possible. Given an active visit, when the caregiver taps Stop Visit, then the visit-stop EVV event is recorded with a monotonic timestamp Tstop such that Tstop > Tstart and duration = Tstop - Tstart is stored. Given cached geofences, when the caregiver crosses the boundary, then geofence enter/exit events are recorded offline with monotonic timestamps. Then no network calls are attempted; all events are persisted to the local queue immediately with status = "Queued".
Tamper-Evident Append-Only Queue with Hash Chaining
Given a new EVV event is created, when it is enqueued, then it is encrypted at rest with a device-bound key and includes a content hash and previous-item hash to form an unbroken chain. Given the queue contains N items, when any previously enqueued item is modified or deleted via storage tampering, then chain verification fails, the system marks the queue as "Integrity Error", blocks sync of affected items, and surfaces an alert and audit log entry. Given the app restarts or the device reboots, when the queue is reloaded, then chain verification succeeds and head/tail indices are unchanged. Given the encrypted queue file is copied to another device, when decryption is attempted, then it fails due to the key being bound to the original device.
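The hash-chaining scheme above can be sketched as follows: each queued item records the previous item's hash, so modifying or deleting any earlier item breaks verification of everything after it. Encryption and device-bound keys are omitted here; this shows only the tamper-evidence chain:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first queue item


def _item_hash(prev_hash: str, payload: dict) -> str:
    body = json.dumps(payload, sort_keys=True)  # canonical serialization
    return hashlib.sha256((prev_hash + body).encode("utf-8")).hexdigest()


def append(chain: list, payload: dict) -> None:
    """Append-only enqueue: each item commits to its predecessor's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"payload": payload, "prev": prev,
                  "hash": _item_hash(prev, payload)})


def verify(chain: list) -> bool:
    """Walk the chain; any edited, removed, or reordered item breaks the links."""
    prev = GENESIS
    for item in chain:
        if item["prev"] != prev or item["hash"] != _item_hash(prev, item["payload"]):
            return False
        prev = item["hash"]
    return True
```

On integrity failure the criteria above mark the queue "Integrity Error", block sync of affected items, and raise an audit alert.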
Signed EVV Entry Immutability with Addenda
Given a visit has required EVV artifacts and the client signature is captured, when the caregiver marks the visit as Signed, then the EVV entry becomes read-only in UI and API, and any edit attempt returns an error and is audit-logged. Given a signed EVV entry, when a user adds an addendum, then the addendum requires reason and author, is timestamped, cryptographically linked to the parent entry, and appears in reports and audit logs without altering the original entry. Then signed fields (timestamps, GPS, signature image, EVV stamps) remain unchanged across sessions and exports.
Ordered Auto-Sync with Idempotency and Duplicate Detection
Given the device reconnects to the network, when the sync job starts, then queued items are posted in original FIFO order and each request includes a unique idempotency key per item. Given an item was already accepted by the server, when the same idempotency key is retried, then the server responds without creating a duplicate and the client marks the item as Synced exactly once. Given a transient failure (e.g., HTTP 5xx or timeout), when retrying, then the client uses per-item exponential backoff with jitter up to 5 minutes and updates per-item statuses (Queued, Syncing, Synced, Failed) in the UI. Given connectivity drops mid-sync, when connectivity is restored, then sync resumes from the last unconfirmed queue index without skipping or reordering items.
EVV Compliance Validation and Deficiency Flags
Given a visit is being completed offline, when validation runs, then the system checks for minimum state-specific EVV fields (caregiver ID, client ID, service code, start/stop monotonic timestamps, location evidence within geofence tolerance or documented exception, and signature if required) using the locally cached rule set and version. When any required field is missing or invalid, then the system displays a deficiency checklist with field-level errors and remediation hints, prevents submission if mandatory fields are missing, and allows capturing the missing artifacts offline. When validation passes, then the visit is marked "Compliance-Ready" and prioritized for sync. When rules are updated after reconnect, then previously "Compliance-Ready" items are revalidated and any new deficiencies are flagged before final submission.
Queue Durability Across Reboots and Low Storage
Given the device is force-restarted or the app is force-closed during an offline visit, when the app is reopened, then all queued items persist intact with original order and verified integrity, and the active visit context is restored. Given free storage drops below 50 MB, when new media artifacts (photos, audio) are captured, then the app warns the user, enforces a 10 MB reserve for core EVV events, and if the reserve would be exceeded, it blocks new media captures while still allowing core EVV start/stop events to be recorded. Given a write interruption occurs during queue persistence, when the app restarts, then no partial or corrupt item is accepted as valid; the item is either retried from scratch or marked Failed with a recovery prompt.
Offline Capture of Signatures, Media, and IoT Sensor Readings
Given no connectivity, when the caregiver captures a client signature, then the signature image and signer metadata (name, time, device ID) are stored locally, linked to the active visit, and enqueued with the EVV record. Given photos or voice notes are captured offline, when saved, then files are written to encrypted app storage, checksummed, linked to the visit, and offline thumbnail/transcript jobs are queued if available. Given a paired IoT sensor is in range, when readings are received during the visit, then readings are timestamped with the monotonic clock, cached, associated with the visit, and enqueued; if the sensor disconnects, the system logs the gap with reason. Then all artifacts are viewable in the visit timeline offline and appear unchanged in synced reports after upload.
Action Queue with Idempotency & Conflict Resolution
"As a caregiver, I want my updates to save offline and sync correctly so that nothing is lost or duplicated when I reconnect."
Description

Queue all user actions performed offline (e.g., note edits, task completion, medication logs, attachments) with deterministic IDs and causal ordering. Apply idempotency tokens so replays do not duplicate records. On sync, reconcile against server state using field-level merge strategies, last-write-wins defaults, and user prompts for critical conflicts (e.g., medication administration). Provide per-action states (queued, syncing, retried, failed) and user-visible error recovery. Enforce queue size limits and surface storage usage with options to purge non-critical cached assets.

Acceptance Criteria
Offline Task Completion Queuing
Given the device is offline and a caregiver completes a visit task, When they tap Complete, Then the action is persisted to the local queue within 200 ms and the UI displays state "Queued". Given the app is force-closed or the device reboots, When the app restarts, Then the queued action is present with the same action_id and metadata. Given connectivity is restored, When background sync begins, Then the action transitions from "Queued" to "Syncing" within 3 seconds and on a successful server response is removed from the queue and the task shows as updated.
Deterministic ID Generation & Idempotent Replays
Given an offline action is created, When the action_id is generated from deterministic inputs, Then repeating the same operation produces the same action_id. Given the same action is enqueued twice, When the queue compares action_ids, Then only one entry remains in the queue. Given the client retries the same request after a timeout, When the server receives the same idempotency token, Then exactly one server-side record exists and the client shows no duplicate.
Causal Ordering Preservation
Given a user creates Note A, then edits it to A', then attaches Photo P while offline, When sync occurs, Then the server shows Note A' with Photo P and no intermediate out-of-order state. Given acknowledgements are received out of order, When dependent actions have unmet predecessors, Then the client defers applying them until predecessors succeed. Given actions target different records, When syncing, Then ordering is preserved per record but independent across records.
Field-Level Merge with LWW Default
Given local changes affect fields F1 and server changes affect fields F2 with no overlap, When syncing, Then the merged record contains all changes from F1 ∪ F2. Given the same field is changed locally and on the server, When timestamps are compared, Then the value with the later timestamp wins and the losing value is logged in the audit trail. Given timestamps are equal, When tie-breaking is needed, Then the action with the lexicographically greater action_id wins deterministically. Given a merge occurs, When the user opens conflict details, Then field-level winners and losers with timestamps and actors are visible.
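The field-level merge above can be sketched as a pure function: non-overlapping fields are unioned, a field changed on both sides goes to the later timestamp, and ties break deterministically on the lexicographically greater action_id. Each side maps field name to a record of value, timestamp, and the action_id that wrote it (this shape is illustrative):

```python
def merge_fields(local: dict, server: dict) -> dict:
    """LWW field-level merge with deterministic action_id tie-break.

    Each side: field -> {"value", "ts", "action_id"}. Because the tie-break
    is deterministic, every device converges on the same result.
    """
    merged = dict(server)
    for field, mine in local.items():
        theirs = merged.get(field)
        if theirs is None:
            merged[field] = mine  # only changed locally: take it
        elif (mine["ts"], mine["action_id"]) > (theirs["ts"], theirs["action_id"]):
            merged[field] = mine  # later timestamp wins; ties by action_id
    return merged
```

The losing value would additionally be written to the audit trail, per the criteria above.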
Critical Conflict Prompt for Medication Administration
Given a medication administration record has conflicting local and server edits to dose, time, or status, When sync runs, Then auto-merge is blocked and a modal prompts the user to resolve the conflict. Given the conflict modal is shown, When the user reviews both versions, Then the modal displays both values, authors, and timestamps and requires an explicit selection or manual edit with a reason before proceeding. Given a critical conflict remains unresolved, When generating audit reports, Then the record is flagged "Needs Review" and excluded until resolved.
Per-Action States & Error Recovery
Given an action is queued, When sync starts, Then its state transitions Queued -> Syncing and is visible in the activity list. Given a transient error (HTTP 429/5xx/timeout) occurs, When retry policy applies, Then the state becomes "Retried" with exponential backoff (2s, 4s, 8s, up to 5 min) for up to 5 attempts. Given the maximum retry attempts are exceeded or a validation error occurs, When no further automatic retries are allowed, Then the state becomes "Failed" with an actionable error message and options "Retry Now" and "Discard" where permitted. Given the user taps "Retry Now", When the network is available, Then the state changes to "Syncing" within 1 second and either succeeds or returns to "Retried" per policy.
Queue Limits & Storage Management
Given the action queue has a configured limit of 5,000 actions or 250 MB, When usage reaches 80% of either threshold, Then the app displays a storage warning and shows a breakdown by category. Given the user taps "Purge Non-Critical Assets", When purging completes, Then cached maps, geofences, and unlinked attachments are removed, storage usage decreases, and no queued actions or EVV stamps are deleted. Given the queue is at capacity, When a new non-critical action is attempted, Then the app blocks the enqueue with a clear message and offers a purge flow. Given the queue is at capacity, When a critical action (EVV stamp, medication administration) is attempted, Then the app frees non-critical cache first and enqueues the critical action; if still blocked, it displays a high-priority alert to free space immediately.
Connectivity Watchdog & Background Sync
"As an operations manager, I want devices to sync automatically when they regain connectivity so that data stays current without manual steps."
Description

Continuously detect connectivity changes and online readiness (e.g., captive portal, DNS reachability, token freshness). When online, trigger prioritized background sync: EVV first, then queued actions, then cache refreshes. Respect OS background execution limits (iOS BGTaskScheduler, Android WorkManager), battery saver modes, and user data preferences (Wi‑Fi-only toggle, roaming avoidance). Implement exponential backoff with jitter, resumable uploads, and per-item failure isolation. Display last sync time and upcoming scheduled syncs, and only notify the user when intervention is required.

Acceptance Criteria
Online Readiness: Connectivity, Captive Portal, DNS, and Token Freshness
- Given the device network state changes, When the watchdog evaluates readiness, Then online=true only if all are true: TCP reachability to the health-check endpoint succeeds, DNS for the API host resolves within 1500 ms, the captive portal probe returns 204/200 without redirect, and the auth token TTL is >= 120 seconds or a silent refresh completes within 3 seconds.
- Given a captive portal is detected, When readiness is evaluated, Then online=false and no sync requests are issued.
- Given token TTL < 120 seconds, When readiness is evaluated, Then a token refresh is attempted before any sync; if refresh fails, online=false and sync is not started.
- Given DNS resolution fails but IP connectivity exists, When readiness is evaluated, Then online=false and a retry is scheduled using backoff.
Prioritized Background Sync Order: EVV > Queued Actions > Cache Refresh
- Given online=true and pending EVV, queued actions, and cache refresh tasks, When background sync starts, Then EVV items are transmitted to completion (success or max retries) before any non‑EVV queued actions are attempted.
- Given at least 1 EVV item and 10 non‑EVV actions pending, When sync runs, Then no non‑EVV network call is made until all EVV items have either succeeded or exhausted retry policy.
- Given no EVV items remain and queued actions exist, When sync runs, Then queued actions are processed before any cache/map/geofence refresh calls.
- Given online=true, When EVV items are present, Then the first EVV upload attempt begins within 5 seconds of readiness confirmation.
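The strict ordering (EVV, then queued actions, then cache refreshes) amounts to a stable priority sort over the pending work, so FIFO order is preserved within each category. A minimal sketch, assuming queue items are dicts with an illustrative `category` field:

```python
# Lower number = transmitted first; category names are illustrative.
PRIORITY = {"evv": 0, "action": 1, "cache_refresh": 2}

def sync_order(items):
    """Return items in transmit order: all EVV before any queued action,
    all actions before any cache refresh; FIFO within a category (stable sort)."""
    return sorted(items, key=lambda item: PRIORITY[item["category"]])
```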
Respect OS Background Limits and User Data Preferences
- Given iOS with the app in background, When sync is scheduled, Then BGTaskScheduler is used with distinct identifiers for EVV and general sync and tasks only run within OS-provided windows.
- Given Android with the app in background, When sync is scheduled, Then WorkManager uses constraints reflecting user settings: Wi‑Fi‑only=true => NetworkType.UNMETERED; roaming avoidance=true => disallow roaming; battery saver=on => no expedited jobs and defer until maintenance window.
- Given Wi‑Fi‑only=true and the device is on cellular, When readiness is evaluated, Then online=false for sync purposes and no network requests are issued.
- Given roaming avoidance=true and the device is roaming, When readiness is evaluated, Then no sync requests are issued.
- Given battery saver=on, When background sync is due, Then network activity is deferred until the OS allows it; if the OS grants a window, only EVV tasks may run within that window.
Exponential Backoff with Jitter and Resumable Uploads
- Given a transient sync failure (HTTP 5xx, timeout, DNS error), When retries are scheduled, Then delays follow exponential backoff with full jitter: 2±20%, 4±20%, 8±20%, 16±20%, 32±20%, capped at 300±20% seconds.
- Given a retry later succeeds, When computing the next schedule, Then the backoff resets to the base delay for future failures.
- Given an upload larger than 1 MB, When a failure occurs after the server acknowledges N bytes, Then the client resumes from byte N without re-sending acknowledged bytes and the total re-sent data is ≤ 256 KB.
- Given connectivity drops mid-upload, When connectivity returns within 10 minutes, Then the upload resumes without creating a new server object and the final checksum matches the original payload.
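The backoff schedule above doubles a 2-second base delay, caps it at 300 seconds, and applies ±20% uniform jitter to each delay so that many devices regaining connectivity at once do not retry in lockstep. A minimal sketch of that computation (the constants come from the criteria; the function name is illustrative):

```python
import random

BASE_S, CAP_S, JITTER = 2, 300, 0.20

def retry_delay(attempt: int, rng=random.random) -> float:
    """Delay in seconds before retry `attempt` (1-based):
    exponential growth, capped, then ±20% uniform jitter."""
    nominal = min(BASE_S * 2 ** (attempt - 1), CAP_S)  # 2, 4, 8, ..., capped at 300
    return nominal * (1 + JITTER * (2 * rng() - 1))    # uniform in [0.8x, 1.2x]
```

Per the second criterion, the caller resets `attempt` to 1 after any success.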
Per-Item Failure Isolation and Partial Success
- Given a batch of 10 items where 2 return HTTP 400 and 8 are valid, When sync runs, Then the 8 succeed, the 2 are marked "requires intervention" with error details, and only the 2 remain queued.
- Given 1 item fails with HTTP 503 and others succeed, When retries are scheduled, Then only the failed item is retried using its own backoff without blocking other items or categories.
- Given any item reaches the maximum retry attempts (e.g., 6), When the limit is reached, Then its status becomes "stalled" and it is excluded from further automatic retries until user intervention.
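Failure isolation means each item carries its own state machine: 4xx is permanent ("requires intervention"), 5xx/timeouts are transient and retried independently, and exhausting retries stalls only that item. A minimal sketch under those assumptions (field names and the `send` callback are illustrative):

```python
MAX_RETRIES = 6

def process_batch(items, send):
    """Process each item independently; one bad item never blocks the rest.
    `send` returns an HTTP-style status code for a single item."""
    for item in items:
        status = send(item)
        if 200 <= status < 300:
            item["state"] = "done"
        elif 400 <= status < 500:
            item["state"] = "requires_intervention"  # permanent: needs a human
        else:                                        # transient: retry this item alone
            item["retries"] = item.get("retries", 0) + 1
            item["state"] = "stalled" if item["retries"] >= MAX_RETRIES else "queued"
    return items
```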
User-Facing Sync Status and Intervention-Only Notifications
- Given a background sync completes, When the user opens the Sync screen, Then Last sync displays the local timestamp accurate to the minute and updates within 2 seconds of completion.
- Given the next background task is registered, When viewed on the Sync screen, Then Next sync displays an estimated time consistent with OS scheduling and current backoff state.
- Given background sync completes successfully, Then no system notification or in-app banner is shown.
- Given a sync item is stalled (max retries reached) or settings block sync (Wi‑Fi-only on cellular, captive portal detected), When this state is entered, Then exactly one actionable notification is shown within 10 seconds indicating the cause and resolution; no duplicate notifications for the same cause are shown within 24 hours.
EVV Upload Timeliness and Idempotency After Connectivity Restored
- Given at least one EVV stamp was queued while offline, When online=true is confirmed, Then the first EVV upload starts within 5 seconds and all queued EVV stamps complete within 60 seconds per 100 items on a 100 kbps link, subject to OS background allowances.
- Given duplicate EVV stamps exist for the same visit and timestamp, When sync runs, Then duplicates are deduplicated client-side and not transmitted.
- Given EVV uploads complete, When viewed in audit logs, Then each EVV shows server-received and device-captured timestamps and an idempotency key matching the client record.
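Client-side dedup and idempotency go together: if the key is derived deterministically from the visit and device-captured timestamp, the server can also discard any duplicate that slips through a retry. A minimal sketch, assuming each stamp is a dict with illustrative `visit_id` and `captured_at` fields:

```python
import hashlib

def dedupe_evv(stamps):
    """Drop duplicate EVV stamps (same visit + device-captured timestamp) before
    transmit, attaching a deterministic idempotency key to each survivor."""
    seen, out = set(), []
    for stamp in stamps:
        key = (stamp["visit_id"], stamp["captured_at"])
        if key in seen:
            continue  # duplicate: never transmitted
        seen.add(key)
        # Same inputs always yield the same key, so server-side dedup also works.
        stamp["idempotency_key"] = hashlib.sha256(
            f'{stamp["visit_id"]}|{stamp["captured_at"]}'.encode()).hexdigest()
        out.append(stamp)
    return out
```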
Audit Trail and Integrity Verification
"As a compliance officer, I want verifiable proof of offline events so that audits can confirm integrity even for rural or basement visits."
Description

Create a cryptographic audit trail for offline activity by signing events with a device-bound key and maintaining a verifiable hash chain. On server receipt, verify chain continuity, timestamps within allowable skew, and signature authenticity; flag gaps or anomalies. Persist verification results in an immutable audit log. Provide a one-click, audit-ready report that clearly denotes offline periods, drift alerts, EVV provenance, and verification status. Ensure HIPAA-compliant storage, access controls, and export options with PHI redaction as configured.

Acceptance Criteria
Device-Bound Key Generation and Attestation
Given a new device registers with CarePulse When Offline Sentinel is initialized Then a non-exportable private key is generated in a hardware-backed keystore where available, else the OS secure keystore And the corresponding public key and key metadata (key ID, algorithm, attestation evidence if available) are registered to the server tied to the device ID and user And the server records successful attestation and binds the device key to the organization and user And any subsequent registration with the same device ID but a different key is rejected unless a key-rotation workflow with MFA approval is completed and logged
Offline Event Signing and Hash Chaining
Given the device is offline When the caregiver records an EVV event, note, or drift alert Then the event payload includes a monotonically increasing local sequence number, client timestamp, and prev_hash of the last committed event in the local chain And the event is signed with the device-bound private key prior to storage And the local chain persists across app restarts and OS reboots without data loss And events remain queued until the server acknowledges receipt; no event is dropped due to network unavailability And if local storage exceeds 90% of the configured queue capacity, the user is warned while recording continues without preventing EVV capture
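The chaining rule above (monotonic sequence number, `prev_hash` link, signature before storage) can be sketched as follows. This is illustrative only: it uses an HMAC with a hard-coded key as a stand-in for the real device-bound, hardware-backed asymmetric signature, and the field names are assumptions rather than CarePulse's schema:

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"stand-in-for-hardware-backed-key"  # illustrative; never hard-code keys

def append_event(chain, payload, key=DEVICE_KEY):
    """Append an offline event to the local chain: monotonically increasing seq,
    prev_hash link to the last committed event, and a signature computed
    before storage (HMAC here as a stand-in for the device-bound key)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis sentinel
    seq = chain[-1]["seq"] + 1 if chain else 1
    body = {"seq": seq, "prev_hash": prev_hash, "payload": payload}
    canonical = json.dumps(body, sort_keys=True).encode()  # stable byte encoding
    body["hash"] = hashlib.sha256(canonical).hexdigest()
    body["sig"] = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    chain.append(body)
    return body
```

Canonical JSON (sorted keys) matters here: signer and verifier must hash identical bytes.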
Server-Side Verification of Chains
Given connectivity is restored and a chain is uploaded When the server processes the submission Then each event signature is validated against the registered device public key And chain continuity is verified (prev_hash equals the hash of the prior event) with no missing sequence numbers And event timestamps are within a configurable skew (default ±5 minutes) relative to server time and are non-decreasing within a chain And a verification status is assigned per event and per chain (Verified, Anomaly, Rejected) with reason codes available via API and UI
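A partial sketch of the server-side checks, covering continuity, sequence gaps, timestamp ordering, and skew; signature validation against the registered device key is omitted here. Events are assumed to be dicts with `seq`, `prev_hash`, `hash`, and `ts` fields (names illustrative):

```python
from datetime import datetime, timedelta

MAX_SKEW = timedelta(minutes=5)  # configurable default per the criteria

def verify_chain(events, server_now):
    """Return a (status, reason_codes) pair per event:
    ('Verified', []) or ('Anomaly', [codes])."""
    results, prev = [], None
    for ev in events:
        reasons = []
        if prev is not None:
            if ev["seq"] != prev["seq"] + 1:
                reasons.append("missing_sequence")
            if ev["prev_hash"] != prev["hash"]:      # chain continuity
                reasons.append("chain_break")
            if ev["ts"] < prev["ts"]:                # must be non-decreasing
                reasons.append("out_of_order_timestamp")
        if abs(server_now - ev["ts"]) > MAX_SKEW:
            reasons.append("clock_skew")
        results.append(("Verified", []) if not reasons else ("Anomaly", reasons))
        prev = ev
    return results
```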
Anomaly Detection and Flagging
Given uploaded chains contain irregularities When the server detects invalid signatures, duplicate events, missing sequence numbers, out-of-order timestamps, unapproved key changes, or on-site EVV with geofence drift above a configurable threshold Then the affected events and chains are flagged with anomaly type, severity, and details And impacted visits are marked for compliance review in the console with filters and counts And anomalies are included in verification results and downstream reports
Immutable Audit Log Persistence
Given verification completes When the server records verification results Then entries are appended to an immutable, append-only audit log backed by a tamper-evident server-side hash chain or WORM storage controls And each entry includes actor, device ID, key ID, event hash, verification status, reason codes, and server-side timestamps And any correction or reprocessing creates a new entry that references the prior entry; prior entries remain unchanged And audit log retention follows the configured policy and supports export with an integrity proof (log head hash) for the selected period
One-Click Audit-Ready Reporting with PHI Redaction
Given a compliance admin selects a visit or date range in CarePulse When Generate Audit Report is initiated Then a report is produced within 10 seconds for up to 1,000 events that clearly denotes offline periods, EVV provenance (online vs offline), drift alerts, chain IDs/head hashes, and verification status per event And the report supports PDF and CSV exports And PHI redaction is applied per org configuration (e.g., names masked, notes redacted) and is visibly labeled in the output And the report includes a redaction audit section listing which fields were redacted and the active policy version
HIPAA-Compliant Storage and Access Controls
Given audit artifacts are stored and accessed When any user views or exports audit logs or reports Then access is enforced via role-based permissions with least-privilege defaults; only authorized roles can view unredacted PHI And all access and exports are logged with user, timestamp, IP, and purpose-of-use metadata And data at rest is encrypted with managed keys; data in transit uses TLS 1.2+; export links require authentication, expire within 15 minutes, and can be revoked And a quarterly access review report can be generated listing users, roles, and last access to audit data
Offline Status UI & Guidance
"As a caregiver, I want clear indicators and prompts when I’m offline so that I know what works now and what will sync later."
Description

Expose clear UI indicators for connectivity state, cache readiness, queued item counts, and last sync time at both global and per-visit levels. Provide contextual guidance when required assets are missing (e.g., maps outdated) and offer manual actions (prefetch now, retry sync). Gate or adjust features that cannot function offline with transparent messaging about deferred behavior. Ensure accessibility compliance (WCAG AA) and localization for key languages. Log user-visible offline states for support diagnostics.

Acceptance Criteria
Global Connectivity Status Indicator
Given the device transitions between online and offline states When the network status changes Then a global status banner and icon update within 2 seconds to one of: Online, Offline, Syncing, or Sync Pending And the indicator is visible on Home, Visit List, Visit Detail, and Navigation screens And tapping the indicator opens a status sheet showing current state, queued count, and last sync time
Per-Visit Offline Readiness & Prefetch
Given a scheduled visit card is rendered Then it displays an Offline Readiness badge with Route, Map Tiles, Geofence, and Form Template marked Ready or Missing And map tiles are stale if older than 7 days or if coverage radius is < 250 m around the visit location When the user taps Prefetch now Then missing assets are downloaded with a 0–100% progress indicator and the badge updates to Ready upon completion And on failure, an error with reason code and a Retry control is shown When offline and any required asset is Missing Then Start Visit is disabled with helper text: Required offline assets missing—prefetch when online
Queued Actions and EVV Visibility
Given the device is offline When the user performs visit actions (check-in EVV, check-out EVV, note, photo, signature) Then each action is queued locally with timestamp, GPS (if available), and a unique ID And the global header shows a queue badge with total queued count And the visit detail shows the per-visit queued count When connectivity is restored Then auto-sync starts within 5 seconds, items send FIFO, counts update in real time, and successful items are removed If any item fails to sync Then it is marked Failed with a reason and a Retry sync option; after 3 consecutive failures a banner advises support steps And queued data persists across app restarts
Last Sync Time and Manual Sync Control
Given the app has synchronized successfully at least once Then a Last sync timestamp is shown in the status sheet and visit detail, formatted in device locale and timezone If no sync has occurred Then Last sync displays Never When online and the user taps Sync now Then an immediate sync runs for metadata and the queue, and the timestamp updates within 2 seconds after success When offline Then Sync now is disabled with helper text: Unavailable offline
Feature Gating with Transparent Messaging
Given the device is offline and a feature requires connectivity (e.g., live traffic navigation, template download, secure chat) Then the control is disabled and displays an inline message stating the reason and whether the action will be deferred or unavailable If the action is deferable (e.g., notes, photos) Then the UI labels Will sync when online, queues the item, and the queued count increases If the action is not deferable (e.g., new template download) Then the UI offers Prefetch now when online or Try again when online when offline and no queue entry is created And no flow produces a dead-end; users can navigate back in one step
Accessibility and Localization for Offline UI
Given offline/online indicators and controls are displayed Then each has accessible names, roles, and states; state changes are announced via an ARIA live region; focus order is logical; all controls are operable via keyboard/switch And color is not the sole conveyor of state; contrast ratio meets WCAG 2.1 AA (>= 4.5:1); touch targets are >= 44x44 dp And offline-related strings and date/time formats are localized for English (en), Spanish (es), and French (fr), with correct pluralization; no clipped or truncated translations in supported viewports When device language or in-app language changes Then offline UI strings reflect the selected locale by next app launch; untranslated strings fall back to English
Support Diagnostics: Logging of Offline States
Given any global or per-visit offline-related state changes (Online, Offline, Syncing, Sync Pending; cache Ready/Missing; prefetch start/success/failure; queue count change) Then the app records a structured log event with fields: event_type, timestamp (UTC), user_id (hashed), device_id, app_version, visit_id (optional), state, queued_count, last_sync And logs are stored locally encrypted at rest and uploaded within 60 seconds of connectivity restoration or on Sync now And no PHI is included; only hashed identifiers; server-side retention is 30 days And a unique correlation ID is attached to each sync session and included in related events

JIT Elevate

One‑tap, context‑aware permission elevation that grants the least access needed for the task at hand—limited to the specific client, chart section, and time window. Elevations auto‑revoke on timer or task completion and can require lightweight approval for higher‑risk scopes. Caregivers resolve urgent triage issues without waiting on admins, while compliance teams get safer, tighter access by default.

Requirements

Context-Aware Least-Privilege Scoping
"As a caregiver, I want the system to grant just the minimal access for the client and task I’m performing so that I can resolve the issue quickly without overexposing patient data."
Description

Implements a rules-driven scope engine that derives the minimum permissions required from the user’s current context (client, chart section, action, route step, and time). It composes a temporary, constraint-bound token (resource + permitted actions + client_id + section_id + duration cap) that is valid only within that scope. Integrates with existing RBAC/ABAC, respects role baselines, and supports policy-injected constraints such as after-hours limits. Exposes SDK hooks to request or pre-check elevation and fails closed with clear messaging and safe fallbacks to non-elevated flows.

Acceptance Criteria
Token Scope Derivation for Assigned Client Chart Section
Given a caregiver is on a scheduled route step for client_id=X and viewing the Vitals section of that client’s chart When the caregiver taps JIT Elevate to record vitals and the scope engine evaluates (user_id, role, client_id=X, section_id=Vitals, action=create, timestamp) Then the system issues a temporary token whose claims include resource.client_id=X, resource.section_id=Vitals, permitted_actions ⊆ {read, create}, duration ≤ min(15 minutes, remaining route-step time) And the token cannot be used for any other client_id or section_id And any API call using the token outside the permitted_actions or resource scope is rejected with 403 scope_violation And the token is signed, opaque to the client, and contains an absolute expiry timestamp
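The enforcement side of this criterion is a per-request check of the token's resource scope, permitted actions, and absolute expiry. A minimal sketch, assuming the token has already been signature-verified and decoded into a dict (field names follow the criteria; the function name is illustrative):

```python
import time

def authorize(token, request):
    """Reject any call outside the token's resource scope, actions, or lifetime.
    Returns an HTTP-style (status, code) tuple; fails closed on any mismatch."""
    if time.time() >= token["expires_at"]:      # absolute expiry timestamp
        return (401, "expired")
    if (request["client_id"] != token["client_id"]
            or request["section_id"] != token["section_id"]
            or request["action"] not in token["permitted_actions"]):
        return (403, "scope_violation")
    return (200, "ok")
```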
Auto-Revocation on Timer or Task Completion
Given a JIT token with duration=T minutes was issued for client_id=X, section_id=Vitals, action=create When either T minutes elapse or the associated route step is marked completed, whichever occurs first Then the token becomes invalid and subsequent requests with the token return 401 expired or 403 revoked And the mobile SDK receives a revocation event and hides elevated actions within 2 seconds And an audit entry records revoked_at, revocation_reason ∈ {timer_expired, task_completed}
Baseline Role Constraints Enforcement
Given a user with baseline role=Caregiver lacks Medication Edit privileges When the user requests elevation that would include action=update on section_id=Medication for client_id=Y Then no token is issued that includes actions outside the role’s JIT-allowed set And the pre-check returns allowed=false, approval_required=false, denial_reason=exceeds_role_baseline, and a safe_fallback action And the UI displays a clear message and enables the non-elevated flow without exposing restricted data
Policy-Injected After-Hours Constraints
Given an after-hours policy limits write access duration to 5 minutes after 20:00 local time and blocks Diagnosis updates without approval When a user requests elevation at 21:00 for section_id=Vitals, action=create Then pre-check returns allowed=true with required_scope.duration=5 minutes and includes policy_id applied When a user requests elevation at 21:00 for section_id=Diagnosis, action=update Then pre-check returns allowed=false, approval_required=true with reason=after_hours_block and no token is issued until approval And all decisions are logged with policy_id, decision, and timestamps
SDK Pre-Check and Request Hooks
Given the mobile app calls preCheck(context={user_id, client_id, section_id, action, timestamp}) When the scope engine evaluates the context Then preCheck responds in p95 < 200 ms with fields: allowed (bool), approval_required (bool), required_scope {resource, permitted_actions, max_duration}, denial_reason (code), safe_fallback (action or route) When the app calls requestElevation with the same context and allowed=true and approval_required=false Then a token is returned and is usable immediately for in-scope API calls And if approval_required=true, the SDK triggers the approval flow callback and no token is issued until approved
Fail-Closed Behavior with Clear Messaging and Fallback
Given a network error, policy denial, or evaluation exception occurs during JIT elevation When the caregiver taps JIT Elevate Then no elevated privileges are granted (no token issued or accepted) And the UI shows an error message within 1 second: "Cannot elevate access: <reason>" with a safe fallback option that continues using baseline permissions And sensitive data/actions remain inaccessible; all attempted out-of-scope requests return 403 And the incident is logged with error category, reason code, and correlation_id
One-Tap Elevation Prompt (Mobile-First)
"As a caregiver, I want a clear one-tap prompt to request temporary access so that I can continue my workflow without hunting through settings or calling an admin."
Description

Delivers an inline, single-tap prompt when an action is blocked, summarizing the requested scope, risk level, max duration options, and affected client/section. Supports quick justification via short text or a 5‑second voice note, and indicates if approval is required before proceeding. Optimized for small screens with large touch targets, accessibility support (screen readers, haptics), offline string caching, and localization. Integrates seamlessly with existing CarePulse UI patterns and visit workflows.

Acceptance Criteria
Inline Prompt on Blocked Action in Visit Workflow (Mobile)
Given a caregiver is in a client's visit workflow on a mobile device When they attempt an action that is blocked by current permissions Then an inline elevation prompt appears within the current screen (no full-screen takeover or navigation) And the prompt renders within 800 ms on mid-tier devices And the primary action is a single, large tap target labeled appropriately (e.g., "Request Temporary Access") And duplicate taps within 500 ms are ignored (debounced) And if the prompt fails to initialize, an inline error message is shown with a retry option
Scope Summary, Risk, and Duration Options Presented
Given the elevation prompt is displayed Then it summarizes the requested scope including: client full name, chart section, and permissions being elevated And it displays a textual risk level (e.g., Low, Medium, High) with an explanatory tooltip or info link And it shows pre-configured duration options up to the maximum allowed (e.g., 5 min, 15 min, 60 min, Max) And the minimal duration option is selected by default And the user’s selected duration is included in the elevation request payload
Quick Justification via Text or 5-Second Voice Note
Given the elevation prompt is displayed Then a short text input is available and limited to a maximum of 200 characters And a voice note control allows recording a justification up to 5.0 seconds When microphone permission is denied or revoked Then the UI clearly indicates voice capture is unavailable and keeps the text option available When recording starts Then a visible timer/countdown appears and recording auto-stops at 5.0 seconds And submission is allowed when at least one justification method is provided if required by policy, otherwise optional
Approval Requirement Indicator and Flow
Given the system determines whether approval is required for the requested scope Then the prompt visibly indicates one of two states before proceeding:
- No approval required: the primary action label reads "Elevate Now" and tapping it immediately submits the request
- Approval required: a badge or banner states "Approval required" and the primary action label reads "Send Approval Request"
When approval is required and the user submits Then the UI shows a pending state without granting access and indicates next steps When no approval is required and the user submits Then the UI shows a success state and displays the selected time window (e.g., a countdown chip)
Accessibility: Screen Readers, Haptics, Touch Targets
Given a device with screen reader enabled (VoiceOver/TalkBack) Then all prompt elements have meaningful accessibility labels and roles And focus order moves logically from title → scope summary → risk → duration options → justification → primary/secondary actions And the scope summary is announced with client, section, risk, and duration in a single, comprehensible announcement And all tappable controls meet minimum size (≥44 pt on iOS, ≥48 dp on Android) And text and icon contrast ratios meet WCAG AA (≥4.5:1 for normal text) When the prompt appears and when the elevation request is submitted Then a light haptic feedback is triggered
Offline String Caching and Resilient Rendering
Given the device is offline When a blocked action triggers the elevation prompt Then the prompt renders using cached localized strings with no missing-key placeholders And the UI clearly indicates offline status without crashing or blocking basic interactions (e.g., selecting duration, entering justification) And no network call is attempted until the user submits or retries an action requiring connectivity
Localization and RTL Support
Given the device locale is set to a supported language (e.g., en-US, es-ES) Then all prompt strings appear localized, including risk labels and duration units And pluralization and numeric/time formats match the locale conventions Given the device is set to an RTL locale (e.g., ar) Then the layout mirrors appropriately and text alignment follows RTL rules without clipping or overlap And no text truncation occurs on small screens at default font size
Time-Bound and Event-Based Auto-Revocation
"As a compliance officer, I want elevated access to automatically end on a timer or when the task is done so that we minimize risk and stay within least-privilege principles."
Description

Automatically revokes elevated access on countdown expiry or upon task completion signals such as note saved, route step completed, or chart section closed. Enforces hard caps, runs a background watchdog to clean up stale grants, and supports offline-safe TTLs that revoke upon reconnect. Emits revocation events to clients and server, records precise timestamps, and ensures no residual elevated session or cached data remains after revocation.

Acceptance Criteria
Auto-Revocation on Timer Expiry
Given a caregiver has an active JIT elevation with TTL T for a specific client and chart section When the TTL reaches zero Then the server marks the elevation revoked within 1 second of expiry And subsequent API calls requiring the elevated scope return 403 Forbidden And the client receives a revocation event and removes elevated UI/actions within 5 seconds of expiry And an audit record is written with elevation_id, user_id, scope, reason=ttl_expired, and revocation_at in UTC ISO 8601 with millisecond precision
Auto-Revocation on Note Save
Given an elevation is granted solely to complete a visit note for a client And the caregiver is editing the note under elevated permissions When the caregiver taps Save and the note is successfully persisted Then the elevation is revoked within 1 second of the save acknowledgment And any further edit operations that require elevated access return 403 until a new elevation is granted And a revocation event is emitted with reason=task_complete and task_type=note_save, including revocation_at (UTC, ms)
Auto-Revocation on Route Step Completion
Given an elevation is scoped to a specific route step for a client visit When the caregiver marks that route step as Completed and the completion is acknowledged by the server Then the elevation is revoked within 1 second of the completion acknowledgment And repeated or duplicate completion events do not recreate or extend the elevation (idempotent) And any API calls to elevated endpoints for that scope return 403 after revocation And an audit record is written with reason=task_complete and task_type=route_step_completed, including revocation_at (UTC, ms)
Hard Cap Enforcement on Elevation Duration
Given a system-configured hard cap H for any single elevation And an elevation is created with a base TTL less than or equal to H When extensions, pauses, or task progress would otherwise keep the elevation active beyond H Then the elevation is force-revoked exactly at H And any attempts to extend beyond H are rejected with a clear error code (e.g., ELEVATION_HARD_CAP) and 400/409 status And the revocation is logged with reason=hard_cap and revocation_at (UTC, ms) And subsequent privileged calls in that scope return 403
Watchdog Cleanup of Stale Grants
Given a background watchdog runs at a configured interval W And one or more elevations remain active past their TTL due to missed client signals or network issues When the watchdog cycle runs Then all elevations with expiry < now are revoked within that cycle (≤ W) And each revocation emits a revocation event with reason=watchdog and revocation_at (UTC, ms) And metrics capture the count of stale grants cleaned in the cycle And post-revocation, API calls using those elevations return 403
Offline-Safe TTL Revocation on Reconnect
Given a caregiver’s device goes offline while a JIT elevation is active And the elevation’s TTL expires while offline When the device reconnects to the server Then the server immediately treats the elevation as revoked and returns 403 to any elevated-scope requests And the client receives a revocation event on reconnect and removes elevated UI/actions within 3 seconds And any queued privileged writes with timestamps after the TTL expiry are rejected with a clear error and logged with reason=ttl_expired_offline And an audit record reflects the original expiry time as revocation_at (UTC, ms)
Revocation Events, Timestamps, and Zero Residual Access
Given any elevation is revoked (timer expiry, task completion, hard cap, or watchdog) Then a revocation event is published to server event streams and subscribed clients at least once within 1 second, with payload including elevation_id, user_id, scope, reason, revocation_at (UTC, ms), and correlation/request_id And access tokens/session attributes tied to the elevation are invalidated immediately and cannot authorize privileged endpoints And clients clear in-memory and on-disk caches for data obtained solely under the elevated scope within 5 seconds of revocation And background jobs or sync processes attempting elevated operations after revocation abort or downgrade and are logged And spot-check authorization for the user confirms no residual elevated scopes remain
Lightweight Approval & Escalation
"As an on-call supervisor, I want to approve or deny high-risk elevation requests quickly with context so that urgent care is unblocked while maintaining control."
Description

Provides a tiered approval workflow for higher-risk scopes based on policy rules such as scope size, PHI exposure, and after-hours. Routes requests to on-call approvers with push/in-app notifications, supports one-tap approve/deny with SLA timers, and includes escalation chains and auto-expire. Offers emergency override flags with stricter auditing and shorter caps. Binds approvals cryptographically to the exact requested scope and duration to prevent over-broad grants.

Acceptance Criteria
Tiered Policy-Driven Approval Classification
Given policy rules: Tier1 = auto-approve when PHI=Low AND within business hours; Tier2 = single approver when PHI=Medium OR after-hours; Tier3 = supervisor approver when PHI=High OR scope spans multiple chart sections And a request with clientId=A, chartSection=Medications, duration=20 minutes, PHI=High, afterHours=true When the system evaluates the request against policy Then the request is classified as Tier3 and is not auto-approved And an approval request is created targeting approvers with role=Supervisor And the approval request metadata records evaluated factors (PHI=High, afterHours=true, scope=Medications) and tier=3
On-Call Routing, One-Tap Approval, and SLA Escalation
Given an on-call schedule: Primary=P, Secondary=S for role=Supervisor And SLA response time=3 minutes, max escalation levels=2, request expiry=10 minutes When a Tier3 approval request is created at 22:05 local time Then P receives push and in-app notifications within 5 seconds And the approval card presents one-tap Approve and Deny actions And if P takes no action within 3 minutes, the request escalates to S and S is notified within 5 seconds And if no approver acts by 10 minutes, the request auto-expires and the requester is notified immediately
Cryptographic Binding of Approval to Scope and Duration
Given a pending Tier2 request with scope: clientId=A, chartSection=Medications, duration=15 minutes starting at approval time When the approver taps Approve Then the system issues a signed elevation token containing requesterId, approverId, clientId, chartSection, startTime, endTime, requestId, and riskTier And the token signature validates against the service public key And attempts to use the token for any clientId≠A or chartSection≠Medications or after endTime return HTTP 403 and create an audit entry And any token with modified fields or invalid signature returns HTTP 401 and creates an audit entry
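Binding the approval to the exact scope means the signature covers every claim, so mutating any field (client, section, duration) invalidates the token. A minimal sketch using an HMAC with a hard-coded key as a stand-in for the real service key pair; the claim names follow the criteria, everything else is an assumption:

```python
import hashlib
import hmac
import json

SERVICE_KEY = b"illustrative-service-signing-key"  # stand-in for an asymmetric key pair

def issue_elevation_token(claims, key=SERVICE_KEY):
    """Sign the exact approved scope so the grant cannot be widened afterward."""
    canonical = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_elevation_token(token, key=SERVICE_KEY):
    """Recompute the signature over the presented claims; any tampering fails."""
    canonical = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])  # constant-time comparison
```

On top of this, the resource checks (clientId, chartSection, endTime) run only after the signature verifies, mapping to the 401/403 outcomes in the criterion.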
Deny Flow and Audit Completeness
Given an approver opens a Tier2 approval request When they tap Deny and optionally enter a reason Then the requester is notified with the deny reason within 5 seconds And the request status becomes Denied and cannot be used to request access without a new submission And an immutable audit record is written capturing requestId, requesterId, approverId, decision=Denied, reason, timestamps, tier, and policyVersion And the audit record is available in compliance reports within 1 minute
Emergency Override With Short Caps and Stricter Auditing
Given a caregiver flags a request as Emergency Override for clientId=A, chartSection=Allergies And policy defines emergencyCap=15 minutes and postIncidentReviewRequired=true When the caregiver submits the emergency request after-hours Then the system grants immediate provisional access within 2 seconds limited to clientId=A, chartSection=Allergies, and 15 minutes max And high-priority alerts are sent to on-call compliance and supervisor And additional audit fields are captured: overrideJustification, location (if available), deviceId, mediaChecksum, and override=true And a supervisor attestation is required within 24 hours; if not completed, the case is auto-flagged Overdue in the compliance dashboard
Auto-Expire and Cleanup of Stale Requests
Given a Tier2 approval request is pending with expiry=10 minutes When no approver acts within 10 minutes Then the request transitions to status=Expired And all approval action links/buttons become inactive And the requester is notified to re-submit if access is still needed And expired requests are removed from active approver queues and retained in audit logs
Fault Tolerance and Notification Retry
Given a Tier3 request is created and the push notification gateway is unavailable When the system attempts to notify the on-call approver Then it retries with exponential backoff for up to 2 minutes and falls back to SMS or voice per policy And upon first successful delivery, the SLA timer starts from the delivery timestamp And all notification attempts and outcomes are logged with channel, timestamps, and result
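The retry-then-fallback behavior might look like the following sketch. It records the backoff schedule instead of sleeping so the attempt log is easy to inspect; the `send` callable and channel names are placeholders:

```python
def notify(send, primary="push", fallbacks=("sms", "voice"),
           retries=3, base_delay_s=1):
    """Retry the primary channel with exponential backoff, then fall back
    to the next channel per policy. Returns the delivering channel (or
    None) plus a log of every attempt for auditing."""
    attempts = []
    for channel in (primary,) + tuple(fallbacks):
        delay = base_delay_s
        for _ in range(retries):
            if send(channel):
                attempts.append((channel, "delivered"))
                return channel, attempts
            attempts.append((channel, f"retry in {delay}s"))
            delay *= 2  # exponential backoff within the channel
    return None, attempts
```

Per the criterion above, the SLA timer would start from the first successful delivery timestamp, not from request creation.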
Audit Logging & Compliance Reports
"As a compliance auditor, I want a complete, exportable history of every elevation event so that I can demonstrate adherence to policy during audits."
Description

Captures immutable, tamper-evident logs for every elevation request, grant, denial, approval, and revocation, including user, role, device, location (if permitted), scope details, timestamps, and rationale. Links events to visits, notes, and clients for operational context. Generates one-click, audit-ready reports filterable by timeframe, user, client, risk level, and outcome, with CSV/PDF export and API access. Applies retention and redaction policies aligned with HIPAA and agency-specific requirements.

Acceptance Criteria
Tamper‑Evident Logging for Elevation Lifecycle
Given a caregiver submits a JIT elevation request for a specific client, chart section, and time window When the request is created, approved or denied, granted, and auto- or manually revoked Then an append-only audit event is recorded for each lifecycle step with fields: event_id, correlation_id, event_type, user_id, user_role, device_id, app_version, ip_address, location (only if device permission=true and agency policy allows), client_id, visit_id (if applicable), note_id (if applicable), scope (client, chart_section, time_window), rationale, risk_level, created_at_utc, created_at_local, hash, prev_hash, signature And the hash chain validates for all events in the correlation sequence And attempts to modify or delete any audit event are blocked (WORM) and produce a tamper_alert event within 5 seconds And all events are queryable in read APIs within 2 seconds of commit
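The hash-chain idea behind the `hash`/`prev_hash` fields can be sketched as follows (SHA-256 over a canonical JSON encoding; real events would also carry a signature and far more fields):

```python
import hashlib, json

GENESIS = "0" * 64

def append_event(chain, event):
    """Append an audit event whose hash covers the previous event's hash,
    so any later modification breaks every subsequent link."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = dict(event, prev_hash=prev_hash)
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(dict(body, hash=digest))

def verify_chain(chain):
    """Recompute every link; return the index of the first broken event,
    or -1 if the chain is intact."""
    prev = GENESIS
    for i, ev in enumerate(chain):
        body = {k: v for k, v in ev.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if ev["prev_hash"] != prev or digest != ev["hash"]:
            return i
        prev = ev["hash"]
    return -1
```

A tamper_alert, as required above, would fire whenever `verify_chain` returns a non-negative index.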
Operational Context Linking (Visits, Notes, Clients)
Given an elevation occurs while a visit or note is active When the audit event is viewed in the report Then visit_id and/or note_id are present and link to the associated visit/note in CarePulse And if no active visit exists, the event links to the client profile And all lifecycle events share a correlation_id that is displayed and filterable
One‑Click Audit Report with Advanced Filters
Given a compliance user selects filters (timeframe ≤ 30 days, one or more users, client(s), risk level(s), outcome(s)) When Generate Report is clicked Then the report renders within 5 seconds for up to 10,000 matching events And results apply AND logic across selected filter facets And the applied filters, timezone, total count, and last generated time are displayed And pagination is supported with deterministic sort by server_timestamp desc and stable page tokens And an empty result shows "No events found" without errors
CSV/PDF Export and Audit API Parity
Given a generated report with up to 50,000 events When the user exports CSV or PDF Then files include all displayed columns plus checksum and are delivered within 15 seconds And CSV conforms to RFC 4180 (UTF-8 with BOM); PDF is tagged and accessible (WCAG 2.1 AA reading order/headings) And download URLs are single-use, signed, and expire in 15 minutes And the Audit Events API provides equivalent filters, cursor pagination, and fields; requires OAuth2 scope audit.read; supports ETag caching; returns 429 when rate limits are exceeded; and includes checksum for bulk downloads
Retention, Redaction, and Legal Holds (HIPAA-aligned)
Given agency retention is configured (default 6 years; cannot be set below the regulatory minimum) When events reach retention expiry without an active legal hold Then events are irreversibly purged within 24 hours and a purge_summary audit event is recorded And legal holds pause deletion until released, with hold reason, owner, and timestamps recorded And PHI is redacted per role and policy in exports/API (e.g., location, device identifiers, free-text rationale) with [REDACTED] markers and redaction_reason field And if device location permission is off or policy forbids, location is omitted from stored events and exports And all retention/redaction configuration changes are versioned and auditable
Access Control and Tenant Isolation for Audit Data
Given a user without audit permissions attempts to view or export audit data When the request is made Then access is denied with HTTP 403 and an access_denied audit event is logged And only Agency Admin and Compliance roles with audit.report.view scope can access audit data And users can access only their own agency’s data (tenant isolation enforced at query level) And exporting or API token use requires MFA when the agency policy is enabled
Time Consistency and Offline Capture
Given a device is offline or has clock skew When elevation lifecycle events occur Then the client queues events locally with client_timestamp and securely transmits them on reconnect And the server assigns server_timestamp on receipt; reports sort by server_timestamp while preserving client_timestamp and drift And queued events sync within 60 seconds of connectivity restoration And all timestamps include timezone offset and UTC equivalents
Admin Policy Console & Templates
"As an admin, I want to configure and simulate elevation policies by role and scenario so that access is consistent, safe, and tuned to our agency’s operations."
Description

Delivers an admin interface to define, manage, and simulate elevation policies: allowed scopes per role, max durations, approval thresholds, after-hours rules, geofencing, and device trust requirements. Provides prebuilt templates for common roles and scenarios (e.g., triage, note correction), a dry‑run simulator to test policy outcomes, change history with audit trails, and feature flags for gradual rollout. Exposes APIs for policy import/export and CI-style validation.

Acceptance Criteria
Create and publish a policy with full scope constraints
Given an admin with Policy Admin permissions When they create a new policy with all required fields: unique name, target roles, allowed scopes (client and chart sections), max elevation duration (1–120 minutes), approval threshold (0–2 approvers), after-hours rule, geofence radius (50–5000 meters), and device trust requirement Then the Save action is enabled and inline validation messages appear for any missing or out-of-range values Given the policy passes validation When the admin clicks Publish Then the policy status becomes Active and is applied by the decision engine within 60 seconds of publish Given a duplicate policy name within the same organization When the admin attempts to save Then the system blocks the save and displays “Policy name must be unique”
Versioning, draft editing, and safe rollout of policy changes
Given a published policy v1 When an admin selects Edit Then a new Draft version v2 is created and v1 remains live until v2 is published Given a Draft version exists When the admin publishes v2 Then v2 becomes Active and v1 is retained as read-only Archived Given an Active policy When the admin attempts hard delete Then the system prevents deletion and offers Deactivate instead Given any publish, deactivate, or revert action When it completes Then an audit entry is recorded with actor, timestamp (UTC ISO 8601), action, reason, and a field-level diff of changes
Use and customization of prebuilt policy templates
Given the console is opened When the admin views templates Then the following templates are available: Urgent Triage, Note Correction, Supervisor Assist, Onboarding QA Given a template is selected When the admin clicks Clone Then a Draft policy is created with the template’s defaults and a required new name Given a template When the admin attempts to edit it directly Then the system prevents changes and instructs the admin to Clone Given a cloned Draft from a template When the admin modifies parameters and publishes Then the policy appears under Active with the admin’s customizations
Dry-run simulator produces deterministic decision and rationale
Given input context {role, client, chart section, requested duration, local timestamp, device trust state, GPS location} When the admin runs a simulation Then the result is one of {Allow, Allow with Approval, Deny} and includes: matched policy id and version, evaluation steps, effective max duration, approver roles (if any), and reasons Given multiple policies could match When simulated Then tie-breaking follows specificity > priority > latest publish time, and identical inputs always return the same result Given simulation requests at P95 load When executed Then 95% of responses complete within 300 ms and have no side effects on live policies or permissions Given an invalid or incomplete input When simulated Then the simulator returns a 400 validation error describing the missing or invalid fields
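The deterministic tie-break (specificity > priority > latest publish time) and the side-effect-free decision can be sketched like this; the policy fields are simplified placeholders:

```python
def simulate(policies, ctx):
    """Dry-run a request context against candidate policies.
    Pure function: no live policies or permissions are touched."""
    matches = [p for p in policies if ctx["role"] in p["roles"]]
    if not matches:
        return {"decision": "Deny", "reason": "no matching policy"}
    # Tie-break: specificity first, then priority, then latest publish time.
    winner = max(matches, key=lambda p: (p["specificity"],
                                         p["priority"],
                                         p["published_at"]))
    if ctx["requested_minutes"] > winner["max_minutes"]:
        decision = "Deny"
    elif winner["approvers_required"] == 0:
        decision = "Allow"
    else:
        decision = "Allow with Approval"
    return {"decision": decision, "policy_id": winner["id"],
            "version": winner["version"]}
```

Because `max` with a fixed key tuple is deterministic, identical inputs always return the same result, as the criterion requires.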
Change history and exportable audit trails
Given any create, update, publish, deactivate, revert, import, or export action When performed Then an immutable audit record is stored with actor, actor id, timestamp (UTC ISO 8601), entity id, action, before/after snapshot hash, and correlation id Given the audit log view When filtered by date range, actor, action, or policy id Then results reflect the filters and can be exported as CSV and JSON Given an export is requested When generated Then file downloads succeed only for authorized admins and include a cryptographic integrity checksum Given default retention settings When queried Then audit logs are available for at least 12 months
Feature flags for gradual rollout and kill switch
Given a feature flag targeting the console and decision engine When the admin configures audience by organization, group, role, and percentage (0–100%) Then only targeted users receive the new behavior Given a progressive rollout plan When the percentage is increased Then changes take effect within 60 seconds and are logged with old and new values Given a production incident When the kill switch is toggled off Then the previous behavior is restored within 60 seconds and all dependent services honor the flag state Given the simulator When a flag context is selected Then simulation respects the selected flag state
Policy import/export APIs and CI-style validation
Given authenticated API access via OAuth2 client credentials When calling POST /policies/validate with a JSON or YAML payload up to 1 MB containing 1–100 policies Then the response is 200 with pass/fail and per-policy errors; invalid payloads return 400 with field-level messages Given an idempotency key When calling POST /policies/import Then repeated requests with the same key do not create duplicate policies and return the original result id Given existing policies When calling GET /policies/export Then the API returns a signed JSON or YAML document with schema version, policies, and a checksum Given rate limiting of 60 requests per minute per client When the limit is exceeded Then the API returns 429 with a Retry-After header Given P95 performance targets When importing or exporting up to 100 policies Then responses complete within 2 seconds at P95
Security Controls & Device Trust
"As a security lead, I want device and user step-up checks during elevation so that elevated access only occurs on trusted devices and by verified users."
Description

Adds layered safeguards to elevation: biometric or PIN confirmation, device health checks, jailbreak/root detection, secure storage of short‑lived tokens, rate limiting, behavioral anomaly detection, and IP/geofence constraints. Supports step-up authentication (e.g., OTP or SSO re-auth) for sensitive scopes and immediate revocation on logout or device un‑enrollment. Ensures tokens are signed, non‑reusable, and expire aggressively to reduce risk.

Acceptance Criteria
Biometric/PIN Step-up Confirmation for Elevation
Given a caregiver initiates a JIT elevation on a supported device When the confirmation prompt is shown Then the user must successfully complete biometric authentication or enter a valid 6-digit app PIN before token issuance Given biometrics are unavailable or fail 3 times When the user attempts PIN fallback Then allow elevation upon correct PIN; after 5 failed PIN attempts within 10 minutes, lock elevation for 15 minutes and log SEC-ELV-LOCKOUT Given step-up confirmation succeeds When the elevation is granted Then record an audit event containing userId, deviceId, method (biometric|PIN), timestamp, requested scope, and clientId
Device Integrity and Enrollment Gate
Given a user requests elevation When device integrity attestation is performed (e.g., Play Integrity/App Attest) and MDM enrollment status is checked per policy Then block elevation if rooted/jailbroken, integrity=failed, or enrollment=non-compliant; return ELV_403_DEVICE_INVALID and write audit with reason Given a healthy attestation exists from prior check When elevation occurs within 10 minutes of that check Then reuse cached result; otherwise re-attest before proceeding Given elevation is blocked by integrity failure When the user views the result Then display remediation guidance and do not issue any token
Sensitive Scope Step-up Authentication
Given the requested elevation scope is marked sensitive (e.g., medication administration, PHI export, staff override) When the caregiver requests elevation Then require step-up authentication via either SSO re-auth (OIDC prompt=login) or 6-digit OTP before token issuance Given OTP step-up is selected When the user enters a code Then accept only valid codes within a 30-second step window, maximum 5 attempts; upon exceeding attempts, lock step-up for 10 minutes and log SEC-ELV-STEPUP-LOCKOUT Given SSO re-auth or OTP verification fails When the flow ends Then do not issue a token and log SEC-ELV-STEPUP-FAIL; upon success, proceed to token issuance
Token Security: Signed, Non‑Reusable, Aggressive Expiry
Given an elevation is approved When issuing a token Then issue a signed JWT using asymmetric keys (RS256 or EdDSA) with claims userId, deviceId, scope, clientId, aud, iat, jti, and exp <= 10 minutes from issuance Given any API call presents a previously seen jti When the server validates the token Then reject with 401 and log SEC-ELV-REPLAY; mark token as spent (one-time use) Given the app is backgrounded for more than 2 minutes or the device restarts When the app resumes Then purge in-memory token references and require re-elevation Given tokens are persisted When stored on device Then store only in the hardware-backed keystore/secure enclave with no plaintext storage
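One-time use of the `jti` claim can be sketched with a spent-token set. A production service would back this with a shared store and evict entries once `exp` passes:

```python
import time

class TokenGate:
    """Reject any jti seen before (replay) and any token past exp."""

    def __init__(self):
        self._spent = set()

    def validate(self, claims, now=None):
        now = time.time() if now is None else now
        if now >= claims["exp"]:
            return 401, "expired"
        if claims["jti"] in claims and False:  # placeholder guard, never taken
            pass
        if claims["jti"] in self._spent:
            return 401, "replay"
        self._spent.add(claims["jti"])  # one-time use: mark spent on first success
        return 200, "ok"
```

Signature verification (RS256/EdDSA per the criterion) would run before this check; the gate only enforces expiry and non-reuse.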
Elevation Abuse Protections: Rate Limiting and Lockouts
Given a user or device sends multiple elevation requests When thresholds are exceeded Then enforce limits: max 3 elevation attempts per user per minute and 10 per device per hour; return HTTP 429 with Retry-After and log SEC-ELV-RATE Given repeated PIN or OTP failures occur When failures reach 5 within 10 minutes Then lock elevation for that user on that device for 15 minutes and show an in-app banner with next-allowed-at timestamp Given a single source IP issues more than 50 elevation attempts within 5 minutes across users When detection triggers Then throttle that IP for 15 minutes and raise an alert to compliance/security
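The per-user limit (3 attempts per minute) fits a sliding-window counter; the same class could back the per-device and per-IP limits with different parameters:

```python
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` attempts per `window_s` seconds per key."""

    def __init__(self, limit=3, window_s=60):
        self.limit, self.window_s = limit, window_s
        self._hits = defaultdict(deque)

    def allow(self, key, now):
        q = self._hits[key]
        while q and now - q[0] >= self.window_s:
            q.popleft()             # drop hits that fell out of the window
        if len(q) >= self.limit:
            return False            # caller responds 429 with Retry-After
        q.append(now)
        return True
```

For example, `SlidingWindowLimiter(limit=10, window_s=3600)` would express the per-device cap from the criterion above.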
Risk-based Controls, IP Allowlist, and Geofence Enforcement
Given the agency configures IP allowlists or site geofences When an elevation request originates outside the allowed IP ranges or outside a 200 m radius of the scheduled visit location/time window Then block elevation or require approver per policy, and log SEC-ELV-GEOFENCE with coordinates and IP Given risk signals (e.g., time-of-day anomaly, new device, distance jump > 100 km in 1 hour, unusual scope) When the computed risk score >= 70/100 Then require step-up authentication; if risk score >= 90/100, block elevation and notify compliance via audit feed Given a risk decision is made When the audit record is written Then include risk factors, score, decision (allow/step-up/block), and evaluator version
Immediate Revocation on Logout or Device Un‑Enrollment
Given the user logs out or the device becomes unenrolled/non-compliant When the event is received by the backend Then revoke all active elevation tokens for that user-device within 5 seconds and send a push to purge local secure storage Given the device is offline at revocation time When it reconnects Then the app purges any cached tokens before making privileged requests, and the server rejects any late-presented tokens by jti blacklist with 401 Given revocation has occurred When the user attempts a privileged action Then the attempt fails within 5 seconds of the revocation event, and an audit event SEC-ELV-REVOCATION is recorded

Auto‑Expire Guard

Shift‑bound access tokens that end exactly at shift close, with idle‑timeout failsafes and local device timers that still revoke access if the phone goes offline. Prevents forgotten logins and after‑hours drift, while ensuring in‑progress notes are safely saved and resumable by authorized staff. Reduces PHI exposure without interrupting legitimate work.

Requirements

Shift-Bound Token Lifecycle
"As an operations manager, I want caregiver access to automatically end at scheduled shift close so that PHI is not accessible after hours."
Description

Issues and manages access tokens that are cryptographically bound to a caregiver’s scheduled shift window. Tokens are created at shift start and set to expire exactly at shift end using server-authoritative time, with immediate revocation at boundary crossings. Supports split and overlapping shifts, mid-shift schedule updates, and real-time revocation when shifts are ended early or reassigned. Integrates with CarePulse Scheduling to derive start/stop times and react to changes via events. Enforces least-privilege scopes per visit/patient, and ensures consistent behavior across iOS, Android, and web clients with clock-skew tolerance.

Acceptance Criteria
Token Issuance at Server-Verified Shift Start
Given a caregiver has a scheduled shift [T_start, T_end] in CarePulse Scheduling and valid login credentials And client local time may drift by up to ±120 seconds from server time When the caregiver authenticates any time before T_start Then the server does not issue a shift-bound access token until server time >= T_start When the caregiver authenticates at or after server time T_start Then the server issues one access token bound to [T_start, T_end) with exp = T_end and nbf = T_start And the token includes scopes only for the caregiver’s active assignments at T_start And iOS, Android, and Web receive the token within 2 seconds of successful authentication
Exact Server-Time Expiry at Shift End
Given a valid shift-bound token with exp = T_end When server time reaches T_end Then any API call using the token returns 401 Unauthorized within 1 second And a revocation event is pushed to the client within 5 seconds And no protected data payload is returned after T_end
Offline Revocation at Shift End (No Network)
Given a valid shift-bound token with exp = T_end and the device goes offline before T_end And the client has cached T_end from the server When the client local time reaches T_end + 120 seconds (skew buffer) Then the client blocks access to protected views and prevents use of the token for queued requests And upon reconnect, the server rejects the token and requires re-authentication
Reactive to Schedule Changes (Early End, Extension, Reassignment)
Given a valid shift-bound token for [T_start, T_end_original] When CarePulse Scheduling emits an event that ends the shift early (T_end_new < T_end_original) Then the server revokes the token immediately, API calls return 401 within 1 second, and a revocation event reaches the client within 5 seconds When CarePulse Scheduling emits an event that unassigns the caregiver from the current shift/visit Then the server revokes the token immediately, API calls return 401 within 1 second, and a revocation event reaches the client within 5 seconds When CarePulse Scheduling emits an event that extends the shift (T_end_new > T_end_original) Then the server issues a refreshed token with exp = T_end_new within 5 seconds without forcing logout
Handling Split and Overlapping Shifts
Given a caregiver has two scheduled shifts S1 [S1_start, S1_end] and S2 [S2_start, S2_end] that touch or overlap When the current server time is within only one shift Then the issued token scopes include only assignments for the active shift and exp equals that shift’s end time When the current server time is within the overlap of S1 and S2 Then the token scopes are the union of active assignments from S1 and S2 And the token exp equals the earliest upcoming end among the active shifts And at that boundary, the token is refreshed within 5 seconds to reflect remaining active shift assignments When there is a gap between S1_end and S2_start Then the token is revoked at S1_end and no new token is issued earlier than S2_start
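Computing the token scope and expiry for touching or overlapping shifts reduces to a union of active assignments plus the earliest upcoming end, at which point the token is refreshed:

```python
def active_token_params(shifts, now):
    """Scopes are the union of assignments across shifts active at `now`;
    exp is the earliest upcoming end among them (the refresh boundary).
    Returns None when no shift is active (no token is issued)."""
    active = [s for s in shifts if s["start"] <= now < s["end"]]
    if not active:
        return None
    return {
        "scopes": sorted({a for s in active for a in s["assignments"]}),
        "exp": min(s["end"] for s in active),
    }
```

In a gap between shifts the function returns None, matching the criterion that the token is revoked at S1_end and nothing new is issued before S2_start.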
Least-Privilege Scopes per Visit/Patient
Given a caregiver is assigned to patients/visits {A, B} during the current active shift segment When the caregiver requests access to a resource outside {A, B} Then the server responds 403 Forbidden with no PHI in the response body And an audit log entry is recorded with caregiver ID, resource, and timestamp When assignments change to {A} during the shift Then access to B is denied within 5 seconds and a refreshed token reflects the reduced scope
Cross-Platform Consistency and Clock-Skew Tolerance
Given iOS, Android, and Web clients may have local clock skew of up to ±120 seconds When performing token issuance, refresh, expiry, and revocation across platforms Then server time is authoritative and no platform can access protected APIs after T_end; such calls receive 401 And revocation/refresh events are delivered to online clients within 5 seconds of the triggering condition And offline clients enforce local revocation no later than T_end + 120 seconds And automated tests for each platform pass the same scenarios with identical expected outcomes
Idle Timeout & Smart Activity Detection
"As a caregiver, I want the app to log me out after inactivity with a warning so that my patients’ data stays secure without losing my work."
Description

Implements configurable inactivity detection that monitors meaningful user activity (note editing, voice capture, route navigation) across foreground and background states. Displays pre-timeout warnings and soft-locks the session on idle, auto-saving drafts before lock. Requires quick re-authentication to resume within the same active shift without data loss. Provides role-based timeout policies, accessibility-friendly alerts, and safe handling of long-running actions (e.g., continuous audio recording) to prevent unintended lockouts.
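The active → warning → locked progression can be sketched as a tiny state machine driven by a last-activity timestamp, with the timeout and lead values supplied by the role policy:

```python
class IdleSession:
    """active -> warning (at timeout - lead) -> locked (at timeout).
    Any meaningful activity resets the timer."""

    def __init__(self, idle_timeout_s=300, warning_lead_s=60):
        self.timeout, self.lead = idle_timeout_s, warning_lead_s
        self.last_activity = 0.0

    def activity(self, now):
        """Call on any meaningful interaction (typing, voice start/stop,
        navigation event) to reset the idle timer."""
        self.last_activity = now

    def state(self, now):
        idle = now - self.last_activity
        if idle >= self.timeout:
            return "locked"    # drafts must be auto-saved before this point
        if idle >= self.timeout - self.lead:
            return "warning"   # show countdown, focus "Stay Signed In"
        return "active"
```

Long-running actions such as continuous recording would be modeled as periodic calls to `activity`, which is why they prevent soft-lock per the criteria below.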

Acceptance Criteria
Foreground Inactivity Warning and Lock
Given a logged-in user with policy idle_timeout=5 minutes and warning_lead=60 seconds When no meaningful activity (typing, tapping, scrolling, voice capture start/stop, navigation interaction) occurs for 4 minutes Then a pre-timeout warning modal appears with a visible countdown from 60 to 0 seconds and focus set to the “Stay Signed In” action Given the pre-timeout warning is visible When the user performs any meaningful activity before the countdown reaches 0 Then the countdown is dismissed and the idle timer resets to 0 seconds Given the pre-timeout warning is visible When the countdown reaches 0 without any meaningful activity Then the session is soft-locked immediately and the lock timestamp is recorded locally
Background/Offline Idle Timer Enforcement
Given the app is sent to background and the device may be offline with policy idle_timeout=5 minutes When no meaningful background activity occurs for 5 minutes Then the session is soft-locked locally and the lock is enforced immediately upon next foreground without requiring a server call Given the app remains in foreground while offline When the idle timeout elapses Then a soft-lock occurs and is enforced using the local device timer Given a soft-lock occurred while offline When network connectivity is restored Then the server session is synchronized within 5 seconds to reflect the local lock time and tokens are invalidated accordingly
Auto-Save on Soft-Lock During Note Editing
Given the user is actively editing a visit note or form When a pre-timeout warning appears or a soft-lock is triggered Then the current draft, including text and attachments metadata, is auto-saved within 2 seconds without data loss Given the draft was auto-saved due to idle soft-lock When the user re-authenticates within the same active shift Then the draft reopens to the exact prior screen with cursor position and unsent attachments intact Given an auto-save completes Then a visible “Saved” status with timestamp is displayed to the user
Quick Re-Authentication to Resume Within Shift
Given a session is soft-locked and the user’s shift is still active When the user re-authenticates via configured quick method (PIN or biometric) Then access is restored within 2 seconds to the exact prior screen state and the idle timer resets Given a session is soft-locked and the user’s shift has ended When the user attempts quick re-authentication Then access is denied and the user is prompted for full sign-in for the next shift per policy Given a successful quick re-authentication within an active shift Then no data loss occurs and any in-progress uploads resume automatically
Role-Based Timeout Policy Enforcement
Given role policies exist (e.g., RN idle_timeout=5m, Scheduler idle_timeout=10m, warning_lead=60s) When a user with a given role signs in Then the app enforces that role’s idle timeout and warning lead values and displays the correct countdown timing Given a user switches to a task context with a stricter role policy When the context change occurs Then the stricter timeout applies immediately and the UI reflects the new timeout in the next warning or countdown Given an admin updates a role’s timeout policy on the server When the device receives the policy sync Then new values take effect within 60 seconds without requiring the user to sign out
Accessibility-Compliant Pre-Timeout Alerts
Given device accessibility settings (screen reader on, dynamic type, high contrast, reduced motion) are enabled When the pre-timeout warning is displayed Then the alert is screen-reader accessible (proper labels and focus), respects dynamic type, avoids motion animations, and provides at least one secondary modality (haptic or tone) respecting system settings Given the pre-timeout warning is displayed When the user activates “Stay Signed In” via keyboard, switch control, or screen reader within the countdown Then the idle timer resets and the warning dismisses without requiring precise gestures Given a user has hearing or vision impairment preferences enabled When the pre-timeout warning triggers Then the alert uses two distinct modalities (visual plus haptic or audio) that can be configured in Settings
Safe Handling of Long-Running Actions (Audio Recording & Navigation)
Given continuous audio recording is in progress When idle timeout would otherwise elapse Then the recording continues uninterrupted, a non-blocking warning banner appears, and the session does not soft-lock until recording stops or max recording duration is reached Given turn-by-turn route navigation is active with periodic navigation events When idle timeout would otherwise elapse Then navigation events count as meaningful activity and prevent soft-lock as long as events occur at least every 60 seconds Given recording or navigation ends and no other activity occurs When the warning lead time completes Then the session soft-locks and all captured audio and navigation logs are saved successfully before locking
Offline Local Expiry Enforcement
"As a security officer, I want access to expire locally even if a device goes offline so that lost connectivity doesn’t extend PHI exposure."
Description

Enforces token expiry locally using a secure device timer and signed token claims so access is revoked even without network connectivity. Runs a background watchdog that invalidates credentials, locks PHI views, and clears decrypted caches at the exact expiry time. Detects and mitigates clock tampering via monotonic time checks and rollback detection. On reconnection, reconciles any state changes with the server and records a tamper-evident audit entry. Ensures compliance with iOS/Android background execution limits and low-power modes.
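Rollback detection compares wall-clock progress against monotonic-clock progress: if the wall clock loses roughly two minutes or more relative to monotonic time, the session locks. A sketch, with illustrative field names:

```python
class ClockGuard:
    """Snapshot both clocks at session start; later samples reveal
    rollback when monotonic time has advanced more than wall time."""

    def __init__(self, wall, mono, tolerance_s=120):
        self.wall0, self.mono0 = wall, mono
        self.tolerance = tolerance_s

    def tampered(self, wall, mono):
        wall_delta = wall - self.wall0
        mono_delta = mono - self.mono0
        # Positive mismatch means the wall clock moved backward
        # relative to real elapsed (monotonic) time.
        return (mono_delta - wall_delta) > self.tolerance
```

On a device this would sample `time.time()` and `time.monotonic()`; the guard state would persist across restarts so the tamper event survives until it can be uploaded, as the criteria require.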

Acceptance Criteria
Offline Shift End Exact Lockout
Given a valid session token with an exp claim and the device has no network connectivity When the device monotonic clock reaches the exp time Then the session is invalidated locally within 1 second of exp And PHI UI routes are blocked and show the lock screen And protected requests are not sent and any pending uploads are paused And in-progress notes are saved to encrypted local storage
Offline Idle Timeout Enforcement
Given a configured idle_timeout of N minutes and the device is offline When there is no user interaction for the idle_timeout duration Then the app locks within 1 second of timeout And decrypted PHI in memory is purged And reopening requires local auth (biometric/PIN) and a still-valid token And if the token expired during idle lock, access remains blocked until a new token is obtained
Background Watchdog in Low-Power Mode
Given the app is backgrounded and the OS is in Low Power Mode with background execution limits And a session token with an exp claim exists When exp is reached while still backgrounded Then a background watchdog task executes and invalidates the session within 3 seconds of exp And sensitive content is removed from notifications and widgets And on next foreground, the lock screen is shown before any PHI is rendered
Clock Rollback Tamper Detection
Given the device wall clock is adjusted backward by 2 minutes or more while a session is active When the monotonic elapsed time indicates a rollback relative to last trusted wall-clock Then the app immediately locks PHI access And a local tamper event is recorded with monotonic timestamp, delta, and device ID And the event persists across app restarts until uploaded
PHI Decrypted Cache Clearance on Lock
Given decrypted PHI exists in memory, caches, or temporary files during an active session When a lock is triggered due to expiry or idle timeout Then all decrypted in-memory buffers are zeroized And on-disk temporary files containing PHI are deleted or re-encrypted within 2 seconds And subsequent local storage inspection returns no plaintext PHI artifacts And reopening the app shows no PHI in recent views or thumbnails
Reconnection Reconciliation and Audit Log
Given a local expiry or tamper event occurred while offline When network connectivity is restored Then the client sends a signed state report including token claims, exp time, monotonic timestamps, and tamper flags within 5 seconds And the server persists a tamper-evident audit record linked to user, device, and shift_id And the client remains locked until a new valid token is issued by the server And any in-progress notes saved locally are listed as drafts and can be resumed after re-auth without data loss
Offline Token Signature and Claim Validation
Given a JWT access token containing exp and shift_id signed by the CarePulse issuer When the app starts or resumes while offline Then the token signature is validated against a pinned public key And tokens with invalid signatures or missing exp are rejected and no PHI is accessible And tokens with exp in the past are treated as expired and blocked And the exp claim cannot be extended locally; only a refreshed token from the server re-enables access
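A minimal, stdlib-only sketch of the offline validation path described in the last criterion. It uses symmetric HS256 so the example is self-contained and testable; the criterion itself pins an asymmetric public key (e.g., RS256/ES256), so treat the key handling here as an illustrative stand-in, and the `mint_token` helper as a stand-in for the server-side issuer.

```python
import base64, hashlib, hmac, json

def _b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def _b64url_decode(seg: str) -> bytes:
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def mint_token(claims: dict, key: bytes) -> str:
    """Illustrative issuer-side minting (HS256) so the validator below
    can be exercised; the real issuer is the CarePulse server."""
    header = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url_encode(json.dumps(claims).encode())
    sig = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url_encode(sig)}"

def validate_token_offline(token: str, key: bytes, now: float) -> dict:
    """Offline validation per the criteria above: verify the signature,
    require an exp claim, and treat a past exp as expired. Raises
    PermissionError so callers block all PHI access on failure."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        raise PermissionError("malformed token")
    expected = hmac.new(key, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise PermissionError("invalid signature")
    claims = json.loads(_b64url_decode(payload_b64))
    if "exp" not in claims:
        raise PermissionError("missing exp claim")
    if now >= claims["exp"]:
        raise PermissionError("token expired")  # only a server refresh re-enables access
    return claims
```

Because the exp claim is covered by the signature, it cannot be extended locally; any edit to the payload invalidates the token.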
Safe Draft Save & Authorized Resume
"As a caregiver, I want my in-progress notes to be saved and transferred to authorized staff if my access expires so that patient care documentation is not lost."
Description

Continuously autosave in-progress notes, voice clips, and sensor-derived entries to encrypted local storage and sync them to the server when connectivity allows. On idle-lock or token expiry, preserve drafts without leaving them accessible to the user whose session has expired. Provide a handoff workflow that allows authorized staff (e.g., the next-shift caregiver or a supervisor) to resume and complete the documentation with full version history and attribution. Enforce PHI protections, prevent data loss, and maintain clear auditability of who authored and who finalized each entry.
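The autosave cadence above (persist within the interval of the last change, at most once per interval while edits continue, and force-save on lock) can be sketched as a small scheduler. The `persist` callable stands in for the app's actual encryption-and-storage layer, which this spec does not define; the class name is hypothetical.

```python
class AutosaveScheduler:
    """Sketch of the autosave policy: save within interval_s of an
    unsaved change, and flush immediately on idle-lock or expiry."""

    def __init__(self, persist, interval_s: float = 10.0):
        self._persist = persist          # stand-in for encrypted storage
        self._interval = interval_s
        self._dirty_since = None         # time of first unsaved change
        self._draft = None

    def on_change(self, now: float, draft) -> None:
        """Record an edit; start the save timer on the first unsaved change."""
        self._draft = draft
        if self._dirty_since is None:
            self._dirty_since = now

    def tick(self, now: float) -> bool:
        """Call periodically; returns True when a save was written."""
        if self._dirty_since is None:
            return False
        if now - self._dirty_since >= self._interval:
            self._persist(self._draft)
            self._dirty_since = None
            return True
        return False

    def flush(self, now: float) -> bool:
        """Force a save, e.g., just before idle-lock or token expiry."""
        if self._dirty_since is None:
            return False
        self._persist(self._draft)
        self._dirty_since = None
        return True
```

With a 10-second interval this bounds data loss to the last autosave window, matching the crash-recovery criterion.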

Acceptance Criteria
Continuous Autosave and Crash Recovery
Given autosave interval is configured to 10 seconds And caregiver U is composing a visit note with text, a voice clip, and sensor-derived entries And the device is offline When U makes changes for at least 1 second Then the app writes an encrypted autosave within 10 seconds of the last change And continues to autosave at most every 10 seconds while changes continue And if the app is force-closed or the device loses power mid-entry Then upon relaunch within 60 minutes and before token expiry, the latest draft (text, voice clip up to last buffered segment, sensor entries up to last received sample) is recoverable And no data loss exceeds the last 10-second autosave window
Encrypted Draft Storage and PHI Non-Exposure
Given a draft exists locally on the device Then the draft at rest is encrypted using a platform keystore–backed cipher (AES-256 or equivalent) And no plaintext draft fragments exist in app sandbox, caches, logs, or crash reports And the task switcher/recents snapshot and lock screen show no PHI (privacy-safe placeholder only) And notifications never include PHI content from drafts And copying draft content to the clipboard is disabled unless explicitly allowed by org policy and confirmed by the user
Idle Timeout Preserves Draft and Requires Re-Auth
Given org idle-timeout is set to 5 minutes And U is signed in and has an unsubmitted draft open When no interaction occurs for 5 minutes Then the app locks the session immediately And the draft is securely autosaved before lock And U cannot view or edit the draft until re-authenticating And upon successful re-auth within the active shift window, the draft is restored within 3 seconds
Shift-End Expiry During Active Entry (Offline)
Given U's shift ends at 12:00 local time And the device has no network connectivity And U is actively entering a note at 11:59:50 When 12:00:00 is reached by the device's secure local timer Then input is stopped and an access-expired screen is shown within 2 seconds And the current draft state (including voice recording gracefully stopped and saved up to the last buffered audio, and latest sensor entries) is autosaved And U cannot view or edit the draft after 12:00:00 without new authorization And no draft content is visible in the UI after expiry And when network returns, the draft syncs to the server under U's authorship and is marked "requires authorized resume"
Authorized Handoff and Resume With Version History
Given a draft marked "requires authorized resume" exists for client C And next-shift caregiver N or supervisor S has active permission for client C When N or S selects Resume and provides a required reason Then access is granted only if role-based and assignment checks pass And the resume event records actor, time, reason, and device ID And all subsequent edits are attributed to N/S with version diffs preserved in history And finalization requires signature/attestation and displays full author and finisher attribution
Reliable Sync, De-duplication, and Version History
Given multiple offline autosaves exist across two devices for the same draft When both devices come online Then the server de-duplicates identical versions and preserves divergent versions as branches with timestamps and authors And the most recent version is selected as current while others remain accessible in history And no content from any version is lost And sync completes within 30 seconds over a sustained 5 Mbps connection for a 10 MB draft with clips
End-to-End Auditability and Export
Given lifecycle events occur: autosave, idle-lock, token-expiry, resume, edit, finalize, unauthorized-access-attempt When an auditor requests the audit report for client C and visit V Then the system returns a tamper-evident log with UTC timestamp, actor ID, role, device ID, IP (if online), event type, and outcome And the report exports as PDF and CSV within 10 seconds And the audit report contains no PHI beyond necessary metadata and redacted titles
Admin Emergency Extend/Override
"As an operations manager, I want to grant a short, justified access extension so that caregivers can finish legitimate tasks without violating policy."
Description

Enable managers to issue tightly scoped, time-limited access extensions past shift end with mandatory justification codes, notes, and 2FA confirmation. Apply extensions at the minimum necessary scope (per patient, visit, or task) with configurable hard caps and automatic revocation at the new boundary. Notify affected users, log all actions immutably, and surface policy compliance checks. Provide immediate revoke capability for erroneous or abused extensions.
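The issuance checks described above can be sketched as a single validation pass that returns every policy violation at once (so the UI can show each failed compliance indicator). The cap values, daily limit, and field names here are illustrative assumptions, not CarePulse's actual policy schema.

```python
from dataclasses import dataclass

# Illustrative hard caps per scope, in minutes (admin-configurable per the text).
HARD_CAPS_MIN = {"task": 30, "visit": 60, "patient": 120}
DAILY_LIMIT_PER_USER = 3  # assumed per-caregiver daily extension limit

@dataclass
class ExtensionRequest:
    scope: str            # "task" | "visit" | "patient"
    duration_min: int
    justification_code: str
    note: str
    two_fa_ok: bool

def validate_extension(req: ExtensionRequest, issued_today: int) -> list:
    """Return the list of policy violations; an empty list means the
    extension is issuable. A sketch of the checks, not the real API."""
    violations = []
    if not req.two_fa_ok:
        violations.append("2FA not completed")
    if not req.justification_code:
        violations.append("justification code required")
    if len(req.note.strip()) < 15:
        violations.append("note must be at least 15 characters")
    cap = HARD_CAPS_MIN.get(req.scope)
    if cap is None:
        violations.append("unknown scope: " + req.scope)
    elif req.duration_min > cap:
        violations.append(f"duration exceeds {cap}-minute cap for scope '{req.scope}'")
    if issued_today >= DAILY_LIMIT_PER_USER:
        violations.append("per-user daily extension limit reached")
    return violations
```

When the list is empty, the server would set the expiry to now + approved duration and write the audit entry; otherwise the Issue action stays disabled with the specific violations shown.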

Acceptance Criteria
Manager issues scoped extension with 2FA and justification
Given an admin with Manage Extensions permission and a caregiver associated with a scope And the caregiver’s shift has ended or is within 15 minutes of end And the admin selects a scope (patient, visit, or task) And enters a justification code from the approved list and a free-text note of at least 15 characters And successfully completes 2FA within 60 seconds When the admin sets a duration within policy and clicks Issue Then the system creates an extension limited to the selected scope And sets the new expiry to the approved time And returns a success response within 2 seconds And displays the new expiry to both admin and caregiver And records the action in the audit log
Minimum necessary scope enforcement
Given a caregiver has an active task within a visit for a patient When an admin attempts to issue an extension broader than the narrowest applicable scope Then the system prompts to choose the narrower scope and explains why And the admin must explicitly confirm any broader-scope override with an additional reason And the override decision is logged with the extension When an extension is issued Then the token grants access only to resources within the chosen scope And API requests outside the scope are denied with 403 And the UI hides or disables access to out-of-scope records Given no linked task or visit exists for the caregiver When the admin selects patient scope Then the system allows issuance without an override prompt
Hard caps and compliance checks at issuance
Given org policy defines hard caps for duration by scope and per-user daily limits When the admin inputs a duration or count that exceeds any hard cap Then the Issue action is disabled and a clear validation message states which cap is violated When inputs meet policy Then compliance indicators show Pass for role, 2FA, scope, and caps And clicking Issue succeeds and sets expiry to now + approved duration (to the nearest second) And issuance is blocked if the admin lacks the required role or 2FA is not completed
Automatic revocation at new boundary and idle-timeout honored
Given an extension is active When current time reaches the extension expiry Then the caregiver’s access is revoked within ±5 seconds of the expiry time And any configured idle-timeout continues to apply unchanged during the extension And on offline devices, a local timer enforces the same expiry without network connectivity And any in-progress notes are auto-saved within 5 seconds of revocation and are resumable on next authorized login
Immediate revoke of an active extension
Given an extension is active When an admin selects Revoke now and confirms with 2FA Then online sessions tied to the extension are invalidated within 15 seconds And subsequent API calls with the revoked token return 401 or 403 And on next network reconnect, offline devices receive the revoke and lose access within 10 seconds And a revoke audit entry is created and linked to the original extension
Notifications without PHI
Given an extension is issued or revoked When the event is processed Then the affected caregiver and the issuing admin receive in-app notifications within 15 seconds And notifications include scope type, new expiry or revoke time, and justification code label, but no PHI And push/email alerts follow org notification settings and delivery status is logged
Immutable audit log and reporting
Given an extension is issued or revoked When viewing Audit Logs and Compliance Reports Then each entry includes actor, target user, scope type, resource identifiers, old/new expiry, justification code, free-text note, 2FA result, timestamps, compliance outcomes, and IP/device metadata And entries are append-only and tamper-evident; attempts to modify or delete are blocked and logged And reports are filterable by date range, admin, caregiver, scope, and justification, and exportable to CSV of up to 10,000 rows within 5 seconds
Audit Trail & Compliance Reporting
"As a compliance officer, I want a detailed access and expiry audit trail so that I can demonstrate adherence during audits."
Description

Capture comprehensive, immutable events for token issuance, refresh, idle warnings, locks, local and remote expiries, overrides, draft saves, handoffs, and re-authentications with timestamps, user/device identifiers, and optional location (per policy). Integrate with CarePulse’s reporting to deliver one-click, audit-ready exports filtered by date range, user, patient, or visit. Enforce retention, redaction, and access controls suitable for HIPAA-aligned auditing and incident response.
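The tamper-evidence mechanism (each event carrying `prev_hash` and `event_hash`) can be sketched as a simple hash chain: any in-place modification of a stored event breaks verification of every later link. This in-memory class is illustrative; a real store would be append-only at the storage layer as well.

```python
import hashlib, json

class AuditChain:
    """Sketch of a tamper-evident, append-only event log using a
    SHA-256 hash chain (prev_hash, event_hash), as described above."""

    GENESIS = "0" * 64

    def __init__(self):
        self._events = []

    def append(self, event: dict) -> dict:
        prev = self._events[-1]["event_hash"] if self._events else self.GENESIS
        record = dict(event, prev_hash=prev)
        # Hash the canonical JSON of the record (sorted keys) so the
        # digest is deterministic across writers.
        record["event_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self._events.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; False if any event was altered or reordered."""
        prev = self.GENESIS
        for rec in self._events:
            body = {k: v for k, v in rec.items() if k != "event_hash"}
            if rec["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != rec["event_hash"]:
                return False
            prev = rec["event_hash"]
        return True
```

Exports can then include `prev_hash`/`event_hash` alongside each event so an auditor can re-run the same verification independently.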

Acceptance Criteria
Immutable Event Capture for Token Lifecycle
Given a signed-in caregiver with an active shift-bound token When the system processes token events (issue, refresh, idle_warning, lock, local_expiry, remote_expiry, override, draft_save, handoff, re_auth) Then an append-only event is written within 200 ms of the trigger containing: event_type, timestamp_utc (ms), user_id, device_id, session_id, token_fingerprint (SHA-256), org_id, policy_version, and optional location fields when policy_enabled=true. And the event store enforces immutability with no updates/deletes and maintains a cryptographic hash chain (prev_hash, event_hash) for tamper evidence. And failures to write are retried up to 3 times with exponential backoff and produce an on-call alert on final failure.
Idle Timeout Warning and Lock Events Logged
Given an idle timeout policy of T minutes When user inactivity reaches T - 1 minutes Then an idle_warning event is recorded and a visible warning is displayed to the user. When inactivity reaches T minutes Then a lock event is recorded, access is blocked, and a draft_save event exists for any in-progress note within 1 second of lock. And upon successful re_auth, a re_auth event is recorded linking to the prior session_id and restoring drafts.
Offline Local Expiry with Deferred Sync
Given the device is offline at or beyond scheduled shift end When the local shift timer reaches end_time Then a local_expiry event is recorded to a durable local queue, the token is revoked locally, and API calls return 401 until re-auth. And when connectivity is restored Then queued events sync within 60 seconds, preserving original event timestamps and order; duplicate expiry records are de-duplicated server-side.
Audit Report Export with Filters and Redaction
Given a Compliance Admin opens Audit Reporting When they request an export filtered by date range, user(s), patient(s), and/or visit(s) Then the system returns downloadable CSV and JSON within 10 seconds for up to 100,000 events. And exported fields include: event_type, timestamp_utc, user_id, role, device_id, session_id, token_fingerprint, org_id, visit_id, patient_id (tokenized/redacted per policy), location (if enabled), prev_hash, event_hash, policy_version. And PHI fields (free-text notes, voice clips) are excluded; patient identifiers are redacted or tokenized per policy; the applied redaction policy_version is included in the export header.
Role-Based Access Controls for Audit Logs
Given RBAC policies are configured When a Compliance Admin or Security Analyst accesses audit logs Then full, unredacted event details are viewable; all access generates an access_log event. When an Operations Manager accesses audit logs Then only redacted fields are visible; attempts to access restricted fields are denied and logged as access_denied. And caregivers cannot access audit logs. And all audit log access requires MFA and just-in-time elevation with a session duration <= 60 minutes.
Retention Policy Enforcement and Legal Hold
Given an organization retention policy of 6 years (configurable) When events exceed the retention period and are not under legal hold Then a daily purge permanently deletes redactable PHI fields, retains integrity metadata where permitted, and records a purge event with counts. And when legal_hold=true for an org, user, patient, visit, or incident Then no purge occurs for matching events; holds record created_by, reason, scope, and expiry. And upon a patient right-to-delete action per policy Then patient_id is replaced by a non-reversible token, PHI references are removed, and a redaction event is recorded.
Incident Response Traceability and Handoff Timeline
Given an incident time window and optional filters (user, patient, visit) When a Security Analyst clicks Generate Incident Bundle Then the system produces a strictly time-ordered timeline with correlation_ids across sessions including issuance, refresh, idle_warning, lock, local_expiry, remote_expiry, override, draft_save, handoff (from_user, to_user), and re_auth events. And the bundle is exportable as a signed JSON with SHA-256 checksum and verification metadata, generated within 5 seconds for up to 5,000 related events.
Timezone & Shift Boundary Edge Handling
"As a distributed scheduler, I want access expiry to respect time zones and schedule changes so that caregivers aren’t cut off early or left with extended access."
Description

Compute precise expiry across time zones, DST transitions, cross-midnight shifts, and last-minute schedule changes using server-authoritative time. Handle caregivers traveling across time zones mid-shift and ensure countdowns and warnings remain accurate on-device with clock-skew tolerance. Automatically reissue or adjust tokens when schedules are edited during an active shift, and provide extensive automated tests for edge cases and regressions.
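Server-authoritative expiry across DST is largely a matter of resolving the scheduled end time in the agency's timezone and pinning it to UTC; Python's `zoneinfo` applies the correct offset on either side of a transition. The function and the fall-back-night example below are illustrative sketches, assuming the scheduler supplies the end time as a local ISO date-time plus an IANA timezone name.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def expiry_utc(end_local_iso: str, tz_name: str) -> datetime:
    """Sketch: resolve a scheduled shift end in the agency's timezone
    and pin it to UTC, which the server compares request times against.
    For an end time inside a repeated fall-back hour, setting fold=1 on
    the datetime would select the post-transition (standard-time)
    offset, matching the criteria above."""
    tz = ZoneInfo(tz_name)
    end_local = datetime.fromisoformat(end_local_iso).replace(tzinfo=tz)
    return end_local.astimezone(timezone.utc)
```

On the US fall-back night (e.g., 2025-11-02 in America/New_York) a 00:00–07:00 shift spans eight elapsed hours even though the schedule reads seven, and the UTC expiry moves accordingly; the client countdown should be driven from this UTC instant, never from local wall-clock arithmetic.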

Acceptance Criteria
Cross‑Midnight Shift Expiry (Server‑Authoritative)
Given a shift scheduled 22:00–06:00 in the agency’s timezone and a caregiver authenticated using Auto‑Expire Guard And the device clock may be skewed by up to ±90 seconds When server time reaches 06:00:00 in the shift’s timezone Then the access token is invalidated server‑side at 06:00:00 with ≤1 second grace And the next API call receives 401/expired within ≤5 seconds of 06:00:00 And the device UI revokes access and displays a “Shift ended” message within ≤5 seconds of expiry And any in‑progress notes are autosaved to draft before revocation and are resumable by authorized staff on next login And the on‑device countdown never diverges from server time by more than 5 seconds
DST Fall Back Handling (25‑Hour Night)
Given a shift scheduled 00:00–07:00 on the night of DST fall back in the agency’s timezone And caregiver is logged in throughout the DST transition (clocks repeat 01:00–02:00) When server time passes through the repeated hour Then the token expiry is 07:00:00 local standard time (post‑fall‑back) computed by server And the on‑device countdown increases by exactly 1 hour at the transition without double‑revocation And no API call is accepted after 07:00:00 standard time; first post‑expiry call returns 401 within ≤5 seconds And audit logs record the DST transition and the single definitive expiry timestamp
DST Spring Forward Handling (23‑Hour Night)
Given a shift scheduled 23:00–07:00 spanning a DST spring forward in the agency’s timezone And caregiver is logged in across the skipped hour (clocks jump 02:00→03:00) When server time reaches 07:00:00 local daylight time Then the token expires exactly at 07:00:00 daylight time as computed by server And the on‑device countdown decreases by exactly 1 hour at the jump without late expiry And no API call is accepted after 07:00:00 daylight time; first post‑expiry call returns 401 within ≤5 seconds And audit logs contain the pre/post offset and final expiry timestamp
Mid‑Shift Timezone Change During Travel
Given a caregiver starts a shift in Timezone A and travels into Timezone B while the shift is active And the schedule’s authoritative timezone is the agency’s timezone When the device local timezone changes mid‑shift Then the token’s expiry remains bound to the schedule’s timezone per server, unchanged by device timezone And on‑device countdown adjusts to display remaining time without a jump >5 seconds in either direction And warning toasts at T‑15 min and T‑5 min still fire relative to server remaining time (±5 seconds) And no logout or token refresh is triggered solely by the device timezone change
Active Shift Schedule Edit Auto‑Adjusts Token
Given a caregiver is mid‑shift and an operations manager edits the shift end time in the scheduler When the end time is extended by 30 minutes and saved Then the backend updates the token expiry to the new end time within ≤10 seconds and emits an update event And the device receives the update via push within ≤30 seconds or on next API call, whichever is sooner, and updates countdown without interrupting work When the end time is shortened by 30 minutes and saved Then the device displays a prominent warning immediately and enforces the new earlier expiry; revocation occurs no sooner than 60 seconds after the device receives the change if less than 5 minutes remain, otherwise standard T‑5 min and T‑1 min warnings fire And in all cases, in‑progress notes remain saved as drafts with no data loss; audit logs include prior and new expiry with editor identity
Offline Device Idle Timeout and Local Revocation
Given idle timeout T is configured to 10 minutes and the device goes offline during an active shift And the app has a last known server time offset for clock‑skew correction When there is no user activity for T minutes per local timer adjusted by the last known offset Then the app locally locks and hides PHI, requiring re‑auth to continue And drafts are saved locally (encrypted at rest) before lock; no data loss occurs And upon reconnection, if the server token is expired the session remains locked; if the token is still valid and the user re‑authenticates, access is restored without data loss And idle timeout enforcement tolerance is ±30 seconds relative to configured T; no lock occurs if continuous input is detected
Automated Time Edge‑Case Regression Suite
Given the CI pipeline runs on each commit to the Auto‑Expire Guard module When the test suite executes Then it includes parameterized tests for: cross‑midnight expiry, DST fall back, DST spring forward, mid‑shift timezone change, clock skew of ±5 minutes, offline idle timeout, active schedule edits (extend/shorten), leap day (Feb 29), and UTC midnight boundaries And 100% of these tests pass And line/branch coverage for time computation and token lifecycle code is ≥90% And property‑based tests verify countdown monotonicity and expiry invariants across randomized offsets and transitions

BreakGlass Override

Emergency override for true edge cases with mandatory reason codes, short default durations, and instant notifications to supervisors. Access is narrowly scoped, watermark‑tagged, and heavily audited, then auto‑locks when the window ends. Enables fast client safety actions while keeping exceptional access transparent and accountable.

Requirements

Just-in-Time Privilege Elevation
"As a caregiver in the field, I want temporary, limited access to critical client information during emergencies so that I can make safe, timely decisions without exposing unrelated data."
Description

Implements narrowly scoped, temporary permission elevation that unlocks only the minimum data and actions required during an emergency (e.g., locked visit notes, client medication list, emergency contacts, route/geofence override). The override is bound to a specific client, session, and action set, integrates with existing CarePulse RBAC, and issues a time-limited permission token with explicit scope tags. Non-essential PHI remains masked. Works across mobile and web with consistent policy enforcement at the API gateway. Expected outcome: caregivers can act quickly while exposure is minimized and fully attributable.
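Gateway-side enforcement of a token bound to a client, session, and action set reduces to a small authorization check. The status-code split (401 for expired or wrong-session tokens, 403 for out-of-scope requests) mirrors the criteria in this spec; the type and function names are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BreakGlassToken:
    """Illustrative scope tags for a time-limited elevation token."""
    token_id: str
    client_id: str
    session_id: str
    actions: frozenset
    expires_at: float  # epoch seconds

def authorize(token: BreakGlassToken, session_id: str, client_id: str,
              action: str, now: float) -> int:
    """Sketch of gateway enforcement; returns an HTTP-style status.
    401: expired token or wrong originating session.
    403: request outside the elevated client/action scope.
    200: request falls inside the active, narrowly scoped window."""
    if now >= token.expires_at or session_id != token.session_id:
        return 401
    if client_id != token.client_id or action not in token.actions:
        return 403
    return 200
```

Because the check runs at the gateway, mobile and web clients get identical outcomes for the same token, and every 403 can be audit-logged with the tokenId.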

Acceptance Criteria
Edit Locked Visit Note during Emergency
Given a caregiver is authenticated without edit permission for locked visit notes and is viewing client X's locked note on mobile When the caregiver triggers BreakGlass, selects client X, chooses action "edit_locked_visit_note", provides a required reason code, and confirms Then the system issues a time-limited token (default 10 minutes) with scope tags [client:X, actions:edit_locked_visit_note], bound to the current session and device And the locked note for client X becomes editable only during the active window And attempts to edit notes for any other client or to perform unscoped actions return 403 and are audit-logged And the UI displays a persistent "BreakGlass Active" watermark and countdown timer And on expiry or manual end, the note reverts to read-only within 5 seconds and the token is revoked
Medication List View with PHI Masking
Given BreakGlass is active for client X with scope [view_medication_list] When the caregiver opens the medication list Then the UI shows medication name, dosage, route, frequency, and last administered time only And non-essential PHI (e.g., insurance identifiers, SSN, unrelated documents) remains masked or omitted And API responses include only whitelisted fields for this scope and redact others And attempts to access full chart export, attachments, or unrelated PHI return 403 and are audit-logged
Emergency Route/Geofence Override
Given the caregiver is en route to client X and geofence validation is failing When BreakGlass is activated for client X with scope [route_geofence_override] and a reason code is provided Then arrival/onsite status for client X can be set/overridden once during the active window And ETA and route optimizations update within 15 seconds And no other client's route or geofence can be modified And the override auto-reverts when the window ends and is audit-logged with location snapshot and device ID
Scope-Bound Token to Client and Session
Given a BreakGlass token is issued Then the token contains explicit scope tags [tokenId, clientId, actionSet, issuedAt, expiresAt, sessionId, deviceId] And the token is valid only for the originating session and device And using the token after logout, app kill, or session timeout results in 401/403 within 5 seconds of the event And the token cannot be used for other clients or actions; such attempts return 403 and are audit-logged And the token introspection endpoint returns active=false within 5 seconds of revocation/expiry
Mandatory Reason Code and Default Duration
Given the caregiver attempts to activate BreakGlass When no reason code is provided Then activation is blocked with an inline error indicating "Reason code required" When a valid reason code is provided Then the default duration is set to 10 minutes (admin-configurable, max 30 minutes) And the caregiver may shorten but not exceed the max duration And a visible countdown timer is shown during the window And manual early termination immediately revokes access and ends the window
Immediate Supervisor Notification and Audit Trail
Given BreakGlass is activated for client X Then the assigned supervisor and on-call group receive notifications within 15 seconds via in-app and email, including [user, client, actionSet, reason, start, expiry] And every permitted and denied action during the window is audit-logged with [timestamp, userId, clientId, action, result, IP/device, tokenId] And audit records are immutable, timestamped to the second, and exportable as CSV within the reporting module And supervisor acknowledgement events (if any) are captured and linked to the tokenId
Cross-Platform Consistency via API Gateway
Given the same BreakGlass token is used on mobile and web for client X When the caregiver performs an in-scope action Then both clients succeed with identical server-side authorization outcomes When the caregiver performs an out-of-scope action Then both clients receive 403 responses within 200 ms, enforced at the API gateway And gateway-side rate limiting and anomaly detection remain active and unaffected by BreakGlass And behavior is consistent under offline-to-online transitions (queued requests are denied if the window has expired)
Mandatory Reason Codes & Context Capture
"As a caregiver, I want to supply a clear reason and context when I use BreakGlass so that supervisors understand why access was necessary."
Description

Requires users to select an admin-configured reason code and provide concise free-text context before activating BreakGlass. Automatically captures client ID, visit ID, user ID, timestamp, GPS location, device ID, and optionally attaches a short voice note or photo for richer context (including optional IoT sensor snapshot if available). The form is mobile-first, accessible, and cannot be bypassed. All captured context is linked to the event ID and stored immutably for audit and reporting.
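Building the activation record can be sketched as one validating constructor: reason code and 10–300 non-whitespace characters of context are mandatory, while a missing GPS fix downgrades `location_status` instead of blocking activation. Field names mirror the criteria in this spec (eventId, location_status, millisecond UTC ISO-8601 timestamp), but the function itself is a hypothetical illustration.

```python
import uuid
from datetime import datetime, timezone
from typing import Optional

def build_breakglass_event(user_id: str, reason_code: str, context: str,
                           device_id: str, app_version: str,
                           gps_fix: Optional[dict],
                           client_id: Optional[str] = None,
                           visit_id: Optional[str] = None) -> dict:
    """Sketch of the context-capture payload; raises ValueError when
    mandatory fields fail validation (server returns HTTP 400)."""
    if not reason_code:
        raise ValueError("reason code is required")
    text = context.strip()
    if not (10 <= len("".join(text.split())) <= 300):  # non-whitespace chars
        raise ValueError("context must be 10-300 non-whitespace characters")
    event = {
        "eventId": str(uuid.uuid4()),                 # UUIDv4 per the criteria
        "userId": user_id,
        "clientId": client_id,
        "visitId": visit_id,
        "deviceId": device_id,
        "appVersion": app_version,
        "timestamp": datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
        "reasonCode": reason_code,
        "context": text,
    }
    if gps_fix is None:
        # GPS denied or timed out: activation proceeds, status records why.
        event["location_status"] = "unavailable"
    else:
        event.update(gps_fix, location_status="ok")
    return event
```

Attachments (voice note, photo, IoT snapshot) would be linked to the returned `eventId` afterward, each with its own `attachment_status`, so failed uploads never block activation.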

Acceptance Criteria
Reason Code is Required Before Activation
Given the BreakGlass form is opened When the user attempts to activate without selecting a reason code Then the Activate button remains disabled and an inline error "Reason code is required" is shown And when the user selects a reason code from the admin-configured active list Then the error clears and the Activate button becomes enabled And codes not in the active list are not displayed
Free-Text Context Length and Validity Enforcement
Given a reason code is selected When the user enters context text Then the system enforces 10–300 non-whitespace characters and trims leading/trailing spaces And entries consisting only of whitespace are rejected And a live character counter displays remaining characters When the context is outside limits Then activation is blocked and a clear validation message is shown
Automatic Capture of Required Metadata on Activation
Given the user taps Activate Then the system captures and stores: eventId (UUIDv4), userId, clientId (if session-bound), visitId (if available), deviceId, appVersion, and timestamp (UTC ISO 8601 to milliseconds) And the system requests GPS and stores latitude, longitude, accuracy, and location_status (ok|denied|timeout|unavailable) And if GPS permission is denied or not resolved within 5 seconds, activation proceeds and location_status reflects the reason And the event is not persisted unless at minimum eventId, userId, and timestamp are recorded
Optional Voice Note, Photo, and IoT Snapshot Attachments
Given the form is displayed When the user chooses to add a voice note Then recording is limited to 30 seconds and <=1.5 MB, with playback and delete options before activation When the user chooses to add a photo Then one photo (<=2 MB) can be captured or selected, previewed, or removed before activation And on activation, if an IoT sensor snapshot is available within 3 seconds, it is attached automatically and flagged as sensor_snapshot And attachments upload securely with retry; failed uploads do not block activation, and each item records attachment_status (uploaded|retrying|failed)
Event Linkage and Immutable Storage for Audit
Given an activation completes Then all captured fields and attachments are linked to the eventId And the record is write-once; values are read-only in UI and API And any correction creates an append-only audit entry capturing who, when, before, after, and reason And audit exports include eventId linkage and a cryptographic hash of the record
Mobile-First and Accessible Context Form
Given a mobile device width between 320 and 428 px Then the form renders without horizontal scroll, primary controls meet >=44x44 dp touch targets, and first interactive paint occurs within 2 seconds on 3G Fast And all inputs have accessible names, logical focus order, error messaging announced via ARIA live regions, and minimum 4.5:1 contrast And dynamic text scaling up to 200% maintains layout without overlap or hidden content And the form is fully operable via keyboard and common screen readers (TalkBack/VoiceOver)
Bypass Prevention with Client and Server Enforcement
Given any client attempts to activate BreakGlass without a selected reason code and valid context Then the server rejects the request with HTTP 400 and error_code "MISSING_REQUIRED_CONTEXT", creates no event, and logs the attempt And client-side deep links or intents cannot invoke activation unless validations pass; the Activate action is disabled until all required fields are valid And tampered payloads (e.g., invalid reason code or empty context) are rejected server-side with HTTP 400 and error_code "INVALID_CONTEXT"
Timeboxed Override with Auto-Lock and Extension Workflow
"As a caregiver, I want the override to expire automatically after a short time so that access is limited to the emergency period."
Description

Applies a short, admin-configurable default duration (e.g., 15 minutes) with a visible countdown banner and automatic re-lock when the window ends. Supports a controlled extension flow that requires re-confirmation and, optionally, supervisor approval beyond a maximum threshold. Ensures data re-masking on expiry, revokes tokens across devices, and handles offline scenarios with a local timer and queued server reconciliation. Prevents backgrounded sessions from silently retaining elevated access.
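The timebox-and-extension lifecycle above can be sketched as a small state holder: a default window, a countdown, extensions that only need re-confirmation while the running total stays within the maximum threshold, supervisor approval beyond it, and auto-lock once the remaining time hits zero. The 15-minute default and 30-minute threshold are the illustrative values from this spec; the class itself is hypothetical.

```python
class TimeboxedOverride:
    """Sketch of a timeboxed override window with extension rules."""

    def __init__(self, started_at: float, default_min: float = 15.0,
                 max_min: float = 30.0):
        self.started_at = started_at   # epoch seconds at activation
        self.total_min = default_min   # currently granted window
        self.max_min = max_min         # threshold above which approval is needed

    def remaining_min(self, now: float) -> float:
        """Minutes left on the countdown banner (never negative)."""
        return max(0.0, self.total_min - (now - self.started_at) / 60.0)

    def is_locked(self, now: float) -> bool:
        """True once the window ends: re-mask data, revoke tokens."""
        return self.remaining_min(now) == 0.0

    def request_extension(self, added_min: float, reconfirmed: bool,
                          supervisor_approved: bool = False) -> bool:
        """Grant only if the user re-confirmed, and either the new total
        stays within the max threshold or a supervisor approved going
        beyond it. Returns True when the window was extended."""
        if not reconfirmed:
            return False
        if self.total_min + added_min > self.max_min and not supervisor_approved:
            return False
        self.total_min += added_min
        return True
```

An offline device would drive `is_locked` from a local monotonic timer and reconcile the actual expiry with the server on reconnection, per the description above.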

Acceptance Criteria
Default Timebox with Countdown Banner
Given an admin-configured default override duration of 15 minutes When a user initiates a BreakGlass override successfully Then a persistent countdown banner displays the remaining time on all screens showing elevated data And the initial remaining time equals 15:00 ± 1 second And the countdown updates at least once per second And the banner remains visible until the override ends or is revoked
Auto-Lock and Re-Mask on Expiry
Given an active override with a visible countdown When the countdown reaches zero Then elevated access is revoked immediately And all previously unmasked sensitive fields re-mask within 2 seconds without user action And any API requests using the override token fail with an authorization error And the user must start a new override to regain elevated access And an audit event is recorded with the expiry timestamp
Cross-Device Token Revocation on Expiry
Given the same user has active override sessions on multiple devices or browsers When the override expires or is manually revoked Then all elevated-scope tokens are invalidated across devices within 5 seconds of server processing And subsequent requests from any session use non-elevated scopes And background synchronization tasks cannot fetch unmasked data after invalidation
Extension Within Threshold with Re-Confirmation
Given the default duration is 15 minutes and the max threshold is 30 minutes and supervisor approval is required only beyond the threshold And the user has 2 minutes remaining on an active override When the user requests an extension of 10 minutes Then the user is prompted to re-confirm the reason code and attest emergency necessity And the system grants the extension without supervisor approval because total time (25 minutes) is within the threshold And the countdown updates immediately to reflect the new remaining time And an audit entry records who extended, added minutes, timestamp, and reason code
Extension Beyond Threshold Requires Supervisor Approval
Given the default duration is 15 minutes and the max threshold is 30 minutes and supervisor approval is required beyond the threshold And the user has 1 minute remaining on an active override When the user requests an extension of 20 minutes Then the extension request enters a pending state and designated supervisors are notified instantly And no additional time is added until approval is received And if approval is received before expiry, the remaining time increases by the approved amount immediately And if approval is denied or approval arrives after expiry, elevated access auto-locks and the extension is not applied And all actions and outcomes are captured in the audit log
Offline Local Timer and Server Reconciliation
Given an active override and the device loses network connectivity When the client remains offline during the override period Then the local countdown continues using a monotonic clock And upon reaching zero locally, the client re-masks data and blocks elevated actions And all actions taken while offline are queued with override context And upon reconnect, the client reconciles with the server; the server records the override as expired and invalidates elevated tokens And audit entries for start, extension, and expiry are persisted once with correct ordering
No Elevated Access in Backgrounded Sessions Post-Expiry
Given the app session with an active override is backgrounded or the device is locked When the override expires while the app is not in the foreground Then on resume, sensitive data is masked before any screen is presented And any preloaded views refresh using non-elevated scope And background fetches using elevated scope stop at the moment of expiry And notifications do not include unmasked sensitive content
Instant Supervisor Notifications & Acknowledgment
"As a supervisor, I want instant alerts when BreakGlass is used so that I can monitor and support field staff and ensure compliance."
Description

Sends immediate notifications to assigned supervisors and on-call rotations via push, SMS, and/or email containing who triggered BreakGlass, client, scope, reason code, location, and duration. Includes secure deep links for real-time monitoring and a required acknowledgment workflow with configurable escalation if not acknowledged in time. Respects PHI minimality in transports and requires authentication to view details. Tracks delivery and read receipts for audit completeness.

Acceptance Criteria
Real-time Multi-Channel Notification Dispatch
Given a BreakGlass override is triggered by a caregiver for a client with at least one assigned supervisor and an on-call rotation configured When the override event is committed Then notifications are queued and dispatched to all targeted recipients via each of their enabled channels (push, SMS, email) within 5 seconds p95 and 15 seconds p99 And notifications are not sent for channels that the recipient has not verified or has disabled And each dispatch is recorded with eventId, recipientId, channel, queuedAt, sentAt, providerMessageId
Notification Payload Content and PHI Minimality
Given a notification payload is constructed for transport over push, SMS, or email When the payload is finalized Then the payload contains only: triggeringUserDisplayName, clientAlias (initials + last4 clientId), reasonCode, scopeLabel, approximateLocation (city + state), startTime, durationWindow, and secureDeepLink And the payload contains no clinical details (care plan, visit notes, diagnosis), full address, full client name, DOB, SSN, or free-text notes And if a disallowed field is present, sending is blocked, the event is logged as a PHI policy violation, and an internal alert is raised
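The PHI-minimality rule is naturally enforced as an allow-list: any key outside the approved transport fields blocks the send and raises a policy violation. A small illustrative check (field names are taken from the criterion above; the function name is hypothetical):

```python
# Only these fields may ever leave the system over push/SMS/email transports.
ALLOWED_FIELDS = {
    "triggeringUserDisplayName", "clientAlias", "reasonCode", "scopeLabel",
    "approximateLocation", "startTime", "durationWindow", "secureDeepLink",
}


def disallowed_payload_fields(payload: dict) -> list[str]:
    """Return any keys not on the allow-list; a non-empty result means
    sending must be blocked and a PHI policy violation logged."""
    return sorted(set(payload) - ALLOWED_FIELDS)
```

An allow-list is preferable to a deny-list here because a newly added clinical field is blocked by default rather than leaked by default.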
Secure Deep Link Authentication and Session Handling
Given a recipient taps the secure deep link in a notification When the recipient is not authenticated or the session is expired Then the recipient is prompted to sign in (including MFA if enabled) and, upon success within 2 minutes, is redirected to the BreakGlass monitoring view for the specific event And if the recipient lacks authorization to view the event, access is denied with no PHI displayed and the attempt is audited And the deep link token is single-use and expires 15 minutes after issuance or immediately after first successful use, whichever comes first
Required Supervisor Acknowledgment Workflow
Given a notified supervisor opens the BreakGlass monitoring view When the supervisor selects Acknowledge Then the system records acknowledgment with eventId, supervisorId, timestamp, and channel used, and sets event status to Acknowledged And all notified recipients receive an immediate Acknowledged update within 5 seconds p95 And subsequent acknowledgment attempts show "Already acknowledged by {name} at {timestamp}" and do not change the recorded acknowledgment
Configurable Escalation on Non-Acknowledgment
Given an escalation policy with tiers and per-tier timeouts is configured for the team When no acknowledgment is recorded before a tier's timeout elapses Then the next tier's recipients are notified via their enabled channels, and the escalation step is logged with timestamps and recipients And escalation halts immediately upon first acknowledgment or when the BreakGlass window ends, whichever occurs first And if all tiers are exhausted without acknowledgment, a final fail-safe group is notified and the event is marked Needs Review
Delivery and Read Receipt Tracking for Audit
Given notifications are dispatched for a BreakGlass event When delivery receipts are returned by providers (APNs/FCM, SMS carrier, SMTP) Then delivery statuses (Sent, Delivered, Failed) are captured per recipient-channel within 60 seconds p95 And read/open events are captured via secure deep link opens (per recipient), email open tracking (if enabled), and SMS link clicks, and correlated to the event And an audit record can be viewed and exported showing, per recipient, channel, sentAt, deliveredAt, readAt, and acknowledgment details
On-Call Rotation Targeting and Failover
Given current time and the client’s service line and region are known When a BreakGlass override is triggered Then the active on-call supervisors for the relevant team and timezone are resolved and added as notification targets in addition to assigned supervisors And recipients are de-duplicated across roles and channels And if no active on-call is found or all contact methods are unverified, the default escalation group is targeted within 5 seconds and the condition is logged
Watermarked UI and Artifact Tagging
"As a compliance officer, I want artifacts created during BreakGlass to be clearly marked so that audits can quickly distinguish exceptional access."
Description

Displays persistent, high-contrast visual indicators during active BreakGlass (e.g., red banner and screen watermark with user and timestamp) across mobile and web. Automatically tags all affected artifacts—visit notes, voice transcripts, route changes, GPS logs, uploads—with the BreakGlass event ID and includes visible watermarks on generated PDFs and exports. Tags propagate through APIs and reports for consistent downstream visibility without degrading accessibility or usability.

Acceptance Criteria
Persistent On-Screen BreakGlass Indicators (Mobile & Web)
Given BreakGlass is active for the logged-in user And the user opens any screen in the CarePulse mobile app or web app When they navigate between screens, rotate the device, or background/foreground the app Then a non-dismissible red banner labeled "BreakGlass Active" and a diagonal page watermark display the user's full name and current UTC timestamp And the timestamp refreshes at least every 60 seconds And indicators meet WCAG 2.1 AA contrast (≥ 4.5:1) and are announced by screen readers on screen entry And indicators do not overlap or block interactive controls or critical content at common breakpoints (xs–xl) And indicators are removed within 1 second after BreakGlass ends
Watermarked PDFs and Data Exports
Given a PDF, CSV, or printed report is generated for artifacts created/modified during a BreakGlass event or for the event itself When the export is produced Then every PDF page includes a 45° diagonal watermark with "BreakGlass Event {ID} | {User Name} | {Start–End UTC}" at 12–18% opacity And CSVs include the same text in a header row and file metadata And watermarks do not obscure underlying content (minimum 16 px padding from body content) And re-generated exports are byte-identical to prior runs for the same inputs except for file metadata timestamps
Comprehensive Artifact Tagging During BreakGlass
Given artifacts (visit notes, voice transcripts, route changes, GPS logs, file uploads) are created or modified while BreakGlass is active When the system persists these artifacts Then each record stores BreakGlassEventID, eventStartUtc, eventEndUtc, actorUserId, and reasonCode And tags are immutable after write (updates to artifact content do not alter tag fields) And artifacts created outside an active window have no BreakGlass tags And an audit log entry "breakglass.tag.applied" with artifactId, eventId, and timestamp is recorded
Tag Propagation in APIs, Reports, and Webhooks
Given clients retrieve tagged artifacts via REST/GraphQL APIs, reporting UI, or webhooks When responses are returned Then BreakGlass fields (eventId, startUtc, endUtc, actorUserId, reasonCode) are present and non-null for tagged records And reporting UI shows a "BreakGlass" column and filter that correctly filters to tagged/un-tagged records And downstream exports and report drill-throughs preserve BreakGlass fields And editing or copying a tagged artifact preserves its original BreakGlass tags
Offline Capture and Sync Integrity
Given a caregiver device is offline during an active BreakGlass window When the user creates or edits artifacts and the device later reconnects Then all artifacts created/edited within the window sync with the same BreakGlassEventID and UTC-normalized timestamps And no artifact created within the window is saved without a BreakGlass tag And conflict resolution retains the original eventId on the artifact version selected as final And UI indicators remain visible until the local event end time even while offline
Performance and Usability Non-Degradation
Given watermarking and tagging are active When users navigate, create artifacts, or generate exports Then p95 screen render time increases by < 100 ms vs. baseline without BreakGlass And PDF generation time increases by < 5% vs. baseline And API payload size increases by < 5 KB per tagged artifact And no more than one additional user action is introduced to complete core tasks And all affected screens remain operable via keyboard and screen reader with unchanged tab order and hit targets
Immutable Audit Logging & One-Click BreakGlass Report
"As an operations manager, I want a comprehensive, tamper-evident log and report of BreakGlass events so that we can pass audits and identify training needs."
Description

Captures an append-only, tamper-evident log (hash-chained) of every access and action taken under BreakGlass, including before/after states where applicable, actor, timestamps, client, device, IP, GPS, scope tags, and notification/ack events. Provides filters and exports (CSV/PDF) and a one-click, audit-ready narrative timeline report per event or date range. Supports retention policies, privacy controls, and a query API for BI tools to meet regulatory and customer audit needs.

Acceptance Criteria
Hash-Chained, Append-Only Log Integrity
Given a BreakGlass session is initiated or an action occurs under BreakGlass When a corresponding audit log entry is written Then the entry includes sequence_number, prev_hash, entry_hash (SHA-256) and event_id And any attempt to update or delete an existing entry is rejected and a tamper_attempt entry is appended with actor_id and timestamp And calling the Verify Log Integrity endpoint for the impacted date range returns status=valid and chain_length equals the number of entries And if any prior entry is altered in a test environment, Verify returns status=invalid and first_broken_sequence is reported
Complete Field Capture for BreakGlass Actions
Given an authorized user invokes BreakGlass on Client X, edits field Y from value A to value B on device D at location L When the action is saved Then the audit entry captures actor_id, actor_role, client_id, reason_code, session_id, action_type, action_ts (UTC ISO-8601 ms), device_id, ip_address, gps_lat, gps_lng, gps_accuracy, scope_tags, before_value=A, after_value=B And notification entries capture recipients, channels, dispatch_ts, delivery_status And acknowledgment entries capture ack_ts and ack_actor_id And unavailable fields are recorded as null with unavailable_reason And upon BreakGlass window expiry, an auto_lock entry is appended with lock_ts and scope_restored=true
One-Click Narrative Timeline Report (Per Event or Date Range)
Given a reviewer opens a BreakGlass event or selects a date range of BreakGlass activity When they click Generate Report Then an on-screen timeline and downloadable PDF are produced within 5 seconds for ≤500 entries (99th percentile) and within 30 seconds for ≤10k entries And the report includes header (tenant, report_id, generated_at, event_id or date_range, client_id), ordered entries with timestamp, actor, action, before/after where applicable, notifications, acknowledgments, auto-lock events, device/IP/GPS, and scope tags And the PDF and CSV are watermark-tagged with tenant and report_id, and digitally signed; the signature verifies via the public certificate endpoint And content respects role-based redaction rules and a redaction_notice is shown when applied And the report generation itself is logged with report_id and export_type
Filterable Audit Log and Accurate Exports
Given a reviewer applies filters by date_range, client_id, actor_id, reason_code, scope_tag, and action_type in the audit UI When they click Export CSV or Export PDF Then the exported file contains exactly the filtered records in the same sort order as displayed And all timestamps in exports are UTC ISO-8601 with millisecond precision and include a timezone column if converted in UI And column headers match the published data dictionary and include all required fields And filenames include tenant, export_type, and filter_summary; the export action is logged with export_id
Retention Policy Enforcement with Tamper-Evident Redaction
Given a tenant retention policy of N years and an active legal hold on Event E When the nightly retention job runs Then events under legal hold are excluded from redaction and the decision is logged with hold_id And events older than policy without legal hold have sensitive payload fields cryptographically redacted (keys destroyed) while minimal metadata (event_id, timestamps, hashes, tenant_id) remain And a redaction entry is appended for each affected record with policy_id and job_id And the Verify Log Integrity endpoint returns status=valid for the chain including redacted records
Privacy Controls and Role-Based Redaction
Given a supervisor without PHI permission views a BreakGlass report, export, or API response When the content is rendered or returned Then PHI fields (e.g., clinical notes, diagnosis, GPS precision > 2 decimals) are masked or generalized per policy, and a redaction_notice is displayed And an auditor with PHI permission sees full values for the same request And each redaction decision is logged with rule_id, actor_id, and timestamp; API and UI outputs are consistent
Query API for BI Tools
Given a BI client authenticates with OAuth2 scope audit.read When it requests /api/audit/events with filters (date_range, client_id, actor_id, reason_code, scope_tag) and pagination Then the API returns results within 2 seconds for ≤10,000 records (95th percentile) with server-side filtering applied And the response includes total_count, page_size, and next_page_token; field names and types match the OpenAPI schema And rate limits (120 req/min) and remaining quota are returned via standard headers; excess requests receive 429 with Retry-After And fields mirror those in CSV export for consistency
Admin Policy Configuration & MFA Confirmation
"As an admin, I want to define who can use BreakGlass, under what conditions, and with MFA so that emergency access is controlled and secure."
Description

Offers an admin console to configure BreakGlass policies: eligible roles, permissible scopes, default/maximum durations, required MFA methods, reason codes, notification channels, escalation rules, offline behavior, rate limits, cooldown between uses, and geofence override parameters. Enforces MFA at activation (e.g., push, OTP, or biometric) with fallbacks appropriate for field conditions. Supports environment-specific presets, versioned policy changes, and full audit of configuration edits.

Acceptance Criteria
Eligible Roles & Scoped Access Configuration
Given a policy lists eligible roles [Supervisor, RN] When a user with role Caregiver attempts to initiate BreakGlass Then the initiation is denied with 403 BG-ROLE-NOT-ELIGIBLE and an audit event is recorded
Given scopes Patient(Read-only) and Location(Assigned Only) When a permitted user initiates BreakGlass Then access is limited to the configured scopes; out-of-scope requests are blocked with 403 BG-SCOPE-DENIED and logged
Given the admin saves updated roles and scopes When the policy page is reloaded or GET /policies/breakglass is called Then the returned configuration exactly matches the saved values
Override Duration & Auto-Lock Enforcement
Given default duration is 10 minutes and max duration is 30 minutes When a user initiates BreakGlass without specifying duration Then the session auto-expires at 10 minutes ±5 seconds and locks access
Given a user requests a 45-minute duration When initiating BreakGlass Then the request is rejected with 422 BG-DURATION-EXCEEDED and the UI displays the 30-minute maximum
Given an active BreakGlass session When the configured maximum duration is reached Then the session is terminated, access is revoked within 5 seconds, and an audit + notification is emitted
MFA Methods & Fallbacks at Activation
Given MFA methods are configured as [Push, Biometric, OTP] with priority Push→Biometric→OTP When the user has network connectivity Then a push challenge is sent and activation succeeds only after approval within 60 seconds; otherwise it fails with 401 BG-MFA-TIMEOUT and is audited
Given push fails or the device is offline and fallback is allowed When activating BreakGlass Then biometric is prompted; if unavailable, OTP (SMS or TOTP) is accepted; the selected method and outcome are logged
Given MFA is required When no configured MFA method succeeds Then BreakGlass is not activated and the denial is logged with a correlation ID
Reason Codes & Notes Validation
Given admin-configured reason codes [Client Safety, Medication Access, Disaster Response, Other] with note required for Other When initiating BreakGlass Then the user must choose a reason code; selecting Other requires a 10–500 character note; otherwise activation is blocked with 422 BG-REASON-INVALID and audited
Given a valid reason code (and note when required) When activation succeeds Then the reason code and note are immutable for the session and included in audit and notifications
Notifications & Escalation Rules
Given notification channels Email/SMS/In-app and an escalation rule “escalate if unacknowledged in 5 minutes” When BreakGlass is activated Then supervisors receive notifications on all configured channels within 30 seconds containing user, patient, location, scope, duration, reason, and correlation ID
Given no supervisor acknowledges within 5 minutes When the escalation rule triggers Then the on-call administrator receives an escalated alert via all channels and the escalation is logged
Given acknowledgments are recorded When the BreakGlass session ends (expiry or manual end) Then a closure notification is sent with outcome, total duration, and an audit link
Offline Behavior, Rate Limits, Cooldown, and Geofence Overrides
Given offline behavior is set to “Allow with offline OTP and local audit queue” When the device is offline during activation Then activation requires offline OTP and creates a queued audit record synchronized within 2 minutes of reconnection
Given a rate limit of 2 activations per user per 24 hours and a cooldown of 30 minutes When a user exceeds the limit or attempts within cooldown Then activation is denied with 429 BG-RATE-LIMIT or 423 BG-COOLDOWN and is audited
Given a geofence override radius of 500 meters When a user attempts activation outside the permitted geofence without override entitlement Then activation is denied with 412 BG-GEOFENCE and the location is logged; when override entitlement is present, activation is allowed and flagged
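The geofence rule reduces to a great-circle distance check between the activation location and the permitted point. A sketch using the haversine formula, with the 500 m default radius mirroring the criterion above (function name is hypothetical):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius


def within_geofence(lat1: float, lon1: float,
                    lat2: float, lon2: float,
                    radius_m: float = 500.0) -> bool:
    """True if the two points are within radius_m of each other,
    using the haversine great-circle distance."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    distance_m = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    return distance_m <= radius_m
```

A deployed check would also weigh the reported GPS accuracy before denying with 412 BG-GEOFENCE, since a low-accuracy fix near the boundary is ambiguous.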
Environment Presets, Versioning, and Policy Change Audit
Given environment presets Dev, Staging, and Prod exist When the admin switches environment in the policy console Then policy values load from that environment’s preset and subsequent edits affect only that environment
Given policy versioning is enabled When the admin saves changes Then a new immutable version is created with version ID, editor, timestamp, before/after diff, and reason for change; reads show the active version
Given a regression is detected When the admin rolls back to a prior version Then that version becomes active without loss of history and a rollback audit event and notifications are issued

Redacted Reveal

Sensitive fields (addresses, SSNs, medications) are masked by default and revealed only with a press‑to‑peek that logs who saw what and why. Minimizes casual exposure during triage calls or crowded environments, while keeping critical details one tap away when clinically necessary. Boosts privacy without slowing down care.

Requirements

Default Sensitive Field Redaction
"As an operations manager, I want sensitive fields masked by default so that staff don’t accidentally expose PHI in busy environments."
Description

Mask predefined sensitive fields (e.g., patient addresses, SSNs, medication lists) across all CarePulse surfaces by default, including list views, detail screens, search results, notifications, exports, and voice-to-note transcripts. Implement consistent mask patterns and partial reveals (e.g., last 4 digits) where appropriate. Enforce server-side redaction to prevent over-the-wire exposure, with client rendering of masked placeholders. Provide a centrally managed, extensible catalog of sensitive fields and masking rules. Ensure negligible performance overhead and compatibility with mobile-first UX and web portal. Redaction state must persist per view and immediately reapply after navigation, inactivity, or app backgrounding.

Acceptance Criteria
Default Redaction Across All User Surfaces
Given predefined sensitive fields exist in the central catalog When a user views list views, detail screens, search results, notifications (push and in‑app), exports (CSV/PDF), and voice‑to‑note transcripts Then those fields render as masked placeholders per catalog rules and no unmasked characters are visible And no sensitive values appear in truncated previews, badges, or lock‑screen notification banners And exported files contain only masked representations for those fields by default And UI automation tests validate masking across each surface for SSN, address, phone, DOB, and medication list
Server‑Side Redaction Over‑the‑Wire Enforcement
Given any API, webhook, push, streaming, or export endpoint When responding with records containing catalog‑defined sensitive fields Then the payload values for those fields are server‑side masked or omitted with a redact metadata flag And raw sensitive strings do not appear in network payloads, logs, analytics, or crash reports And attempts to bypass via query params (e.g., fields, include, expand) still return masked values And contract tests verify masked values for SSN, address, and medication_list across REST and GraphQL endpoints
Consistent Mask Patterns and Partial Reveals
Given catalog rules define patterns and partial reveals for each sensitive field When SSN is rendered Then it displays as XXX‑XX‑1234 (only last 4 visible) When phone number is rendered Then it displays as (XXX) XXX‑1234 (only last 4 visible) When street address is rendered Then it displays as [Street Hidden], City, State, ZIP shown as ###** (last 2 masked) When medication list is rendered Then only the count is shown (e.g., “3 medications”) with names masked And the same patterns appear identically across mobile and web, lists and details And unit tests verify 100% pattern match for all fields in the catalog
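The mask patterns above are deterministic string transforms, which is what makes the "100% pattern match" unit tests feasible. A sketch of three of them (function names are hypothetical; ASCII hyphens stand in for the spec's typographic ones):

```python
import re


def mask_ssn(ssn: str) -> str:
    """Render an SSN as XXX-XX-1234 (only last 4 digits visible)."""
    digits = re.sub(r"\D", "", ssn)
    return f"XXX-XX-{digits[-4:]}"


def mask_phone(phone: str) -> str:
    """Render a phone number as (XXX) XXX-1234 (only last 4 visible)."""
    digits = re.sub(r"\D", "", phone)
    return f"(XXX) XXX-{digits[-4:]}"


def mask_medications(meds: list[str]) -> str:
    """Show only the count, never medication names."""
    n = len(meds)
    return f"{n} medication" + ("" if n == 1 else "s")
```

Keeping these transforms in the shared catalog (rather than per-surface code) is what guarantees identical rendering across mobile and web, lists and details.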
Redaction State Persistence and Auto‑Reapply
Given a view containing sensitive fields When the view first renders, or the user navigates to it, returns to it from another screen, the app resumes from background, or the user is inactive for 30 seconds Then all sensitive fields are immediately masked per catalog rules And any previously unmasked state within that view is discarded And mask reapplication occurs without visible jank (no more than 1 dropped frame) and without changing scroll position
Central Catalog Distribution and Extensibility
Given an admin updates the central redaction catalog (add/edit/remove field or pattern) When the change is published Then web clients apply the new rules within 1 minute and mobile clients within 5 minutes, without app redeploy And offline mobile clients use the last cached rules and apply updates on next reconnect And adding a new field key (e.g., emergency_contact_phone) results in masked rendering across all surfaces without code changes And catalog versions are logged and reported in client diagnostics
Performance Overhead Boundaries
Given redaction is enabled globally When loading patient list and patient detail screens on mid‑range devices (e.g., iPhone SE 2022, Pixel 6a) Then p50 render time increases by ≤ 50 ms and p95 by ≤ 150 ms versus baseline without redaction And API p95 latency increases by ≤ 20 ms on endpoints with masked fields And scroll performance remains ≥ 55 FPS during data‑bound list scrolls And app bundle size increase attributable to redaction is ≤ 50 KB
Search, Autocomplete, and Highlighting Respect Redaction
Given global search or in‑view search is used When results, suggestions, or highlights are displayed Then any sensitive field values are masked per catalog and never shown in snippets, suggestions, or highlights And entering an exact sensitive value does not echo the value back in UI; only masked representations are shown And search indexing and analytics pipelines do not store raw sensitive tokens; tests confirm only masked or hashed tokens exist
Press-to-Peek Reveal Interaction
"As a caregiver, I want to briefly reveal a patient’s address with a press-to-peek so that I can confirm directions without leaving sensitive data visible."
Description

Provide an accessible press-and-hold/tap-to-peek interaction that temporarily reveals a masked field for a policy-defined duration (e.g., 2–10 seconds), then auto-remasks. Support both touch and keyboard interactions, with clear visual affordances, optional haptic feedback, and immediate re-mask on release, navigation, or app backgrounding. Prevent text selection/copy while revealed and block long-lived persistence (e.g., no caching in screenshots or app switcher). Allow consecutive reveals of multiple fields with independent timers and a manual “hide now” action.

Acceptance Criteria
Touch Press-and-Hold Reveal with Auto-Remask
Given a masked sensitive field on a touch-capable device, when the user presses and holds the reveal affordance, then the field reveals within 150 ms and remains visible only while the touch is held, not exceeding the configured duration (2–10 s).
Given a reveal is active via press-and-hold, when the user lifts their finger, then the field re-masks within 100 ms.
Given the user continues holding beyond the configured duration, when the timer elapses, then the field auto-remasks and provides a brief visual cue and optional haptic tick if enabled.
Tap-to-Peek Timed Reveal and Manual Hide
Given a masked sensitive field with tap-to-peek enabled, when the user taps the reveal affordance, then the field reveals and a visible countdown appears for the configured duration (2–10 s).
Given a timed reveal is active, when the duration expires, then the field auto-remasks and the countdown disappears.
Given a timed reveal is active, when the user invokes Hide Now, presses Esc, or taps outside the field, then the field re-masks immediately and the timer stops.
Given multiple sensitive fields are present, when the user reveals more than one field via tap, then each field runs an independent timer and Hide Now affects only the current field.
Keyboard and Screen Reader Accessible Reveal
Given keyboard navigation, when focus is on a masked field and the user presses and holds Space or Enter, then the field reveals for the duration of keydown (capped at the configured duration) and re-masks on keyup.
Given keyboard navigation, when the user presses Enter quickly, then a timed reveal starts and can be canceled with Esc to re-mask immediately.
Given a screen reader is active, when focus moves to a masked field, then the control announces that the field is sensitive and masked with instruction to reveal for N seconds; upon reveal, it announces that content is revealed with X seconds remaining via a polite live region; the sensitive text is only voiced while revealed.
Given a reveal is active, when focus moves away to another control, then the field re-masks immediately.
Immediate Remask on Release or Context Change
Given any reveal is active, when the app is backgrounded, the app switcher is opened, the device is locked, or a route/screen navigation begins, then all revealed fields re-mask before the transition and no revealed content appears in previews or transition frames.
Given any reveal is active, when a modal, sheet, or system dialog opens or when the device rotates, then all revealed fields re-mask immediately.
Given a reveal is active via press-and-hold, when the touch ends due to gesture recognition (e.g., scroll begins), then the field re-masks immediately.
No Selection, Copy, or Share While Revealed
Given a field is revealed, when the user attempts text selection via tap-hold/drag or keyboard selection, then no selection handles appear and no selection is created. Given a field is revealed, when the user attempts to copy, share, or invoke a context menu, then the menu is suppressed for that field and the system clipboard remains unchanged. Given a field is revealed, when accessibility features that copy or read selected text are invoked, then the sensitive text is not selectable or copyable and is not exposed to the clipboard or share targets.
No Persistence in Screenshots or App Switcher
Given any reveal is active, when the user captures a screenshot, then the captured image shows the field masked (no sensitive text visible). Given any reveal is active, when the user opens the app switcher or multitasking thumbnails, then only masked content is visible in thumbnails and previews. Given a reveal has ended, when recent screenshots, photo previews, and app switcher thumbnails are reviewed, then no sensitive text from the reveal is present.
Visual Affordances, Countdown, and Haptic Settings
Given a masked field is displayed, when the user views or focuses it, then a consistent reveal affordance (e.g., eye/peek icon with tooltip/hint) is visible and the tap target size is at least 44×44 pt. Given a reveal starts, when a policy duration is configured, then a countdown indicator (numeric seconds or radial) is displayed and updates at 1 s intervals until re-mask. Given haptic feedback is enabled in app settings and supported by the device, when a reveal starts and ends, then a short haptic pulse occurs; when disabled or unsupported, no haptic occurs.
Access Justification Prompt & Break-Glass
"As a clinician, I want to select a justification when peeking at medications so that my access is documented for compliance."
Description

Capture a justification for reveals via a lightweight prompt using admin-configured quick reasons plus optional free text. Support configurable frequency (every reveal, first reveal per record, or once per session/context) and enforce minimum reason requirements per policy. Include an emergency “break-glass” flow with explicit confirmation and stronger logging. Localize reason lists, prefetch for low latency, and validate input offline with later sync. Associate each justification with its corresponding reveal events for compliance traceability.
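The three frequency modes described above reduce to a small decision function. The sketch below is illustrative; the mode names and `PromptState` shape are assumptions for this example, not CarePulse's actual schema.

```typescript
type FrequencyMode = "everyReveal" | "firstRevealPerRecord" | "oncePerContext";

interface PromptState {
  justifiedRecords: Set<string>;  // record IDs already justified in the window
  justifiedContexts: Set<string>; // session/context IDs already justified
}

// Decide whether the justification prompt must be shown for this reveal.
function shouldPrompt(
  mode: FrequencyMode,
  recordId: string,
  contextId: string,
  state: PromptState,
): boolean {
  switch (mode) {
    case "everyReveal":
      return true; // a fresh justification for each reveal
    case "firstRevealPerRecord":
      return !state.justifiedRecords.has(recordId);
    case "oncePerContext":
      return !state.justifiedContexts.has(contextId);
  }
}

// After a justification is captured, mark both scopes as satisfied.
function recordJustification(recordId: string, contextId: string, state: PromptState): void {
  state.justifiedRecords.add(recordId);
  state.justifiedContexts.add(contextId);
}
```

Note that break-glass reveals would bypass `shouldPrompt` entirely, since the spec requires them to prompt regardless of frequency mode.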

Acceptance Criteria
Quick Reason Selection with Optional Free Text
Given a user attempts to reveal a masked sensitive field and the policy requires at least one reason When the justification prompt appears Then admin-configured quick reasons are displayed and free-text entry is available (max 500 characters) And the Confirm action remains disabled until the minimum reason requirement is met per policy And on Confirm the reveal proceeds without extra taps And an audit entry is created containing userId, patientId, recordId, fieldType, timestamp, selected reason codes, freeText (if any), policyVersion, locale, sessionId, justificationId, and revealEventId
Frequency: Every Reveal
Given frequency mode is configured as Every Reveal And the user initiates multiple reveal actions across one or more records in a session When each reveal is initiated Then the justification prompt is shown for each reveal And each reveal generates a distinct justificationId and audit entry linked to its revealEventId And p95 time from tap to prompt display is <= 200 ms using prefetched data
Frequency: First Reveal per Record
Given frequency mode is configured as First Reveal per Record with a policy-defined recurrence window And the user is viewing Record A within that window When the user performs the first reveal in Record A Then the justification prompt is shown and a justification is captured And subsequent reveals of masked fields in Record A during the window do not show the prompt And those reveal events are linked to the initial justificationId in the audit log
Frequency: Once per Session/Context
Given frequency mode is configured as Once per Session/Context with a defined context identifier (e.g., sessionId or careContextId) When the first reveal occurs within the current context Then the justification prompt is shown and a justification is captured And subsequent reveals within the same context do not show the prompt And all reveal events in the context link to the same justificationId And starting a new context requires a new justification on first reveal
Emergency Break-Glass Flow
Given the justification prompt is shown and an emergent need exists When the user selects Break Glass Then a two-step confirmation with explicit warning text and an acknowledgement checkbox is required And a free-text justification is mandatory with a configurable minimum length (e.g., >= 20 characters) And the reveal proceeds regardless of frequency mode And the audit entry is flagged breakGlass=true and includes confirmationAcknowledged=true, freeText, selected reason codes (if any), deviceId, sessionId, and policyVersion And break-glass events are distinctly tagged for compliance reporting and are never deduplicated with standard justifications
Localization and Fallback of Reason Lists
Given the user’s active locale is determined When the justification prompt is displayed Then quick reason labels and helper text render in the user’s locale And if a reason lacks a translation, a default locale label is shown without blocking the reveal And the audit entry stores both selected reason codes and the display locale used
Offline Capture, Prefetch, and Deferred Sync
Given the device is offline and localized reason lists have been prefetched and cached When the user attempts to reveal a masked field Then the prompt opens using cached data with p95 display time <= 200 ms And client-side validation enforces minimum reasons and any required free text (including break-glass requirements) And on Confirm the reveal proceeds and an audit record with justificationId and revealEventId is persisted locally for later sync And upon reconnection the record syncs within 30 seconds with idempotent server-side upsert keyed by justificationId; failures are retried and surfaced to the user
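The idempotent server-side upsert keyed by justificationId, which makes offline retries safe, can be sketched as follows. The `JustificationStore` and record shape are hypothetical illustrations, not the real persistence layer.

```typescript
interface JustificationRecord {
  justificationId: string; // client-generated, the idempotency key
  revealEventId: string;
  reasonCodes: string[];
  freeText?: string;
}

// Hypothetical server-side store: retries of the same sync never duplicate rows.
class JustificationStore {
  private rows = new Map<string, JustificationRecord>();

  // Returns true if a new row was created, false if the retry was a no-op.
  upsert(rec: JustificationRecord): boolean {
    if (this.rows.has(rec.justificationId)) return false; // idempotent: drop duplicate
    this.rows.set(rec.justificationId, rec);
    return true;
  }

  count(): number {
    return this.rows.size;
  }
}
```

Because the client generates the justificationId before going offline, a sync that times out after the server commit can simply be retried without risking double-counted audit entries.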
Tamper-Evident Audit Logging & Reporting
"As a compliance officer, I want detailed, tamper-evident logs of who viewed what and why so that I can produce audit-ready reports on demand."
Description

Record each reveal event with immutable, append-only logs capturing user, role, org, patient/record IDs, field name, timestamp, duration of visibility, geolocation (if permitted), device info, IP, session ID, call state (e.g., triage), active policy version, and the justification provided. Chain log entries with hashes for tamper evidence and apply time-based retention. Expose filters and one-click export that integrates with CarePulse’s existing audit-ready compliance reports (CSV, NDJSON, and API). Provide anomaly alerts (e.g., excessive reveals per user or outside shift) and redact logs for tenant-to-tenant isolation.
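The hash-chaining for tamper evidence can be sketched in a few lines: each entry carries the hash of its predecessor, so altering any stored byte breaks verification from that index onward. This is a minimal illustration (the entry shape and `GENESIS` sentinel are assumptions), not the production log format.

```typescript
import { createHash } from "node:crypto";

interface LogEntry {
  payload: string;      // serialized event fields (user, field, timestamp, ...)
  previousHash: string; // hash of the prior entry, or GENESIS for the first
  hash: string;         // hash over previousHash + payload
}

const GENESIS = "0".repeat(64);

function sha256(s: string): string {
  return createHash("sha256").update(s).digest("hex");
}

// Append-only: a new entry extends the terminal hash, never rewrites history.
function append(chain: LogEntry[], payload: string): void {
  const previousHash = chain.length ? chain[chain.length - 1].hash : GENESIS;
  chain.push({ payload, previousHash, hash: sha256(previousHash + payload) });
}

// Returns -1 if the chain is valid, else the index of the first broken entry.
function verify(chain: LogEntry[]): number {
  for (let i = 0; i < chain.length; i++) {
    const expectedPrev = i === 0 ? GENESIS : chain[i - 1].hash;
    const e = chain[i];
    if (e.previousHash !== expectedPrev) return i;
    if (e.hash !== sha256(e.previousHash + e.payload)) return i;
  }
  return -1;
}
```

Reporting the first broken index, rather than just a boolean, matches the verification behavior required by the acceptance criteria below.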

Acceptance Criteria
Log Entry Completeness on Press-to-Peek Reveal
Given an authenticated user (userId=u1, role=Caregiver) in org org1 viewing patient p1 record r1 with geolocation permission enabled and an active triage call When the user presses-to-peek the masked field "home_address" for 4.2 seconds from device d1 (model/os/appVersion), IP 203.0.113.10, session s1, under policy version pv-12 and provides justification "Dispatch verification" Then an append-only audit log entry is created and available via the audit API/UI within 5 seconds And the entry contains: userId=u1, role=Caregiver, orgId=org1, patientId=p1, recordId=r1, fieldName=home_address, timestamp t within 200ms of press start, visibilityDuration 4.2s ±200ms, geolocation (lat,long,accuracy), deviceInfo (model, os, appVersion), ipAddress=203.0.113.10, sessionId=s1, callState=triage, policyVersion=pv-12, justification="Dispatch verification", hash, previousHash And when geolocation permission is disabled for the session, the geolocation field is null and reason="not_permitted" is recorded
Hash-Chained Append-Only Immutability Verification
Given a tenant t1 with an existing sequence of N>=3 reveal log entries E1..EN stored When the system computes hashChainVerify(t1) Then for every i in 2..N, E[i].previousHash equals hash(E[i-1]) and the verification result status is "valid" And attempts to update or delete any existing log entry via API are rejected with 405/403 and no stored values are changed And only new entries can be added via append; a new append updates the terminal hash without altering existing entries And if any byte of E[k] is altered in storage (test harness), hashChainVerify(t1) returns "invalid" and reports the first broken index k
Time-Based Retention Enforcement per Tenant
Given tenant t1 has a configured retention window of 365 days And a log entry L created at time t0 that is now older than 365 days When the retention job runs Then L is no longer retrievable via UI or API within 24 hours of crossing the retention threshold And exports and counts exclude L while newer entries remain accessible And new log entries after a policy change to 180 days retain their own policyVersion while retention applies prospectively from the effective date per tenant
Filtered Search and One-Click Export to CSV/NDJSON/API
Given reveal logs exist across multiple users, roles, patients, fields, call states, policy versions, IPs, and devices in tenant t1 When a compliance officer applies filters: dateRange=[2025-08-01,2025-08-31], userId=u7, role=Caregiver, patientId=p9, fieldName=medications, callState=triage, policyVersion=pv-12 Then the results set contains only entries that match all filters and is sortable by timestamp And clicking "Export" produces artifacts within 30 seconds containing exactly the filtered rows in both CSV and NDJSON formats with correct column/field headers And the Audit API endpoint returns an NDJSON stream for the same filters with Content-Type application/x-ndjson and stable ordering And the export can be attached to an existing compliance report with a unique artifact ID and is downloadable by authorized users in tenant t1
Anomaly Alerts for Excessive or Out-of-Shift Reveals
Given anomaly rules are configured for tenant t1: ruleA=excessive_reveals_per_user threshold=20 in 60 minutes, ruleB=outside_shift And user u3 performs 22 reveal events within 60 minutes, with the last 2 outside their scheduled shift When the anomaly detector runs Then two alert records are created: one for ruleA and one for ruleB, each containing userId, orgId, ruleId, count, timeframe, and first/last event timestamps And the alerts are visible in the audit alerts UI and retrievable via the Alerts API within 60 seconds of rule breach And a notification event is emitted to configured channels (e.g., webhook) at least once and includes a linkable reference to the underlying reveal logs And subsequent events within the next 30 minutes do not create duplicate alerts for the same rule/user (deduplicated), but update the alert's counters
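The excessive-reveals rule above is a sliding-window count. A minimal sketch, assuming epoch-millisecond timestamps and the hypothetical function name below:

```typescript
// True when the number of reveals in the trailing window exceeds the threshold
// (e.g. threshold=20 in 60 minutes: 22 events trigger, 20 do not).
function exceedsThreshold(
  revealTimes: number[], // timestamps (ms) of one user's reveal events
  windowMs: number,
  threshold: number,
  now: number,
): boolean {
  const inWindow = revealTimes.filter((t) => t > now - windowMs && t <= now);
  return inWindow.length > threshold;
}
```

The deduplication requirement (no duplicate alerts for 30 minutes, counters updated instead) would sit one layer above this check, keyed by rule and user.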
Tenant Isolation and Redaction in Audit Logs
Given two tenants tA and tB with users a1 (tA) and b1 (tB) and reveal logs present in both When user a1 queries the audit logs with no tenant parameter Then only logs with tenantId=tA are returned and any cross-tenant identifiers are omitted or redacted And when user a1 requests a specific entry belonging to tB by id, the system returns 404/403 and does not reveal whether the id exists And exporting from tA produces artifacts that contain only tA data; cross-tenant leakage is zero rows, verified by attempting to match patientIds/orgIds from tB And the API and UI consistently include tenantId in responses for platform admins, and filtering by tenantId is required for cross-tenant roles
Accurate Visibility Duration and Multi-Reveal Handling
Given a user u5 reveals the same field "medications" on record r3 three times in one session with measured hold durations 1.2s, 0.4s, and 5.3s, and backgrounds the app during the third reveal When the events complete Then three distinct log entries exist, each with the correct start timestamp and visibilityDuration within ±200ms of the measured holds And the third entry's visibilityDuration stops at the time the app is backgrounded and the field is re-masked And releasing and re-pressing creates a new entry; durations are not aggregated across entries And sessionId is the same across all three entries; a new sessionId is used if the app restarts
Role- and Context-Based Reveal Policy Engine
"As an admin, I want to restrict SSN reveals to office staff and require re-auth for field staff so that access aligns with least-privilege principles."
Description

Implement an admin-managed policy engine defining which roles can reveal which fields under which contexts (e.g., during active visit, on triage call, off-shift, geofenced office vs. public). Policies can enforce re-auth, maximum reveal duration, justification requirements, rate limiting, and denial conditions. Evaluate policies client-side for UX responsiveness with server-side authority and real-time updates. Provide versioning, rollback, and default policy templates aligned with least-privilege. Include a test sandbox to simulate policy outcomes before deployment.
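A single policy decision, combining shift state, re-auth freshness, and rate limiting as described above, can be sketched as a pure function. The `Context` and `Rule` shapes and the decision names are assumptions for illustration only.

```typescript
type Decision = "allow" | "deny" | "requireReAuth";

interface Context {
  onShift: boolean;
  lastReAuthAgeMin: number; // minutes since the user last re-authenticated
  revealsInWindow: number;  // reveals already used in the rate-limit window
}

interface Rule {
  allowOffShift: boolean;
  reAuthWindowMin: number; // re-auth must be newer than this
  rateLimit: number;       // max reveals per window
}

// Denial conditions are checked before the re-auth requirement, so a denied
// user is never prompted to authenticate for data they cannot see.
function evaluate(rule: Rule, ctx: Context): Decision {
  if (!ctx.onShift && !rule.allowOffShift) return "deny";   // e.g. POLICY_DENIED_OFF_SHIFT
  if (ctx.revealsInWindow >= rule.rateLimit) return "deny"; // e.g. TOO_MANY_REVEALS
  if (ctx.lastReAuthAgeMin > rule.reAuthWindowMin) return "requireReAuth";
  return "allow";
}
```

The same function could run client-side for the sub-100 ms UX budget while the server re-evaluates it authoritatively and revokes on conflict.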

Acceptance Criteria
Active Visit: Nurse Medication Peek with Re-Auth and Auto-Remask
Given a user with role RN is assigned to the active visit of the patient and the medication field is masked And the policy requires biometric re-auth within the last 5 minutes and sets max reveal duration to 30 seconds When the user press-to-peeks the Medications field Then if re-auth is valid, the field reveals within 200 ms and a 30-second countdown is shown And when the countdown ends or the touch is released, the field auto-remasks within 200 ms And an audit log record is created server-side within 2 seconds including userId, role, patientId, field, context, timestamp, policyVersion, and revealDuration And if re-auth is expired, the user is prompted to re-authenticate and no data is revealed until success; failed attempts are logged as denied without exposure
Off-Shift: Deny SSN Reveal with Explicit Audit
Given a user is off-shift per schedule and attempts to reveal SSN When press-to-peek is initiated Then the request is denied with error POLICY_DENIED_OFF_SHIFT and no SSN data is transmitted to the client And the UI remains masked and displays a non-intrusive denial banner within 500 ms And an audit log entry is recorded within 2 seconds with outcome=Denied, reason=OffShift, zeroBytesExposed=true
Triage Call: Justification-Required Peek with Rate Limiting
Given the user is on a triage call context and attempts to reveal Medications And the policy requires a justification reason code and free-text justification of at least 10 characters And rate limiting is set to 3 reveals per 10 minutes per user per patient per field When the user provides a valid reason code and >=10 character justification and passes re-auth if idle >10 minutes Then the reveal proceeds and is auto-remasked per policy duration And if justification is missing or too short, the Reveal action remains disabled and no data is shown And if the rate limit is exceeded, the reveal is denied with error TOO_MANY_REVEALS and blocked for the remainder of the window And audit logs include reasonCode, justificationText (hashed), and rateLimitWindowId
Geofenced Office vs Public: Context-Specific Address Reveal Rules
Given device location is within the configured office geofence and connected to an approved SSID And the user role is Scheduler and the policy allows address reveal without re-auth in-office with max duration 60 seconds When the user press-to-peeks the Address field Then the Address reveals within 200 ms for up to 60 seconds and auto-remasks thereafter And when outside the geofence or on an unapproved network, the same action requires re-auth and reduces max duration to 15 seconds And when device posture is insecure (e.g., jailbreak detected), the reveal is denied with error INSECURE_DEVICE And all decisions record the evaluated context signals (geoState, networkId, devicePosture) in the audit log
Client-Side Evaluation with Server Authority and Real-Time Sync
Given policy version X is cached locally and subscribed to server updates When a reveal action is initiated Then the allow/deny decision is computed client-side within 100 ms using version X and applied immediately And server authoritative verification is sent asynchronously; if a conflict is returned, the field re-masks within 1 second and further reveals are blocked until resync And when an admin publishes version X+1, clients receive and apply the update within 10 seconds (push) or next 30-second poll, whichever is sooner And the active policy version and ETag are displayed in diagnostics and included in audit records
Policy Templates, Versioning, and Rollback Management
Given an admin opens Policy Manager When the admin applies the Least-Privilege template and publishes Then a new policy version is created with an immutable versionId and timestamp, and becomes active immediately And the system retains at least the last 50 versions with diffs And when the admin performs a rollback to a prior version, the exact prior rules are restored and propagated to clients within 10 seconds And all publish and rollback actions are audit-logged with adminId, rationale, previousVersionId, newVersionId
Policy Sandbox: Simulate Decisions Pre-Deployment
Given a draft policy version is created in Sandbox mode When an admin inputs role, field, context signals (visit state, shift status, geofence, network, device posture), and optional justification Then the system returns a deterministic decision (Allow, Deny, RequireReAuth) with constraints (maxDurationSec, reAuthWindowMin, rateLimit) and a human-readable evaluation trace And no production audit logs or data access occur during simulation And the admin can export the simulation results and trace as JSON and share a permalink valid for at least 7 days
Screenshot/Recording Guard & Secure Display
"As a caregiver, I want the app to prevent or mitigate screenshots when sensitive data is visible so that I don’t accidentally capture PHI."
Description

Activate platform-level secure display protections during revealed states: set FLAG_SECURE on Android; on iOS, which has no direct FLAG_SECURE equivalent, obscure content with an overlay before system snapshots are taken and monitor UIScreen.isCaptured for recording or mirroring; and blur content in app switchers. On web, provide watermarks and visibility-change detection to auto-remask and warn users; degrade gracefully if browser restrictions apply. Obfuscate sensitive values in push notifications and widgets. Detect and log screenshot/recording attempts where detectable, and ensure WCAG-compliant contrast and screen-reader labels that avoid speaking masked content unless explicitly revealed.

Acceptance Criteria
Android Secure Display During Reveal
Given an Android device and a masked sensitive field When the user presses-and-holds to reveal Then the hosting window sets FLAG_SECURE within 100ms and remains set while any sensitive field is revealed And the app switcher preview shows a blank/blurred thumbnail while FLAG_SECURE is set And attempts to screenshot or screen-record produce a black/blank result And when all sensitive fields are remasked or the view is closed, FLAG_SECURE is cleared within 300ms.
iOS Secure Display and App Switcher Obfuscation
Given an iOS device and a revealed sensitive field When the app resigns active or the app switcher is shown Then an obscuring overlay hides sensitive content from system snapshots And no sensitive pixels or text appear in app switcher thumbnails When screen capture/mirroring is detected (isCaptured == true) Then sensitive content auto-remasks within 200ms and a warning banner is displayed And when capture stops, content remains masked until the user explicitly re-reveals.
Web Auto-Remask on Visibility Change and Warning
Given a web client with a revealed sensitive field When document.visibilityState changes to hidden or the tab/window loses focus Then the field auto-remasks within 200ms And on return to the page, a non-blocking warning toast informs the user that content was remasked and requires explicit re-reveal And the value is not present in the DOM, accessibility tree, or clipboard history while masked When the Page Visibility API is unavailable Then display a persistent "Secure Display Limited" banner upon reveal and start a 30-second auto-remask timer.
Web Watermarking of Sensitive Views
Given a revealed sensitive field on web When the content is visible Then a diagonal repeating watermark overlay includes the user identifier, organization, timestamp, and partial IP, rendered at <=10% opacity across the viewport and on printed pages And the watermark updates every 60 seconds and on resize/orientation change And the watermark is aria-hidden and non-selectable, and does not interfere with interaction When CSS features for watermarking are unsupported Then fall back to a text-only overlay banner indicating the user and timestamp.
Push Notifications and Widgets Obfuscation
Given a notification template that could include a sensitive value When a push notification is sent Then the title/body contain masked placeholders (e.g., ••••) and omit raw sensitive values and detailed PHI, using minimum necessary context And notification expand actions and quick replies do not reveal raw values Given a platform widget surface containing sensitive fields When rendered Then fields are masked by default and cannot be revealed in-widget; tapping opens the app to the corresponding screen after auth And audit logs record notification/widget delivery without storing raw sensitive values.
Screenshot/Recording Detection and Audit Logging
Given a platform event indicating potential capture When iOS isCaptured becomes true while sensitive content is revealed Then auto-remask within 200ms and emit an audit event CAPTURE_DETECTED with userId, timestamp, device, patientId/recordId, field identifiers, and actionTaken=auto_remask When a web beforeprint event fires while sensitive content is revealed Then remask before printing, overlay a "Sensitive data masked" banner on print, and emit PRINT_ATTEMPT in the audit log When Android secure flag is active during reveal Then emit SECURE_FLAG_ACTIVE context in the audit trail for the session And all audit events are queryable in the admin report within 1 minute and contain no raw sensitive values.
Accessibility: WCAG Contrast and Screen Reader Behavior
Given masked sensitive content When accessed via a screen reader Then the accessible name/description indicates "Sensitive value masked. Press and hold to reveal." and the underlying value is excluded from the accessibility tree When the user explicitly triggers reveal via an accessible action Then the value is spoken only once with aria-live=polite and never read automatically on focus And auto-remask announcements use an aria-live region and do not shift focus And masking badges, warnings, and controls meet WCAG 2.2 AA contrast (text >= 4.5:1, icons >= 3:1) and are fully keyboard operable (including Esc to cancel reveal) And user preferences for reduced motion/high contrast are respected.
Offline/Low-Connectivity Reveal Controls
"As a rural caregiver, I want to peek at a patient’s medication list when offline so that I can continue care without compromising security."
Description

Allow controlled reveals when offline by requiring recent re-auth and device passcode/biometric, storing minimal field data in an encrypted, ephemeral cache with short TTL and wipe-on-exit. Queue audit logs and justifications for secure sync with retries and conflict handling. Respect policy flags that disallow offline reveals for certain fields. Provide clear, actionable messaging when a reveal is blocked due to policy or connectivity and offer a break-glass path if enabled by admins.
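The ephemeral cache semantics above (short TTL capped at five minutes, wipe-on-exit) can be sketched as a small container. Encryption and keystore binding are deliberately out of scope here; the `EphemeralCache` name and shape are assumptions for illustration.

```typescript
// Hypothetical ephemeral cache: entries expire on read past their TTL,
// and wipe() clears everything on backgrounding, lock, logout, or force-close.
class EphemeralCache {
  private entries = new Map<string, { value: string; expiresAt: number }>();
  private ttlMs: number;

  constructor(ttlMs: number = 5 * 60_000) {
    this.ttlMs = Math.min(ttlMs, 5 * 60_000); // spec: TTL of no more than 5 minutes
  }

  put(fieldId: string, value: string, now: number): void {
    this.entries.set(fieldId, { value, expiresAt: now + this.ttlMs });
  }

  get(fieldId: string, now: number): string | undefined {
    const e = this.entries.get(fieldId);
    if (!e) return undefined;
    if (now >= e.expiresAt) {
      this.entries.delete(fieldId); // TTL expiry wipes the entry
      return undefined;
    }
    return e.value;
  }

  wipe(): void {
    this.entries.clear();
  }
}
```

A production implementation would additionally encrypt values with a keystore-bound key and zeroize decrypted buffers, as the acceptance criteria below require.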

Acceptance Criteria
Offline Peek with Recent Re-Auth and Biometric Gate
Given the device is offline or has <20% successful requests over the last 30 seconds When a signed-in user attempts to reveal a sensitive field Then the app requires the user has re-authenticated to CarePulse within the last 15 minutes And the app requires a successful device-level biometric or passcode verification within the last 30 seconds And if either check fails the user is prompted to complete the required check and the reveal is blocked until successful And upon success only the requested field is revealed for a maximum of 20 seconds And a local audit stub is created without storing the field value, capturing userId, deviceId, patientId, fieldId, timestamp, and reveal duration
Ephemeral Encrypted Cache TTL and Wipe-on-Exit
Given a sensitive field is revealed while offline Then only the minimal field value and fieldId metadata are cached encrypted with AES-256-GCM using a key bound to the OS secure keystore/Keychain And the cache entry has a TTL of no more than 5 minutes And the cache is wiped upon app backgrounding, OS lock, logout, force-close, or TTL expiry whichever occurs first And cached data is excluded from device backups and inter-process access and is scoped per user and tenant And after wipe any subsequent reveal requires re-auth and re-fetch when connectivity is available And in-memory buffers holding decrypted values are zeroized immediately after the reveal window ends
Policy-Blocked Offline Reveal Messaging
Given a field has policy flag offlineRevealAllowed=false And the device is offline When the user attempts to reveal the field Then no sensitive value is displayed And the user sees a non-sensitive, localized message stating the reveal is blocked by policy and suggesting next steps (try when online or request break-glass if available) And the message meets WCAG AA contrast and is screen-reader accessible And a one-tap retry becomes enabled automatically when connectivity is restored And the blocked attempt is recorded in the local audit queue without the field value
Audit Log Queueing, Justification Capture, Secure Sync
Given any offline reveal attempt (allowed or blocked) When the user proceeds (or is blocked) Then the app captures an audit payload with userId, deviceId, patientId, fieldId, eventType (Reveal/Blocked/BreakGlass), timestamp (UTC), duration (if revealed), reason code, and free-text justification if provided (required for break-glass, min 20 characters) And the payload is encrypted at rest and assigned a client-generated UUID and content hash And the audit queue persists across app restarts and OS reboots And upon connectivity restoration the queue syncs within 30 seconds using exponential backoff (1s, 2s, 4s, … up to 5 minutes) until server ACK is received And duplicate events are de-duplicated by UUID while preserving chronological order And if the server rejects due to policy conflict the event is marked Denied locally and retained immutably And any unsynced events older than 24 hours trigger an in-app alert to the user to connect for compliance sync
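The backoff schedule above (1 s, 2 s, 4 s, ... capped at 5 minutes) is a pure function of the attempt number, sketched here with an illustrative name:

```typescript
// attempt 0 -> 1 s, attempt 1 -> 2 s, attempt 2 -> 4 s, ... capped at 5 minutes.
function backoffMs(attempt: number): number {
  const capMs = 5 * 60_000;
  return Math.min(capMs, 1000 * 2 ** attempt);
}
```

The retry loop would keep calling this until the server ACKs the upload, with the client-generated UUID making retries safe to replay.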
Admin-Enabled Break-Glass Override
Given a field is blocked for offline reveal by policy And the tenant has breakGlassEnabled=true for the user's role When the user selects Request Break-Glass Then the app requires step-up authentication (successful biometric plus app PIN within the last 60 seconds) And requires a justification free-text of at least 20 characters And displays a high-visibility warning indicating the access will be audited And upon success the field is revealed for at most 60 seconds, cannot be copied, and is screenshot-protected where the OS allows And a high-severity audit event is queued for priority sync and admin notification upon connectivity And per-user rate limits apply: maximum 3 break-glass reveals per 24 hours (configurable 1–10); exceeding limits blocks further requests and logs the attempt
Connectivity Transitions and Cache Revalidation
Given previously cached sensitive fields exist from offline reveals When network connectivity is restored or changes state (including captive portal conditions) Then the app treats captive portals as offline until portal authentication succeeds And before any subsequent reveal the app revalidates server policy and ETag/version for the field And if the field value or policy has changed or offlineRevealAllowed=false the cache entry is invalidated and the user must re-fetch online And reconnecting does not extend or reset the local cache TTL And reveals performed during state transitions still honor reveal time limits and auditing without data loss
Rate Limiting, Lockouts, and Environment Checks
Given repeated failed step-up authentications for reveal or break-glass When failures reach 5 within 10 minutes Then the app enforces a 5-minute cooldown before another attempt And if the app has been idle for more than 2 minutes a fresh app re-auth is required before the next reveal And if the device is detected as rooted/jailbroken or an MDM policy disallows offline reveals all offline reveal attempts are blocked with compliant messaging and are audited locally And no sensitive content or values are written to system logs or crash reports
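The lockout rule above (5 failed step-up attempts within 10 minutes triggers a 5-minute cooldown) reduces to a pure function over failure timestamps. The function name and return convention are illustrative assumptions.

```typescript
// Returns the epoch-ms time until which further attempts are blocked,
// or null when the user is below the failure threshold.
function cooldownUntil(failureTimes: number[], now: number): number | null {
  const windowMs = 10 * 60_000;  // look back 10 minutes
  const cooldownMs = 5 * 60_000; // then lock out for 5 minutes
  const recent = failureTimes.filter((t) => t > now - windowMs && t <= now);
  if (recent.length < 5) return null;
  return Math.max(...recent) + cooldownMs; // measured from the latest failure
}
```

Counting from the latest failure means repeated attempts during the cooldown do not silently shorten it.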

Role Blueprints

Pre‑built, scenario‑based permission sets (After‑Hours Triage, RN Review, Intake Start‑of‑Care) that align with payer and policy rules. Admins assign or schedule blueprints per shift, and users see only the clients and modules relevant to their role and timeframe. Cuts setup time and reduces misconfigurations that lead to audit risk.

Requirements

Blueprint Catalog & Versioning
"As an admin, I want to select and safely customize prebuilt, policy-aligned role blueprints so that I can deploy correct permissions quickly without starting from scratch."
Description

Provides a catalog of pre‑built, payer- and policy-aligned role blueprints (e.g., After‑Hours Triage, RN Review, Intake Start‑of‑Care) with versioning, metadata (payer, state, effective dates), and dependency mapping to CarePulse modules. Supports cloning and safe customization, diff/compare between versions, deprecation notices, and compatibility validation. Delivers seeded defaults per region and payer, and exposes CRUD APIs to manage blueprint lifecycle. Ensures multi-tenant isolation and secure distribution across agency branches.

Acceptance Criteria
Catalog Filtering & Metadata Visibility
Given I am an Admin with Catalog access and my tenant region is configured, When I open the Blueprint Catalog and filter Payer=Aetna, State=TX, Effective Date=2025-10-01, Then only blueprints whose metadata includes payer=Aetna AND state includes TX AND the effective date range includes 2025-10-01 are returned, And each result displays name, version, payer, state(s), effective start/end dates, and status. Given seeded defaults exist for my region and payer, When I clear all filters, Then seeded blueprints appear in the first page with a "Seeded" badge and the initial response time is ≤500 ms for up to 50 results. Given a blueprint is deprecated effective 2025-09-01, When today's date is 2025-09-05 and the filter is set to Active, Then the deprecated blueprint is excluded from results and shows a "Deprecated" badge when the Status=All filter is used.
Blueprint Version Lifecycle Management
Given I create a new blueprint from scratch, When I save it, Then it is versioned as v0.1 in Draft state with audit fields (createdBy, createdAt) recorded and immutable. Given a Draft v0.1 exists, When I publish it, Then it becomes v1.0 in Published state and a publish log entry with publisher, timestamp, and changelog note is stored. Given Published v1.0 exists, When I make a minor change and save as new version, Then v1.1 is created as Published (or Scheduled if future-dated) while v1.0 remains immutable and available for rollback. Given a Published version has an end date set to 2025-09-01, When the date is 2025-09-05, Then its status auto-transitions to Deprecated and consumers receive a deprecation warning in UI and API responses.
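The version transitions above (Draft v0.1 publishes as v1.0; a minor change to a Published version yields v1.1) can be sketched as small pure helpers. The `BlueprintVersion` shape and function names are assumptions for illustration, not the catalog's real API.

```typescript
interface BlueprintVersion {
  major: number;
  minor: number;
  state: "Draft" | "Published" | "Deprecated";
}

// A draft in the 0.x series publishes as v1.0; published versions are immutable,
// so both helpers return new objects rather than mutating their input.
function publishDraft(v: BlueprintVersion): BlueprintVersion {
  const major = v.major === 0 ? 1 : v.major;
  const minor = v.major === 0 ? 0 : v.minor;
  return { major, minor, state: "Published" };
}

// A minor change saved against a Published version creates the next minor version.
function minorRevision(v: BlueprintVersion): BlueprintVersion {
  return { major: v.major, minor: v.minor + 1, state: "Published" };
}
```

Returning fresh objects mirrors the immutability requirement: v1.0 stays available for rollback after v1.1 is created.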
Clone and Safe Customization with Dependency Validation
- Given a Published blueprint v1.0 exists with module dependency constraints, When I clone it into my tenant, Then a new Draft blueprint is created with a new ID, metadata copied, and a reference to source version, while the original remains unchanged.
- Given compliance-locked permissions exist in the source blueprint, When I attempt to modify a locked permission in the cloned Draft, Then the system blocks the change with the message "Permission is compliance-locked" and no changes are saved.
- Given module dependency constraints require CarePulse Visits>=3.2.0, When I attempt to publish the Draft while my tenant has Visits=3.1.9, Then publishing is blocked and the validation report lists the failing dependency with required versions and remediation guidance.
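The dependency validation above (Visits>=3.2.0 vs. an installed 3.1.9) reduces to a semantic-version comparison. A minimal sketch, assuming plain `major.minor.patch` strings with no pre-release tags — the function names are hypothetical:

```python
def parse_semver(v):
    """'3.2.0' -> (3, 2, 0) for tuple comparison (pre-release tags not handled)."""
    return tuple(int(p) for p in v.split("."))

def validate_dependencies(required, installed):
    """List every failing dependency as (module, required, installed)."""
    failures = []
    for module, min_version in required.items():
        have = installed.get(module, "0.0.0")
        if parse_semver(have) < parse_semver(min_version):
            failures.append((module, f">={min_version}", have))
    return failures

failures = validate_dependencies({"Visits": "3.2.0"}, {"Visits": "3.1.9"})
# Publishing would be blocked, with the report listing the failing module.
```

The returned tuples map directly onto the "failing dependency with required versions" entries the validation report must show.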
Diff and Compare Between Versions
- Given two versions v1.0 and v1.1 of the same blueprint exist, When I open the Compare view, Then a side-by-side diff highlights added/removed permissions, scope changes, module dependency changes, and metadata changes, and a summary shows total adds/removes/changes.
- Given I export the diff, When I select JSON format, Then a file downloads that conforms to the documented diff schema with semantic change types and includes version identifiers and timestamps.
- Given I lack access to v1.1, When I attempt to compare v1.0 to v1.1, Then access is denied and no details of v1.1 (names, permissions, metadata) are revealed.
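At its core, the permission portion of the Compare view is a set difference with a change summary. An illustrative sketch — the permission strings and output shape are hypothetical, not the documented diff schema:

```python
def diff_versions(old_perms, new_perms):
    """Semantic diff of two versions' permission sets plus a change summary."""
    added = sorted(new_perms - old_perms)
    removed = sorted(old_perms - new_perms)
    return {"added": added, "removed": removed,
            "summary": {"adds": len(added), "removes": len(removed)}}

v1_0 = {"visits.read", "notes.write", "reports.read"}
v1_1 = {"visits.read", "notes.write", "notes.sign"}
d = diff_versions(v1_0, v1_1)
# d["added"] == ["notes.sign"], d["removed"] == ["reports.read"]
```

A real export would also carry version identifiers, timestamps, and change types for scopes and dependencies, per the criteria above.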
CRUD APIs with Multi-Tenant Isolation
- Given I am authenticated as Tenant A Admin, When I call GET /api/v1/blueprints, Then only Tenant A-owned blueprints and globally seeded defaults are returned and no Tenant B data is present.
- Given I am authenticated as Tenant A Admin, When I POST /api/v1/blueprints with a valid payload, Then the API returns 201 Created with the new blueprint ID and version v0.1 Draft and the audit trail records my user ID and timestamp.
- Given I am authenticated as Tenant B Admin, When I GET /api/v1/blueprints/{tenantA_id}, Then I receive 404 Not Found and no indication of existence is leaked.
- Given rate limits are 120 requests/min per tenant, When I exceed the limit on any blueprint endpoint, Then the API returns 429 Too Many Requests with a Retry-After header.
Secure Distribution to Branches
- Given my agency has Branch X, Branch Y, and Branch Z with branch-level RBAC, When I distribute Published blueprint v1.0 to Branch X and Branch Y, Then users in Branch X and Y can view/assign it and users in Branch Z cannot see it.
- Given auto-update is enabled for distributed blueprints, When the upstream blueprint updates from v1.0 to v1.1, Then Branch X and Y receive v1.1 within 15 minutes and the assignment history records which version was active at each time.
- Given local overrides are disabled by policy for branches, When a Branch Admin attempts to edit a distributed blueprint, Then the system prevents edits and prompts to request a tenant-level clone instead.
Shift-based Assignment & Scheduling
"As an admin, I want to assign blueprints to users by shift with recurring schedules so that access automatically matches who is on duty and when."
Description

Allows admins to assign and schedule role blueprints to users or groups by shift with time‑zone awareness, effective start/end times, recurrence (e.g., weekly patterns), and exceptions. Resolves conflicts via deterministic precedence rules and provides preview of resulting access. Integrates with scheduling and routing so assignments follow live rosters and handoffs. Propagates changes to mobile within 60 seconds and auto-revokes access at shift end. Includes notifications and APIs for bulk assignment.

Acceptance Criteria
Assign Blueprint by Shift with Time‑Zone Awareness
Given an admin selects a user or group, a role blueprint, a local time zone, and a shift start/end (which may cross midnight) When the admin saves the assignment Then the assignment is stored with UTC-normalized boundaries and the original local time zone, and the UI/API echo back both local and UTC times And the assignment becomes active strictly by local wall-clock time regardless of the admin’s or user’s device time zone And shifts spanning midnight are treated as a single continuous access window And on daylight saving transitions, access starts and ends at the specified local times (duration may vary), with no overlap or gap beyond the wall-clock range And validation prevents saving if required fields are missing or if the time window is zero-length
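The UTC-normalization and DST behavior above can be demonstrated with the standard library's `zoneinfo`. This is a sketch under simplifying assumptions (no zero-length validation, a single time zone string) — the function name is hypothetical:

```python
from datetime import date, datetime, time, timedelta
from zoneinfo import ZoneInfo

def shift_window_utc(local_date, start, end, tz_name):
    """Normalize a local shift (possibly crossing midnight) to UTC boundaries.
    Wall-clock times are honored, so elapsed duration varies across DST changes."""
    tz = ZoneInfo(tz_name)
    start_local = datetime.combine(local_date, start, tzinfo=tz)
    # An end time at or before the start time means the shift crosses midnight.
    end_date = local_date if end > start else local_date + timedelta(days=1)
    end_local = datetime.combine(end_date, end, tzinfo=tz)
    return start_local.astimezone(ZoneInfo("UTC")), end_local.astimezone(ZoneInfo("UTC"))

# A 19:00-07:00 shift in Chicago over the 2025 fall-back night (Nov 2):
# the wall-clock window is 12 hours, but the elapsed duration is 13.
s, e = shift_window_utc(date(2025, 11, 1), time(19, 0), time(7, 0), "America/Chicago")
duration_hours = (e - s).total_seconds() / 3600
```

Storing both the UTC boundaries and the original zone name, as the criterion requires, lets the server echo back local wall-clock times while all comparisons run in UTC.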
Weekly Recurrence with Date Exceptions
Given an admin creates a weekly recurrence (e.g., Mon–Fri 19:00–07:00) with an effective start and end date in a specified time zone And the admin adds date-specific exceptions (skip or override times/blueprint) for certain calendar dates When the schedule is saved Then occurrences are generated only within the effective date range and anchored to the local time zone And exception dates suppress or replace the generated occurrence for those dates And the API/preview lists the concrete occurrences for the next 8 weeks including applied exceptions And validation blocks contradictory exceptions (e.g., overlapping overrides for the same date)
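Occurrence generation with skip-type exceptions is straightforward to sketch. The helper below is illustrative only (override-type exceptions and time zones are omitted); the names are hypothetical:

```python
from datetime import date, timedelta

def occurrences(start, end, weekdays, skip=frozenset()):
    """Concrete occurrence dates for a weekly pattern within [start, end],
    suppressing any date listed in `skip` (a date-specific exception)."""
    out, d = [], start
    while d <= end:
        if d.weekday() in weekdays and d not in skip:
            out.append(d)
        d += timedelta(days=1)
    return out

MON_FRI = {0, 1, 2, 3, 4}
# One week of a Mon-Fri pattern with Wednesday 2025-10-08 skipped:
got = occurrences(date(2025, 10, 6), date(2025, 10, 12), MON_FRI,
                  skip={date(2025, 10, 8)})
```

A preview for "the next 8 weeks" would simply call this with an eight-week end date and render the resulting list alongside the applied exceptions.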
Deterministic Conflict Resolution Precedence
Rule 1: Explicit user assignments take precedence over group assignments for overlapping windows.
Rule 2: For two assignments targeting the same principal type, the assignment with the narrower time window overrides a broader overlapping window for the overlapping interval.
Rule 3: If still tied, the assignment with the most recent updatedAt timestamp takes precedence.
Rule 4: Precedence is resolved per minute; results are contiguous segments with a single winning blueprint per segment.
Given two or more overlapping assignments, When effective access is computed for any timestamp within the overlaps, Then the selected blueprint follows Rules 1–4 and the decision is emitted in the audit log with winning assignment id and rule applied.
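Rules 1–3 map naturally onto a lexicographic sort key: principal kind, then window width, then recency. A minimal sketch of the point-in-time resolution (Rule 4's per-minute segmentation is omitted); the `Assignment` shape and `winner_at` name are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Assignment:
    blueprint: str
    principal: str            # "user" or "group"
    start: datetime
    end: datetime
    updated_at: datetime

def winner_at(assignments, t):
    """Single winning assignment at timestamp t: user beats group (Rule 1),
    narrower window beats broader (Rule 2), newest update breaks ties (Rule 3)."""
    live = [a for a in assignments if a.start <= t < a.end]
    if not live:
        return None
    return min(live, key=lambda a: (
        0 if a.principal == "user" else 1,   # Rule 1
        a.end - a.start,                     # Rule 2
        -a.updated_at.timestamp(),           # Rule 3
    ))

hour = lambda h: datetime(2025, 10, 6, h)
broad = Assignment("Group-Default", "group", hour(8), hour(20), updated_at=hour(1))
narrow = Assignment("User-Triage", "user", hour(18), hour(20), updated_at=hour(2))
# At 19:00 the explicit user assignment wins; at 10:00 only the group one is live.
```

Because the key is a total order over the live set, the result is deterministic, which is what makes the audit-log "winning assignment id and rule applied" entry reproducible.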
Live Roster Integration and Shift Handoffs
Given a caregiver’s shift is reassigned to another caregiver in the scheduling/roster system effective at 15:30 local time When the integration event is received Then access for the original caregiver is reduced to end at 15:30 and access for the new caregiver begins at 15:30 for the relevant clients/modules And only clients on the caregiver’s current live route are visible after the change And these changes are propagated to APIs and mobile within 60 seconds of the roster event And an audit record links the access changes to the roster event id
Mobile Propagation within 60s and Auto‑Revocation at Shift End
Given an assignment is created, updated, or ends When the event occurs Then all signed-in mobile apps for affected users receive an access update within 60 seconds (p95) and 120 seconds (p99) And devices offline during the event apply the new access on next sync before showing protected data And access is revoked within 60 seconds (p95) after the scheduled shift end for online devices, and immediately upon next sync for offline devices And revoked access removes client/module visibility and prevents API calls with stale tokens
Access Preview Before Save
Given an admin configures one or more assignments (including recurrences and exceptions) but has not saved When the admin opens Preview and selects a test timestamp or drags a time slider across a date range Then the preview shows the exact clients/modules and blueprint that would be in effect for the selected principal(s) at that moment And any conflicts are annotated with the precedence rule that will apply And after saving, the computed effective access for the same timestamps matches the preview results
Bulk Assignment API and User Notifications
Given an admin submits a bulk assignment request via API with up to 5,000 rows and a client-provided idempotency key When the request is processed Then successful rows create/update assignments, invalid rows are rejected with row-level errors, and the response reports counts for created, updated, skipped, and failed And retrying with the same idempotency key within 24 hours yields no duplicate assignments And affected users receive an in-app notification within 2 minutes of creation/update and 10 minutes before shift start (local time) if the shift is at least 30 minutes in the future And admins receive a completion summary with error details for failed rows
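The idempotency-key behavior above (a retry within 24 hours replays the original result rather than creating duplicates) can be sketched with a keyed response cache. The class and row shape are hypothetical simplifications of a real bulk endpoint:

```python
import time

class BulkAssignmentAPI:
    """Idempotent bulk processing: retrying with the same key inside the TTL
    replays the stored response instead of creating duplicate assignments."""
    def __init__(self, ttl_seconds=24 * 3600):
        self._seen = {}                  # idempotency_key -> (expires_at, response)
        self._ttl = ttl_seconds

    def submit(self, idempotency_key, rows, now=None):
        now = time.time() if now is None else now
        cached = self._seen.get(idempotency_key)
        if cached and cached[0] > now:
            return cached[1]             # replay: no duplicates created
        ok = [r for r in rows if r.get("valid", True)]
        bad = [r for r in rows if not r.get("valid", True)]
        response = {"created": len(ok), "failed": len(bad)}
        self._seen[idempotency_key] = (now + self._ttl, response)
        return response

api = BulkAssignmentAPI()
first = api.submit("key-1", [{"valid": True}, {"valid": False}], now=0)
retry = api.submit("key-1", [{"valid": True}, {"valid": False}], now=100)
# first == retry == {"created": 1, "failed": 1}
```

A production version would persist the cache, key it per tenant, and record row-level errors; the point here is only that the key, not the payload, decides whether work is re-done.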
Contextual Access Filtering
"As a caregiver, I want the app to show only the clients and tools relevant to my active role and timeframe so that I can work efficiently and stay compliant."
Description

Enforces contextual filtering so users only see clients, visits, and modules permitted by their active blueprint and timeframe. Applies least‑privilege read/write scopes across scheduling, documentation, voice note capture, IoT data, and reports. Supports multiple concurrent assignments, on‑call scenarios, and offline mode with secure local caching and time‑boxed tokens. Handles denials with clear messaging and logs decisions for audit. Guarantees near‑real‑time updates without degrading app performance.

Acceptance Criteria
Active Blueprint Filters Visibility by Timeframe
- Given a user has the After-Hours Triage blueprint active from 18:00–06:00 local, when the user opens Clients at 21:00, then only clients assigned to that blueprint and timeframe are listed; all others are hidden.
- Given the same user at 07:00 (outside the timeframe), when opening Clients, then no After-Hours clients are shown and a non-blocking banner indicates no active access for the current time.
- Given the user opens Visit Schedule at 21:00, when filtering is applied, then only visits permitted by the active blueprint's programs/payers within 18:00–06:00 appear.
- Given the admin changes the user's active blueprint at time T, when the device is online, then the app reflects the change within 10 seconds (p95) and 30 seconds (p99).
- Given filtered lists contain items, when loading initial results, then render occurs within 500 ms (p95) on a mid-tier device; subsequent filter toggles render within 300 ms (p95).
Least-Privilege Scopes Across Modules
- Rule: Effective read scope = union of reads from all active blueprints constrained by each blueprint's client/visit/timeframe filters; effective write scope = intersection across active blueprints for the resource; on conflict, deny by default.
- Given a user with read-only Scheduling and writable Documentation, when attempting to create a new visit, then the action is denied with a rationale message and no server mutation is sent.
- Given the same user edits a visit note for an in-scope client within the timeframe, then save succeeds and the audit log records the write scope and source blueprint ID.
- Given the user attempts to view IoT vitals for an out-of-scope client, then no data is fetched and a permission message is shown; no network call containing PHI occurs.
- Given the user attempts to capture a voice note for an out-of-scope client, then recording is blocked before initialization and no media is written to storage.
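The union-of-reads / intersection-of-writes rule is a few lines of set algebra. An illustrative sketch — the module names and blueprint dict shape are hypothetical, not CarePulse's data model:

```python
def effective_scopes(blueprints):
    """Read scope = union of reads across active blueprints;
    write scope = intersection of writes (deny by default on conflict)."""
    if not blueprints:
        return set(), set()
    reads = set().union(*(bp["read"] for bp in blueprints))
    writes = set.intersection(*(set(bp["write"]) for bp in blueprints))
    return reads, writes

medicare = {"read": {"scheduling", "documentation"}, "write": {"documentation"}}
medicaid = {"read": {"documentation", "iot"}, "write": {"documentation", "scheduling"}}
reads, writes = effective_scopes([medicare, medicaid])
# reads spans all three modules; writes collapses to {"documentation"}.
```

Intersection makes write access strictly shrink as blueprints stack, which is exactly the least-privilege conflict behavior the concurrent-assignments criteria below also depend on.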
Multiple Concurrent Assignments Handling
- Given two active blueprints A (Medicare) and B (Medicaid) overlapping 20:00–22:00, when opening Reports at 21:00, then only reports for clients in A or B are listed; write actions are enabled only where both A and B permit writes for the module.
- Given A allows editing documentation and B denies it for client X, when editing client X's note, then the action is denied with a least-privilege conflict message and both blueprint IDs are logged.
- Given A ends at 21:30, when time crosses 21:30, then access recalculates within 10 seconds; items solely from A are removed from write scope and from read scope if no other active blueprint includes them.
- Given overlapping assignments yield zero effective permissions for a module, when the user navigates to that module, then the module tile is hidden; if deep-linked, a denial page is shown.
On-Call Escalation and Scheduled Blueprint Switching
- Given an admin schedules an On-Call blueprint for User U from 18:00–06:00, when device time reaches 18:00±5s, then the on-call blueprint activates locally without requiring app restart.
- Given activation, when notifications arrive or lists refresh, then only on-call eligible clients and modules are visible; previously visible non-overlapping content is hidden within 10 seconds.
- Given the admin unschedules the blueprint mid-shift at time T, when the device is online, then access is revoked within 10 seconds (p95) and cached items outside remaining scopes are purged within 60 seconds.
- Given the user escalates a case to RN Review, when escalation completes, then a temporary RN Review scope is granted for the specific client and visit for 30 minutes and auto-expires thereafter.
Offline Mode with Secure Cache and Time-Boxed Tokens
- Rule: Offline access is limited to items within scope at last successful sync; each scope is bound to a signed token with a max lifetime of 8 hours and per-item expiry timestamps.
- Given the device goes offline, when the user opens Documentation for an in-scope client, then content loads from encrypted local storage; writes are queued with immutable timestamps and scope token IDs.
- Given the user attempts to access an out-of-scope client while offline, then a Not Permitted Offline message is shown and no cached PHI is revealed.
- Given a scope token expires at time E while offline, when the user attempts access after E, then access is blocked and cached content is logically shredded (crypto-keys removed) within 60 seconds.
- Given the device reconnects, when queued writes sync, then the server validates each write against the scope token and original timeframe; invalid items are rejected with clear error and remain unreadable locally; audit logs include an offline flag.
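The "logical shred" idea — expiry deletes the encryption key rather than the ciphertext — can be sketched in a few lines. This is a toy illustration (no signing, encryption, or persistence); the class and method names are hypothetical:

```python
import secrets

class OfflineCache:
    """Time-boxed offline access: each scope token maps to an encryption key
    with a max lifetime; expiry deletes the key ("logical shred"), leaving
    any cached ciphertext unreadable."""
    def __init__(self, max_lifetime_s=8 * 3600):
        self._keys = {}                  # token_id -> (key_bytes, expires_at)
        self._max = max_lifetime_s

    def issue_token(self, now):
        token_id = secrets.token_hex(8)
        self._keys[token_id] = (secrets.token_bytes(32), now + self._max)
        return token_id

    def key_for(self, token_id, now):
        entry = self._keys.get(token_id)
        if entry is None:
            return None
        key, expires_at = entry
        if now >= expires_at:
            del self._keys[token_id]     # crypto-shred: the key is gone for good
            return None
        return key

cache = OfflineCache()
tok = cache.issue_token(now=0)
live = cache.key_for(tok, now=100)        # inside the 8-hour window
gone = cache.key_for(tok, now=9 * 3600)   # expired -> key shredded
```

Deleting the key rather than scrubbing data is what makes the 60-second shred deadline achievable on a mobile device, since only a few bytes need to be destroyed.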
Access Denial Messaging and Audit Logging
- Rule: All denied actions must produce an in-app message with action, resource, reason, and how to request access; messages must not expose PHI about out-of-scope resources.
- Given any deny decision, when it occurs, then an immutable audit record is created containing user ID, active blueprints, decision, resource identifiers (hashed where PHI), UTC timestamp, device ID, online/offline status, and correlation ID.
- Given audit volume of 10k decisions/hour, when exporting audit logs for a date range, then results are available within 60 seconds and are cryptographically signed for integrity.
- Given a deep link to an out-of-scope resource is opened, then the app displays a denial screen with correlation ID and a one-tap request access flow that logs the request event.
Real-Time Update Propagation and Performance Guarantees
- Rule: Changes to blueprint assignments propagate to connected clients within 10 seconds (p95) and 30 seconds (p99); API endpoints return within 300 ms (p95); client list renders within 500 ms (p95) on mid-tier devices; background CPU < 10% avg; incremental memory from filtering < 100 MB.
- Given 5,000 clients and 20 concurrent assignments per user, when applying contextual filters, then memory usage remains under 250 MB and main-thread long tasks are < 50 ms (p95).
- Given 1,000 simultaneous users, when admins perform bulk assignment changes, then ≤1% update events miss the p95 target and missed events reconcile on the next heartbeat (≤60 seconds).
- Given network latency of 200 ms and 2% packet loss, when fetching filtered data, then retries and caching deliver p95 data freshness under 15 seconds without duplicate records.
Policy Rule Engine Integration
"As a compliance manager, I want policy rules encoded into blueprints so that permissions stay aligned with payer and state requirements as they change."
Description

Encodes payer and policy constraints as rules mapped to each blueprint, including documentation requirements, visit types, supervision limits, and geographic restrictions. Evaluates rules at assignment time and at runtime where needed, honoring effective dates and jurisdiction. Provides a test harness, rule packs by payer/state, and safe updates with staged rollout. Surfaces validation errors to admins and flags impacted blueprints when policies change.

Acceptance Criteria
Assignment-Time Evaluation for Blueprint Scheduling
Given an admin schedules Blueprint "After‑Hours Triage" for caregiver C covering clients in TX with payer "Medicare" and the rule pack "TX Medicare v2025.10" is effective on the shift date When the admin saves the assignment for 19:00–07:00 Then the engine evaluates all mapped rules (visit types allowed, documentation prerequisites, supervision limits, geographic constraints) for jurisdiction=TX and effective dates And blocks the save if any rule would be violated, showing an aggregated list of violated rule IDs, titles, and affected clients And allows save only if all violations are resolved or the rule is marked AllowOverride=true and an override reason is entered And records an audit log entry with admin ID, blueprint ID, rule IDs, timestamp, and outcome (allowed/blocked/overridden)
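The aggregate-then-decide flow above — collect every violated rule, then block unless each violation is overridable and carries a recorded override — can be sketched as a small evaluator. The `Rule` shape and rule IDs below are hypothetical illustrations, not actual payer rules:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    rule_id: str
    title: str
    check: Callable[[dict], bool]   # True = compliant
    allow_override: bool = False

def evaluate(assignment, rules, overrides=frozenset()):
    """Aggregate every violated rule; block the save unless each violation is
    overridable and an override reason was recorded for it."""
    violated = [r for r in rules if not r.check(assignment)]
    blocked = any(not r.allow_override or r.rule_id not in overrides for r in violated)
    outcome = "blocked" if blocked else ("overridden" if violated else "allowed")
    return {"violations": [(r.rule_id, r.title) for r in violated], "outcome": outcome}

rules = [
    Rule("TX-VT-01", "Visit type allowed", lambda a: a["visit_type"] in {"SOC", "Routine"}),
    Rule("TX-GEO-02", "Client in service area", lambda a: a["state"] == "TX",
         allow_override=True),
]
no_override = evaluate({"visit_type": "SOC", "state": "OK"}, rules)
with_override = evaluate({"visit_type": "SOC", "state": "OK"}, rules,
                         overrides={"TX-GEO-02"})
```

Evaluating all rules before deciding (rather than failing fast) is what lets the admin see the full aggregated violation list in one pass, as the criterion requires.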
Runtime Evaluation During Visit Check-In/Out
Given caregiver C with assigned Blueprint "After‑Hours Triage" starts a visit for client X in TX under payer "Medicare"
When C attempts to check in outside the allowed visit window or with a disallowed visit type per rules
Then check-in is blocked with an error referencing the specific rule ID and requirement
And an audit event is stored with GPS, timestamp, and rule ID
When C attempts to submit the visit note missing required fields per rules
Then submission is blocked with a list of missing fields and associated rule IDs
When all required fields are provided and the visit type/time is compliant
Then check-in and submission succeed
Documentation Requirements by Payer and Visit Type
Given rule pack "CA Medicaid v3.2" requires fields "Consent Signature", "Medication Reconciliation", and "Start-of-Care Checklist" for visit type "Start of Care" effective 2025-01-01 in CA
When a user completes a Start of Care visit in CA with service date within the effective range
Then those fields are marked required and real-time validation prevents submission until they are valid
And the validation message includes the rule ID and effective date
When the visit type is "Routine Follow-Up"
Then only the fields mandated for that visit type are required per the rule pack
And a compliance report for the visit shows "0 missing required fields" upon successful submission
Supervision Limits Enforcement
Given rules state "LPN cannot perform Start of Care without RN co-sign within 24 hours" and "Max 4 concurrent LPNs per RN" for jurisdiction=NY
When assigning an LPN to a Start of Care visit in NY
Then the system requires an RN reviewer to be assigned and enforces the RN:LPN ratio
And blocks scheduling if the ratio would be exceeded, showing rule IDs and current counts
When the LPN submits the note
Then submission remains pending until RN co-sign is captured within 24 hours, after which it is marked complete
And a breach alert is created if the 24-hour SLA is exceeded
Geographic Restriction Enforcement via GPS Geofence
Given rules restrict check-in to within 200 meters of the client's verified address with AllowOverride=false
When a caregiver attempts to check in at 350 meters distance
Then check-in is blocked with a geofence violation error referencing the rule ID and measured distance
And the event is logged with GPS sample, accuracy, and distance
When a rule with AllowOverride=true is active and the caregiver provides required reason and evidence
Then a one-time override permits check-in and the override details are audit-logged
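The measured-distance part of the geofence check is a standard great-circle calculation. A minimal sketch using the haversine formula — the coordinates and helper names are hypothetical, and GPS accuracy handling is omitted:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000

def distance_m(a, b):
    """Great-circle (haversine) distance in meters between (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(h))

def check_in_allowed(caregiver, client, radius_m=200):
    d = distance_m(caregiver, client)
    return d <= radius_m, round(d)

client = (30.2672, -97.7431)         # verified client address (illustrative)
nearby = (30.2681, -97.7431)         # roughly 100 m north
far = (30.2704, -97.7431)            # roughly 355 m north
ok_near, dist_near = check_in_allowed(nearby, client)
ok_far, dist_far = check_in_allowed(far, client)
```

Returning the rounded distance alongside the decision supports the criterion that the violation error and the audit log both carry the measured distance.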
Rule Pack Staged Rollout with Test Harness Gate
Given a new rule pack "TX Medicare v2025.10" is uploaded to the Staged environment
When the built-in test harness is executed
Then it runs the payer/state fixtures for assignment-time, runtime, documentation, supervision, and geofence cases and reports pass/fail with counts
And promotion to Production is blocked until all tests pass
When promoted
Then the system requires a changelog, increments the semantic version, allows targeting specific org IDs, and logs the actor and timestamp
When a rollback is initiated
Then the prior version is restored and affected orgs are notified in-app and by email
Policy Change Impact Detection and Admin Notifications
Given a staged update changes allowed visit types for TX Medicare
When the update is saved
Then the system computes and lists impacted blueprints with counts of affected assignments and clients, and flags them in the UI
And sends notifications to org admins with a summary and link to a revalidation workflow
When an admin attempts to schedule using an impacted blueprint before revalidation
Then a blocking warning is shown requiring revalidation or explicit acknowledgement with reason, and the action is audit-logged
Audit Logging & One-Click Report
"As an auditor/compliance lead, I want one-click reports of who had access to what and why so that I can satisfy audits and investigate incidents quickly."
Description

Captures immutable logs of blueprint creation, edits, approvals, assignments, activations, revocations, and access decisions with user, timestamp, device, and policy version. Generates one‑click, audit‑ready reports that align to payer audit checklists and include scope of access during specific time windows. Supports retention policies, export (PDF/CSV), and API retrieval, and integrates with CarePulse reporting dashboards and alerts.

Acceptance Criteria
Immutable Blueprint Lifecycle Event Logging
Given an admin performs a blueprint lifecycle action (create, edit, approve, assign, activate, revoke) When the action is completed Then an audit log entry is created capturing event_type, blueprint_id, actor_user_id, target_user_id (if applicable), timestamp_utc (ISO 8601), device_id, device_type, ip_address, policy_version, request_id, and outcome And Then the entry is immutable: any attempt to modify or delete via UI or API is rejected with 405 and an additional audit event records the prohibited change attempt And Then the entry is retrievable via audit UI and API within 5 seconds of the source action
Access Decision Logging with Rule Traceability
Given a user governed by a scheduled Role Blueprint requests access to a client record or module When the access engine evaluates the request Then an audit entry is recorded for both allow and deny decisions capturing user_id, blueprint_id, blueprint_version, rule_id, resource_type, resource_id or scope_expression, decision (allow/deny), effective_window_start, effective_window_end, evaluation_reason, timestamp_utc, device_id, and ip_address And Then querying by user_id and a 24-hour window returns only matching entries and completes within 2 seconds for up to 10,000 results And Then denied decisions include the specific rule_id or policy_reason that produced the deny outcome
One-Click Audit Report Generation (PDF and CSV)
Given an admin selects a payer audit checklist template and a date/time range When they click Generate Audit Report Then the system produces both PDF and CSV outputs within 60 seconds for up to 100,000 audit events And Then the report includes sections for blueprint lifecycle events, approvals (with approver identity), access decisions, and scope of access by user and time window And Then each section is tagged with the payer checklist mapping identifiers in the report metadata And Then timestamps display in the agency time zone with UTC offset while original UTC values are preserved in metadata And Then the downloadable links are role-restricted and expire 24 hours after generation
Retention Policy Enforcement and Purge Auditing
Given a tenant retention policy is configured (e.g., 7 years active, 1 year archive) When an audit log exceeds its retention window Then the system purges or archives the record per policy and excludes it from API and report outputs And Then the purge is recorded as a non-identifying tombstone event capturing purge_id, policy_id, timestamp_utc, and count_purged; original content is irrecoverable via UI and API And Then any change to retention configuration is itself audited with admin user, old_value, new_value, and timestamp_utc
Audit Log API Retrieval and Filtering
Given an authenticated admin with audit.read scope calls GET /api/audit/logs with valid filters (date_from, date_to, user_id, event_type, blueprint_id, decision) When the request is processed Then the API returns 200 with a paginated, descending timestamp-ordered list containing the required fields And Then pagination is cursor-based with a next token, supports page_size up to 1000, and yields stable results for a provided snapshot_id And Then invalid filters return 400 with field-specific errors; unauthorized access returns 403; and rate limits return 429 with a Retry-After header And Then GET /api/audit/logs.csv with the same filters returns records identical in count and content to the JSON response
Time-Window Scope Accuracy in Reports
Given a user is assigned a Role Blueprint scheduled for a shift with defined start and end When an audit report is generated for that time window Then only clients and modules within the blueprint’s effective scope during that window are listed for that user And Then overlapping shifts, mid-shift reassignments, activations, and revocations are reflected with precise start/end timestamps per scope segment And Then daylight saving transitions are handled correctly with no duplicated or missing entries; calculations are performed in UTC and displayed in the agency time zone
Dashboard Integration and Alerting on Audit Signals
Given CarePulse dashboards ingest audit metrics When N (default 5) denied access decisions occur for the same user within 10 minutes Then an alert is generated, displayed on the dashboard, and sent via the configured channel including user_id, timeframe, count, and a deep link to the filtered audit view And Then the dashboard tile Role Blueprint Audit Health shows 7-day counts for lifecycle changes, approvals, denials, and report generations that match API totals for equivalent filters And Then acknowledging an alert records an audit entry capturing actor_user_id, timestamp_utc, and resolution_note
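The N-denials-in-10-minutes trigger above is a classic sliding-window counter. An in-memory sketch — the class name is hypothetical, and a production version would run over the audit event stream rather than direct calls:

```python
from collections import defaultdict, deque

class DenialAlerter:
    """Fire an alert once a user accumulates N denied access decisions
    within a sliding time window (default: 5 denials in 10 minutes)."""
    def __init__(self, threshold=5, window_s=600):
        self.threshold, self.window_s = threshold, window_s
        self._events = defaultdict(deque)

    def record_denial(self, user_id, ts):
        q = self._events[user_id]
        q.append(ts)
        while q and q[0] <= ts - self.window_s:   # evict events outside the window
            q.popleft()
        return len(q) >= self.threshold           # True -> raise the alert

alerter = DenialAlerter()
fired = [alerter.record_denial("u1", t) for t in (0, 60, 120, 180, 240)]
# Only the fifth denial inside the 10-minute window trips the alert.
```

The per-user deque keeps the check O(1) amortized per event, which matters at the 10k-decisions/hour audit volumes cited earlier in this section.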
Misconfiguration Guardrails & Simulator
"As a new agency admin, I want guardrails and a simulator when configuring roles so that I can avoid misconfigurations that cause audit risk."
Description

Adds a guided setup with scenario selection, permission preview, and a simulator that runs a “view as user” session for a defined timeframe to validate access before deployment. Detects misconfigurations (e.g., excessive PHI scope, missing documentation rights) with explainable warnings and suggested fixes. Includes best‑practice presets, inline documentation, and a policy‑lint that blocks risky publishes unless explicitly overridden.

Acceptance Criteria
Guided Scenario Selection & Preset Loading
Given an admin opens the Misconfiguration Guardrails wizard from Role Blueprints When the admin selects one of the scenarios: "After‑Hours Triage", "RN Review", or "Intake Start‑of‑Care" Then the corresponding best‑practice preset loads with default permissions, time‑bound scope, and module access pre‑populated And the inline documentation panel displays rationale and policy references for the selected scenario And the preset loads within 2 seconds on a 3G/low‑bandwidth connection And the admin can modify preset fields prior to preview or simulation
Permission Preview Completeness & Accuracy
Given a blueprint preset is loaded or has been edited When the admin opens Permission Preview Then the preview enumerates: visible clients (count and list), enabled modules (list), PHI fields accessible (list), and restricted actions (list) And differences from the scenario preset are highlighted inline And exporting the preview to PDF and CSV produces files within 5 seconds that exactly match the on‑screen contents And any configuration change updates the preview within 1 second without page reload
Time‑Bound "View as User" Simulator Session
Given the admin selects a target user (or role blueprint) and defines a simulation timeframe (start/end) When the admin starts the simulator Then the system renders a read‑only session showing only clients, routes, notes, and modules available within the defined timeframe And attempts to perform write actions are intercepted with a non‑blocking notice that simulation is read‑only And a simulation activity log records start time, user/role, timeframe, and accessed modules And the simulator auto‑ends after 15 minutes of inactivity or upon explicit exit, returning the admin to the wizard
Detection of Excessive PHI Scope with Explainable Warning
Given a blueprint configuration grants access to all clients or PHI fields beyond the selected scenario’s minimum scope When policy‑lint runs on Save, Preview, or Simulate Then a High‑severity finding "Excessive PHI scope" is displayed with: affected dimensions (clients, PHI fields), counts, and rule references And a Suggested Fix option is provided to constrain scope to assigned clients and shift timeframe And applying the Suggested Fix updates the configuration and removes the finding after re‑lint within 1 second And the change is captured in the audit trail with before/after diffs
Detection of Missing Documentation Rights with Suggested Fix
Given a scenario that requires completing visit documentation (e.g., RN Review, Intake Start‑of‑Care) When required permissions (Create/Edit/Sign Visit Notes and Upload Attachments) are absent Then a Medium‑severity finding "Missing documentation rights" lists each missing permission and impacted workflows And a one‑click "Add Required Rights" action grants the minimum necessary permissions And re‑running lint after applying the fix shows no remaining findings of this type
Policy‑Lint Blocking Risky Publishes with Explicit Override
Given unresolved High‑severity findings exist for the current blueprint When the admin clicks Publish Then the publish action is blocked with a summary of findings and links to details And the admin may choose Override, which requires MFA confirmation and a justification of at least 20 characters And on successful override, an audit entry records user, timestamp, findings overridden, justification text, and MFA method And the system displays a "Published with override" banner and marks the release accordingly
Inline Documentation & Best‑Practice Links Are Contextual
Given the admin focuses a setting, permission item, or warning in the wizard When the admin clicks or hovers the Info control Then a help panel/tooltip appears within 300 ms with plain‑language guidance and links to payer/policy documentation And the content meets WCAG 2.1 AA for contrast and keyboard navigation And external links open in a new tab and are tracked in analytics without blocking the UI
Break-Glass Emergency Access
"As a supervisor, I want a controlled break-glass process for emergencies so that patient care isn’t blocked while maintaining full accountability."
Description

Provides a controlled, time‑limited emergency override (“break‑glass”) that grants elevated access with mandatory justification, optional supervisor approval, and automatic revocation. Sends real‑time alerts to compliance, logs all actions with enriched context, and enforces scope caps (e.g., client count, duration). Supports geofencing, offline issuance with delayed sync, and post‑event review workflows.

Acceptance Criteria
Emergency Override Request with Mandatory Justification
Given a user lacks sufficient permissions under their current Role Blueprint And the organization has Break‑Glass enabled When the user initiates a break‑glass request Then the system requires a justification text (minimum 20 characters, maximum 1000) And the Submit action remains disabled until the justification meets length requirements And the system captures contextual metadata (user_id, role, blueprint_id, timestamp, device_id, IP, location if available) And the user must select a requested scope (clients, modules, duration) from policy-allowed options before submission And the request cannot be submitted without an explicit acknowledgement of policy and audit logging
Supervisor Approval for High‑Risk Scopes
Given organization policy defines high‑risk thresholds for break‑glass (e.g., duration > policy.max_duration_soft, client_count > policy.max_clients_soft, or modules classified as Sensitive) When a user submits a break‑glass request exceeding any high‑risk threshold Then the system routes the request to the on‑call supervisor or approval group via in‑app notification and configured channels And blocks elevation until an Approve or Deny decision is recorded or the request times out per policy And records approver identity, decision, timestamp, and optional approver note in the audit log And if Denied or Timed Out, the requester is notified and the session is not elevated And if Approved, the approved scope (clients, modules, duration) is bound to the session and cannot exceed policy hard caps
Time‑Limited Access Auto‑Revocation
Given a break‑glass session has been approved and started with a defined start_time and duration When the session duration elapses or the user manually ends the session Then elevated permissions are revoked across all active sessions within 5 seconds And the user interface removes any elevated access indicators and restores the prior Role Blueprint permissions And any new elevated-only actions initiated after revocation are blocked with an Expired message And the system logs the revocation event and sends a closure alert to the compliance channel And if the session is force‑revoked by an admin, the same revocation, UI restoration, and logging occur immediately
Scope Caps Enforcement (Clients, Modules, Duration)
Given organization policy defines hard caps for max_duration, max_client_count, and allowed_modules for break‑glass When a user configures a requested scope Then the system validates the request against hard caps before submission And prevents submission if any cap would be exceeded, displaying specific errors (e.g., "Reduce client list to ≤ policy.max_client_count") And at runtime, only the approved clients and modules are visible; all other clients/modules remain hidden And attempts to access modules not in approved scope are blocked and logged And the session duration cannot be extended beyond the approved value without initiating a new request
Alerts, Audit Logging, and Post‑Event Review
Given any break‑glass lifecycle event occurs (request, approval/denial, session start, elevated actions, revocation) When the event is processed Then the system emits a real‑time alert to the designated compliance channel(s) within 10 seconds (configurable) And writes an immutable audit record containing user_id, role, blueprint_id, request_reason, requested_scope, approved_scope, approver_id (if any), timestamps (request, decision, start, revoke), device_id, IP, geo (if available), and list of client_ids/actions touched And after revocation, a review task is created and assigned per policy with a due date within 24 hours (configurable) And when a reviewer records an outcome (Justified, Unjustified, Needs Follow‑up) with notes, the task status and outcome are stored and linked to the event And if the review is not completed by due date, an escalation alert is sent to the next‑level compliance contact
Geofenced Access Enforcement
Given geofence rules are configured (e.g., within radius of client address or agency office) When a break‑glass session is requested or starts Then the system evaluates the device location against the configured geofence with accuracy metadata And if outside the allowed geofence, the system denies elevation or requires supervisor approval according to policy And during an active session, leaving the allowed geofence for more than 2 minutes suspends elevated actions until back in bounds or session ends And all geofence evaluations (pass/fail, coordinates, accuracy, timestamps) are recorded in the audit log
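A minimal geofence evaluation consistent with the criterion above might look like this; the great-circle (haversine) distance check and the accuracy allowance are assumptions about how "within radius" would be computed:

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Great-circle distance between two WGS84 points, in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def evaluate_geofence(device: tuple[float, float], fence: dict, accuracy_m: float) -> dict:
    # device is (lat, lon); fence has lat, lon, and radius_m.
    distance_m = haversine_km(device[0], device[1], fence["lat"], fence["lon"]) * 1000
    # Give the device the benefit of reported GPS accuracy, then record
    # pass/fail plus the metadata the audit criterion asks for.
    inside = distance_m <= fence["radius_m"] + accuracy_m
    return {"inside": inside, "distance_m": round(distance_m, 1), "accuracy_m": accuracy_m}
```

The "outside for more than 2 minutes" suspension would be a timer layered on repeated calls to `evaluate_geofence`.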
Offline Break‑Glass Issuance and Delayed Sync
Given the device is offline when a break‑glass request is needed When the user initiates an offline break‑glass request Then the app enforces offline policy caps (reduced max_duration and max_client_count as configured) and requires justification And upon local approval (self‑approval or approval under the cached policy), the session starts locally and is flagged as Offline Issued And all events and actions are stored locally with encryption until connectivity is restored And the session auto‑revokes locally when the timer expires even if still offline And upon reconnect, the device syncs the full audit trail in order, triggers alerts, and flags the event for expedited review if it would have required supervisor approval online And if sync conflicts occur (e.g., overlapping sessions), the system preserves the offline record, resolves conflicts deterministically, and notifies compliance

Step‑Up MFA

Adaptive multi‑factor prompts that trigger only when risk rises—new device, unusual time, or elevated scope. Supports biometric, push, and hardware key options for quick, secure verification on low‑end phones. Keeps routine actions frictionless while adding extra protection to sensitive operations.

Requirements

Adaptive Risk Scoring
"As a caregiver, I want MFA prompts only when something looks unusual so that my routine check-ins stay fast while suspicious access is still blocked."
Description

Implement a lightweight, real‑time risk engine that evaluates each login and sensitive action using contextual signals (new/unknown device fingerprint, session age, time-of-day anomalies, geo/IP reputation variance, role/scope of requested operation, and recent failed attempts). The engine assigns a risk band that determines whether to suppress, require, or escalate MFA. Designed for mobile-first, low-end devices with minimal CPU/memory impact and privacy-by-design (no storage of raw biometrics or precise location). Exposes a policy API consumable by CarePulse mobile and web clients, and returns decision metadata for downstream logging and analytics. Expected outcome: frictionless access under normal conditions with automatic, consistent prompts when risk increases.

Acceptance Criteria
Normal Login from Known Device — MFA Suppressed
Given a user logs in from a device fingerprint previously trusted for that account and the existing session age is <= 24 hours And the login occurs within the user’s habitual time window based on the last 30 days And the source IP ASN and coarse geo (city/region) match recent successful logins and the IP reputation risk <= 0.20 And there are 0 failed attempts in the past 10 minutes for this user/device And the requested operation scope is non-sensitive (e.g., view schedule, record standard visit note) When the client calls the Risk Policy API with contextual signals Then the API returns action = SUPPRESS_MFA and risk_band = LOW with confidence >= 0.80 And reasons includes ["known_device","habitual_time","stable_geo","clean_ip","no_recent_failures","low_scope"] And end-to-end decision latency (client→API→response) <= 150 ms at P95 And the client proceeds without prompting MFA
New Device Login — MFA Required
Given the user attempts to log in from a new or unknown device fingerprint not seen for this account within the last 90 days And all other signals are within normal ranges (habitual time, stable geo/IP, no recent failures) When the client calls the Risk Policy API Then the API returns action = REQUIRE_MFA and risk_band = MEDIUM with confidence >= 0.70 And reasons includes ["new_device"] And metadata.recommended_factors includes at least one available factor for the device (e.g., push, biometric, hardware key) And the client prompts for MFA accordingly
Elevated-Scope Operation — MFA Escalated
Given an authenticated user initiates a sensitive operation tagged with elevated scope (e.g., export compliance report with PHI, edit org-wide settings, view caregiver PII) When the client calls the Risk Policy API with the operation scope and role Then the API returns action = ESCALATE_MFA and risk_band = HIGH with confidence >= 0.75 And reasons includes ["elevated_scope"] And metadata.required_factor_strength = "phishing_resistant" and metadata.allowed_factors includes ["hardware_key","biometric"] and excludes ["password_only"] And if the device cannot support biometric/hardware key, metadata.fallback_factor = "push_number_match" And decision latency <= 150 ms at P95
Anomalous Time-of-Day — MFA Required
Given a login occurs in a time bucket in which this user recorded zero successful logins over the prior 30 days, or at an hour more than 3 standard deviations from the user’s median login hour When the client calls the Risk Policy API Then the API returns action = REQUIRE_MFA and risk_band = MEDIUM with confidence >= 0.70 And reasons includes ["time_anomaly"] And upon the next successful login within the habitual window, the API returns action = SUPPRESS_MFA and risk_band = LOW (all else equal)
Geo/IP Reputation Variance — MFA Required or Escalated
Given the source IP reputation risk > 0.60 or the ASN differs from the last 5 successful logins or coarse geo indicates a jump > 500 km within 1 hour (impossible travel) When the client calls the Risk Policy API Then if exactly one of the above signals is present, the API returns action = REQUIRE_MFA and risk_band = MEDIUM with reasons listing the contributing signal And if two or more signals are present, the API returns action = ESCALATE_MFA and risk_band = HIGH with reasons listing all contributing signals And only coarse geo derived from IP is evaluated; no precise GPS coordinates are processed or stored
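The one-signal-requires / two-signals-escalate rule above can be expressed directly; the function name and the "NONE" result (meaning geo contributes nothing and other signals decide) are assumptions for illustration:

```python
# Sketch of the geo/IP escalation rule: each signal alone requires MFA;
# two or more together escalate. Only coarse, IP-derived movement is
# evaluated, matching the no-precise-GPS constraint.
def geo_ip_action(ip_risk: float, asn_changed: bool,
                  km_moved: float, hours_elapsed: float) -> tuple[str, list[str]]:
    signals = []
    if ip_risk > 0.60:
        signals.append("ip_reputation")
    if asn_changed:
        signals.append("asn_change")
    # "Impossible travel": > 500 km apparent movement within 1 hour.
    if hours_elapsed <= 1.0 and km_moved > 500:
        signals.append("impossible_travel")
    if len(signals) >= 2:
        return "ESCALATE_MFA", signals
    if len(signals) == 1:
        return "REQUIRE_MFA", signals
    return "NONE", signals  # no geo contribution; other signals decide
```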
Recent Failed Attempts — MFA Required
Given there are >= 3 failed login attempts for this user or device in the last 10 minutes or >= 5 in the last 24 hours When the next login attempt succeeds Then the API returns action = REQUIRE_MFA and risk_band = MEDIUM with confidence >= 0.70 And reasons includes ["recent_failures"] And REQUIRE_MFA persists for 24 hours from the last failure even on a known device (unless a higher action is triggered)
Policy API Contract, Performance, and Privacy Constraints
Given clients call POST /risk/decide with minimal payload (hashed device_fingerprint, session_id, coarse_geo, timestamp, role, operation_scope, ip_reputation_risk, recent_fail_counts) When the request is valid Then the response includes correlation_id, policy_version, risk_band in ["LOW","MEDIUM","HIGH"], action in ["SUPPRESS_MFA","REQUIRE_MFA","ESCALATE_MFA"], confidence (0..1), reasons[], evaluated_signals[], decision_ttl_seconds, and generated_at (ISO8601) And API processing time <= 75 ms at P95 and <= 120 ms at P99; end-to-end decision latency from SDK on a low-end Android device (2GB RAM, 4-core) over 3G <= 200 ms at P95 And client-side CPU utilization spike <= 10% for <= 150 ms and additional memory footprint <= 5 MB And no raw biometric data or precise GPS location is accepted or stored; all identifiers (device_fingerprint, user_id) are salted+hashed; decision logs contain only pseudonymous identifiers And the schema is versioned; unknown fields are ignored; invalid payloads return HTTP 400 with error_code; server errors return HTTP 503 with retry-after; responses are JSON UTF-8 And if the API is unreachable, the SDK applies offline defaults: action = REQUIRE_MFA for sensitive scopes and SUPPRESS_MFA for routine scopes when no elevated local signals are present, with reason = "offline_fallback"
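The SDK's offline-fallback branch described above is small enough to sketch directly; the scope names are hypothetical placeholders, not the real scope taxonomy:

```python
# Assumed offline-fallback rule: when /risk/decide is unreachable,
# sensitive scopes require MFA and routine scopes are suppressed unless
# an elevated local signal is present. Scope names are illustrative.
SENSITIVE_SCOPES = {"phi_export", "org_settings", "caregiver_pii"}

def offline_fallback(operation_scope: str, local_signals: set[str]) -> dict:
    if operation_scope in SENSITIVE_SCOPES or local_signals:
        action = "REQUIRE_MFA"
    else:
        action = "SUPPRESS_MFA"
    return {"action": action, "reasons": ["offline_fallback"]}
```

Failing closed on sensitive scopes while staying frictionless for routine ones preserves the product's core trade-off even without connectivity.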
Factor Orchestration & Fallback
"As a caregiver using a low-end phone, I want quick biometric or push verification with a reliable fallback so that I can authenticate even with poor connectivity."
Description

Provide a flexible MFA layer that supports biometric (Android BiometricPrompt, iOS LocalAuthentication), in-app push approvals (FCM/APNs) with number matching, and FIDO2/WebAuthn hardware keys for web and compatible mobile browsers. Implement automatic factor selection based on device capability, connectivity, and user preference, with resilient fallbacks (TOTP, limited-use backup codes, resident keys) for low-end phones and offline scenarios. Enforce rate limiting, replay protection, and secure channel binding. Ensure accessibility (screen readers, haptics) and sub‑2s average prompt round-trip on typical 3G connections. Expected outcome: quick, reliable verification on a wide range of devices without locking out users.

Acceptance Criteria
Adaptive Factor Selection on 3G Connection
Given a signed-in user initiates a sensitive operation that requires step-up verification And the device reports its factor capabilities and current connectivity state (including 3G) And the user has saved factor preferences When the orchestrator selects a factor Then it chooses the highest-assurance available factor that satisfies capability, connectivity, and preference with precedence: Hardware Key > Biometric > Push > TOTP > Backup Code And the selected factor prompt is displayed within 500 ms of the orchestration request And the average end-to-end round-trip (prompt display to server verification response) over 20 trials on typical 3G (1–2 Mbps, 150–250 ms RTT) is ≤ 2.0 s, with P95 ≤ 3.0 s And if the selected factor is unavailable, times out (> 15 s), or is declined, a fallback list is shown within 300 ms without losing context And the user can switch to any eligible fallback within two taps/clicks And on success, the session is marked with a step-up claim and an audit event is recorded within 200 ms
Biometric Verification with Accessible Prompts
Given the device supports Android BiometricPrompt or iOS LocalAuthentication and has an enrolled biometric When step-up is required and biometrics is selected by orchestration Then a native system biometric prompt is invoked with user verification required (no custom biometric UI) And access is granted only on a positive match against a fresh server challenge bound to the current session And if biometrics are unavailable, locked out, or fail three consecutive attempts, a fallback list (Push/TOTP/Hardware Key/Backup Code) is presented within 1 s And VoiceOver/TalkBack announces the factor type and reason for verification within 1 s of prompt display, with correct focus order and labels And haptics provide success/failure feedback where supported And no biometric samples/templates leave the device; only a signed assertion bound to the server challenge is transmitted
Push Approval with Number Matching and Replay Protection
Given the user has a registered device with the CarePulse app and push permissions enabled, and the device is online When step-up is required and push is selected Then the server issues a push challenge via FCM/APNs containing a cryptographic nonce and session-binding data And the initiating client displays a 2-digit random code that must be entered on the mobile app before approval And the mobile app blocks one-tap approve until the correct code is entered And the challenge expires in 60 s; at most 1 active challenge per session and at most 3 active challenges per user across sessions And responses missing a valid nonce, signature, or session binding are rejected and logged; replayed responses are rejected And if delivery fails, device is offline, or no approval occurs within 5 s, the initiating client offers TOTP/Biometric/Hardware Key fallback immediately And rate limiting allows a maximum of 5 push challenges per user per 10 minutes; excess attempts are temporarily blocked and audited
FIDO2/WebAuthn Hardware Key Enrollment and Authentication
Given the browser reports WebAuthn support and the user has a compatible platform or roaming authenticator (CTAP2) When enrolling a hardware key, the client calls navigator.credentials.create with residentKey=preferred, userVerification=required, and attestation per policy (none or direct) Then the server verifies attestation/attStmt per policy, RP ID matches the deployment host, and the credential ID/public key are stored And on authentication, navigator.credentials.get requires userVerification=required; the server validates origin, RP ID, challenge signature, and signCount/clone detection And NFC/USB/BLE roaming authenticators work on compatible mobile browsers; unsupported environments automatically offer other factors And prompts time out after 30 s; on timeout or user cancel, a fallback list appears within 500 ms without losing context
Offline Access with TOTP and Backup Codes
Given the user has enrolled a TOTP authenticator When the device initiating step-up is offline or push/biometric/hardware key are unavailable Then a 6-digit TOTP per RFC 6238 with a 30 s time step and ±1 step clock skew is accepted And after 10 consecutive invalid TOTP attempts, the factor is temporarily locked for 5 minutes and an audit event is recorded And the user can use backup codes: 10 single-use codes, at least 10 characters, high-entropy alphanumeric And each backup code is invalidated immediately upon use; code regeneration invalidates all unused prior codes and displays the new set only once And backup codes are stored only as salted hashes server-side; plaintext is never retrievable
Global Rate Limiting and Abuse Mitigation Across Factors
Given any factor verification attempts are occurring for a user or IP When invalid or excessive attempts are detected Then per-user rate limits cap invalid verifications at 5 per 5 minutes per factor and 12 across all factors; per-IP limits cap at 20 per minute; thresholds are configurable And exponential backoff is applied on repeated failures, growing to a maximum cooldown of 15 minutes And error responses are generic and timing is padded to avoid information leakage about factor existence or correctness And all throttles, denials, and lockouts emit structured audit events with reason codes and correlation IDs
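A sliding-window limiter plus a capped exponential cooldown covers both halves of the criterion above; the base interval and window sizes are configurable assumptions:

```python
from collections import deque

MAX_COOLDOWN_S = 15 * 60  # the 15-minute maximum cooldown from the criterion

def cooldown_seconds(consecutive_failures: int, base_s: int = 15) -> int:
    # Exponential backoff: base doubles per consecutive failure, capped.
    if consecutive_failures == 0:
        return 0
    return min(base_s * 2 ** (consecutive_failures - 1), MAX_COOLDOWN_S)

class SlidingWindowLimiter:
    """Allows at most `limit` events per `window_s` seconds for one key
    (e.g. user+factor, or source IP)."""
    def __init__(self, limit: int, window_s: float):
        self.limit, self.window_s = limit, window_s
        self.events: deque[float] = deque()

    def allow(self, now: float) -> bool:
        # Drop timestamps that have aged out of the window, then decide.
        while self.events and now - self.events[0] >= self.window_s:
            self.events.popleft()
        if len(self.events) >= self.limit:
            return False
        self.events.append(now)
        return True
```

Keeping the limiter keyed per user/factor and per IP separately matches the dual thresholds in the criterion.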
Secure Channel Binding and Anti‑Replay for Challenges
Given a step-up challenge is generated for any factor When the client responds to the challenge Then the response must include a server-issued, single-use, signed challenge tied to the user session and client origin/context And the server enforces one-time use and a maximum validity window of 60 s; expired or re-used challenges are rejected and audited And the accepted response establishes a cryptographic binding to the initiating session; mismatched bindings fail without altering session state And all challenge materials are deleted or marked spent immediately upon success/failure
Sensitive Action Step‑Up
"As an agency admin, I want extra verification when someone performs sensitive actions so that our data stays secure without slowing everyday work."
Description

Define and enforce step‑up MFA for high‑risk operations (e.g., exporting PHI, modifying completed visit records, changing caregiver credentials, altering compliance settings, viewing GPS route history, generating audit reports). Integrate with API and UI routes to require elevation when risk or operation scope warrants it. Grant short‑lived elevated tokens/claims after successful verification, with configurable TTL and per‑role policies. Support exemptions for recently elevated, trusted sessions to minimize friction. Expected outcome: stronger protection for critical workflows with minimal interruption to normal scheduling and documentation tasks.

Acceptance Criteria
Export PHI Step‑Up Enforcement
Given a user is authenticated but not elevated, When they initiate a PHI export via UI or API, Then a step‑up MFA challenge is required before proceeding. Given biometric, push, and hardware key factors are enabled and the user has at least one enrolled, When the user completes any enrolled factor, Then the export is permitted and an elevated claim/token is issued with the configured TTL. Given the step‑up challenge is failed, cancelled, or times out, When the user attempts to export, Then the operation is blocked and an audit event is recorded with outcome=failure and reason. Given an API export request without a valid elevated claim, When processed, Then respond HTTP 403 with error code STEP_UP_REQUIRED and do not generate a file. Given a successful export following elevation, When auditing, Then record user ID, action=PHI_EXPORT, factor type, device fingerprint, IP, timestamp, correlation ID, and outcome=success.
Modify Completed Visit Record Requires Elevation
Given a visit record has status=Completed, When a user attempts to edit and save changes via UI or API, Then a step‑up MFA challenge is required prior to save. Given per‑role policies are configured, When elevation is required, Then enforce the role’s required factor assurance and issue an elevated claim/token with the role’s TTL on success. Given an elevated claim is active on the same device/session within TTL, When subsequent edits to completed visits occur, Then no additional step‑up is required until TTL expiry. Given an API save request for a completed visit without a valid elevated claim, When processed, Then respond HTTP 403 with error code STEP_UP_REQUIRED and do not persist changes. Given a post‑elevation edit is saved, When auditing, Then record user ID, visit ID, fields changed (metadata only, no PHI values), timestamp, elevated=true, and correlation ID.
Change Caregiver Credentials Elevation
Given a user initiates a change to a caregiver’s login email, password, or MFA factors, When the action is submitted, Then a step‑up MFA challenge is required before the change is applied. Given per‑role factor requirements are defined, When prompting for step‑up, Then require a factor that meets or exceeds the role’s required assurance (biometric, push, or hardware key) and issue an elevated claim with TTL on success. Given the step‑up challenge fails or is not completed, When the change is attempted, Then the operation is blocked and an audit event is recorded with outcome=failure and reason. Given an API request to change caregiver credentials lacks a valid elevated claim, When processed, Then respond HTTP 403 with error code STEP_UP_REQUIRED and do not apply changes. Given a successful credential change after elevation, When auditing, Then log user ID, target caregiver ID, change type, factor type, timestamp, and correlation ID.
Alter Compliance Settings Elevation
Given a user attempts to modify compliance settings (e.g., visit lock rules, documentation requirements), When they click Save in UI or call the settings API, Then a step‑up MFA challenge is required before persisting changes. Given per‑role policies define TTL and factor strength, When elevation succeeds, Then issue an elevated claim bound to session/device with the configured TTL and allow the settings update. Given the elevated claim expires, When additional compliance changes are attempted, Then re‑prompt for step‑up before saving. Given a settings update request without a valid elevated claim, When processed by API, Then respond HTTP 403 with error code STEP_UP_REQUIRED and do not persist changes. Given any compliance setting is changed, When auditing, Then record user ID, setting keys affected, prior and new values (with redaction flags applied), timestamp, elevated=true, and correlation ID.
View GPS Route History Risk‑Based Elevation
Given a user requests GPS route history, When the requested time range exceeds 24 hours or includes routes for multiple caregivers, Then require step‑up MFA before returning data. Given access occurs from a new device or at an unusual time window configured by policy, When route history is requested, Then require step‑up even if the time range is within 24 hours. Given risk signals are low (known device, business hours) and time range ≤ 24 hours for the actor’s own routes, When requested, Then do not prompt for step‑up. Given an API request for high‑risk route history without a valid elevated claim, When processed, Then respond HTTP 403 with error code STEP_UP_REQUIRED and do not return data. Given elevated access to route history is granted, When auditing, Then record user ID, scope (caregivers/time range), risk reason, factor type, timestamp, and correlation ID.
Generate Audit Report Elevation and Scope Control
Given a user requests generation of an audit report that includes PHI or agency‑wide scope, When the request is submitted via UI or API, Then a step‑up MFA challenge is required prior to report generation. Given elevation succeeds, When the report is generated, Then the download/view endpoints require a valid elevated claim and return HTTP 403 STEP_UP_REQUIRED if absent or expired. Given elevation occurred within TTL, When the user generates additional audit reports of equal or lesser sensitivity, Then no additional prompt is shown until TTL expires. Given a report request is cancelled or step‑up fails, When processing, Then no report is generated and an audit event is recorded with outcome=failure and reason. Given a report is generated and accessed after elevation, When auditing, Then log user ID, report type/scope, factor type, timestamp, and correlation ID.
Elevated Token TTL, Claims, and Exemptions
Given a step‑up verification succeeds, When issuing elevation, Then create an elevated claim/token bound to the user session and device fingerprint with an exp set to the configured TTL (e.g., 5–30 minutes per role policy). Given an elevated claim is active, When the user performs additional actions of the same or lower sensitivity, Then exempt the user from additional prompts until TTL expiry; actions of higher sensitivity may re‑prompt per policy. Given a material risk change occurs (e.g., IP change, device fingerprint change, role change), When detected, Then immediately invalidate the elevated claim and require re‑elevation on the next sensitive action. Given configuration for TTL and factor requirements is updated, When new elevations occur, Then apply the new policy immediately; existing elevated claims retain their original exp. Given any elevation is issued or invalidated, When auditing, Then record user ID, factor type, TTL, binding attributes, reason, timestamp, and correlation ID.
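The claim lifecycle above (issue bound to session and device, expire at TTL, fail closed on any binding mismatch or revocation) is easy to make concrete; the field names are assumptions rather than the real token schema:

```python
# Sketch of the elevated-claim lifecycle: a claim is bound to session and
# device fingerprint, carries iat/exp per the role's TTL policy, and is
# invalidated on a material risk change. Field names are illustrative.
def issue_claim(user_id: str, session_id: str, device_fp: str,
                ttl_s: int, now: float) -> dict:
    return {"user_id": user_id, "session_id": session_id, "device_fp": device_fp,
            "iat": now, "exp": now + ttl_s, "revoked": False}

def claim_valid(claim: dict, session_id: str, device_fp: str, now: float) -> bool:
    # Binding mismatch, expiry, or revocation all fail closed.
    return (not claim["revoked"]
            and now < claim["exp"]
            and claim["session_id"] == session_id
            and claim["device_fp"] == device_fp)

def invalidate_on_risk_change(claim: dict) -> None:
    # e.g. IP change, device fingerprint change, role change.
    claim["revoked"] = True
```

In production this would be a signed token (e.g. a JWT claim) rather than a mutable dict, but the validation rules are the same.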
Device Trust & New Device Detection
"As a user, I want to be alerted and re-verified when my account is accessed from a new device so that I can block unauthorized access quickly."
Description

Introduce device registration and trust management to identify known devices and flag anomalies. Maintain a per-user trusted device list with metadata (OS, model, last seen), detect new/changed devices via fingerprinting and platform identifiers, and leverage optional device attestation (Play Integrity/SafetyNet, Apple DeviceCheck) plus jailbreak/root indicators where available. Trigger step‑up on first use, material changes, or risk spikes; notify users and admins on new device sign-ins; and provide self‑service revoke/rename. Designed to work on low-end Android devices with minimal storage and intermittent connectivity. Expected outcome: rapid containment of unauthorized access while reducing prompts on familiar devices.

Acceptance Criteria
New Device First Sign-In Step-Up and Trust Enrollment
Given a user with valid credentials signs in from a device not on their trusted list When primary authentication succeeds Then the user is prompted for step-up MFA before access is granted And the step-up prompt offers at least one available method supported on the device (biometric, push, or hardware key) based on capability and prior enrollment And if the user completes step-up successfully Then the device is added to the user's trusted list with metadata captured: OS name/version, device model, device fingerprint/identifier, attestation verdict (if available), jailbreak/root indicator, first seen timestamp (ISO 8601 UTC) And if the user cancels or fails step-up Then access is denied and the device is not added to the trusted list
Material Device Change Re-Verification
Given a previously trusted device attempts sign-in When a material change is detected (device fingerprint mismatch, device model change, OS major version change, attestation verdict degradation, or root/jailbreak newly detected) Then require step-up MFA before granting access And upon successful step-up Then update the trusted device record with new metadata and set last seen timestamp; retain history of prior fingerprint and change reason And upon step-up failure Then access is denied and an alert is generated per notification policy
Device Attestation and Integrity Signals Handling
Given the platform supports device attestation (Android Play Integrity/SafetyNet or Apple DeviceCheck) When a device signs in Then the system requests attestation and records the result with the device metadata And when attestation indicates compromised/low integrity or root/jailbreak is detected Then a step-up MFA is required even if the device is otherwise trusted And when attestation is unavailable Then the system proceeds without blocking but flags the device record as "attestation: unavailable" and includes it in risk evaluation
User and Admin Notifications on New Device Sign-In
Given a new or untrusted device sign-in occurs When primary authentication succeeds (before access is granted) Then send a notification to the user and designated admins within 60 seconds that includes: user, device model, OS, approximate location (city/region from IP), time (ISO 8601 UTC), and action links (review, revoke) And if the user or admin clicks revoke Then the device is immediately removed/blocked from the trusted list and active sessions from that device are invalidated within 30 seconds
Self-Service Trusted Device Rename and Revoke
Given a signed-in user navigates to Device Settings When they view Trusted Devices Then the list shows each device's nickname, model, OS, attestation status, last seen timestamp, and current trust status And when the user renames a device Then the new nickname is saved and shown across sessions within 5 seconds of confirmation And when the user revokes a device Then the device's refresh tokens are invalidated and any subsequent access from that device requires step-up; the revoke event is logged with actor, timestamp, and device identifier
Intermittent Connectivity and Low-End Device Constraints
Given a low-end Android device with intermittent connectivity attempts sign-in When network drops during step-up initiation or completion Then the UI preserves progress and retries automatically up to 3 times over 2 minutes, offering a manual retry without losing state And local storage used by device trust metadata on the client remains under 100 KB per user, and the app functions within 50 MB RAM during the step-up flow And when connectivity is restored Then pending step-up completes without requiring the user to re-enter primary credentials unless the session has expired
Audit Logging and Admin Reporting for Device Trust Events
Given the system processes device trust events When events occur (new device detected, step-up challenge issued/completed/failed, rename, revoke, material change) Then each event is logged with fields: user ID, device ID, event type, outcome, timestamp (ISO 8601 UTC), IP, and change reason where applicable And when an admin requests the Device Trust report for a date range Then the system returns exportable results (CSV/JSON) containing the logged fields and filters by user, device, event type, and outcome And a report query spanning at least 1,000 events returns within 5 seconds for a 30-day range on a standard admin account
Audit Trail & Compliance Reporting
"As a compliance officer, I want audit-ready records of MFA decisions so that we can pass audits and investigate incidents."
Description

Capture immutable, tamper‑evident logs for all risk decisions and MFA events, including timestamp, operation scope, risk band, factor used, device class (non‑PII), coarse location/IP, and policy version applied. Store with signed hashes and clock synchronization. Provide one‑click, audit‑ready reports integrated with CarePulse reporting (CSV/PDF) and export to SIEM via webhook/API. Enforce least‑data logging to avoid sensitive content while meeting HIPAA/SOC2 evidence needs. Include filters by user, device, factor, outcome, and time range. Expected outcome: transparent traceability for audits and incident response without compromising privacy.

Acceptance Criteria
Log Completeness for Risk Decisions and MFA Events
Given any risk decision or MFA event occurs in Step‑Up MFA When the event is persisted to the audit log Then the entry contains: timestamp (UTC ISO 8601 ms), user_id, operation_scope, risk_band, factor_used, device_class, coarse_location, source_ip, policy_version, outcome And Given a new device, unusual time, or elevated scope triggers step‑up When the event is logged Then risk_band reflects the calculated band and factor_used reflects the factor actually presented And Given schema validation runs When ingesting an audit entry Then all required fields are present and conform to allowed enumerations and formats And Given high throughput (>= 200 events/sec) When events are generated Then each entry is written within 2 seconds of event completion
Tamper‑Evident Storage and Integrity Verification
Given an audit log entry is created When it is stored Then a SHA‑256 hash of the entry and a prev_hash are recorded and the record is signed with signing_key version And Given an integrity verification job runs When verifying a time range Then the hash chain validates end-to-end and any break is flagged with severity=high And Given storage policies are enforced When attempting to update or delete an audit entry within the retention window Then the operation is rejected and the attempt itself is logged And Given signing key rotation When verifying entries across key versions Then signatures validate using the recorded key_version metadata
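The hash chain described above can be sketched in Python as follows. This is a minimal illustration: the genesis sentinel and record layout (`entry`, `prev_hash`, `hash`) are assumptions, and signing-key handling is omitted for brevity.

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed sentinel for the first entry's prev_hash


def entry_hash(entry: dict, prev_hash: str) -> str:
    """SHA-256 over the canonicalized entry concatenated with the previous hash."""
    payload = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()


def append(chain: list, entry: dict) -> None:
    """Append a new record whose hash covers the prior record's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"entry": entry, "prev_hash": prev, "hash": entry_hash(entry, prev)})


def verify(chain: list) -> bool:
    """Re-derive every hash end-to-end; any tampering breaks the chain."""
    prev = GENESIS
    for rec in chain:
        if rec["prev_hash"] != prev or entry_hash(rec["entry"], prev) != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

Because each record's hash covers its predecessor, mutating any stored entry invalidates every later link, which is what lets the verification job flag a break with severity=high.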
Clock Synchronization and Timestamp Accuracy
Given NTP is configured on all logging nodes When timestamps are generated Then timestamps are in UTC ISO 8601 with millisecond precision and node clock offset remains within ±500 ms And Given offset exceeds ±500 ms When logging an entry Then the entry is stamped with unsynced_clock=true and an alert is emitted within 1 minute And Given the primary time source is unavailable When logging continues Then a secondary time source is used and time_source=secondary is recorded without data loss
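The clock-health stamping described above might look like this, assuming the measured NTP offset is supplied by a separate monitoring process; the field names are illustrative.

```python
from datetime import datetime, timezone


def stamp(entry: dict, offset_ms: float, time_source: str = "primary",
          max_skew_ms: float = 500.0) -> dict:
    """Attach a UTC ISO 8601 millisecond timestamp plus clock-health metadata.

    offset_ms is the node's measured clock offset from NTP (an assumption:
    obtained elsewhere). Entries drifting beyond ±max_skew_ms are flagged
    unsynced_clock=true rather than dropped.
    """
    now = datetime.now(timezone.utc)
    entry["timestamp"] = now.isoformat(timespec="milliseconds").replace("+00:00", "Z")
    entry["unsynced_clock"] = abs(offset_ms) > max_skew_ms
    entry["time_source"] = time_source
    return entry
```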
One‑Click Audit‑Ready Reports with Filters (CSV/PDF)
Given a Reporting role user selects a time range and filters (user, device_class, factor_used, outcome) When Generate Audit Report is clicked Then CSV and PDF are produced within 30 seconds for up to 1,000,000 events and are available in CarePulse Reporting And Given a generated report When inspecting its contents Then each row includes: timestamp, user_id, operation_scope, risk_band, factor_used, device_class, coarse_location, source_ip, policy_version, outcome And Given report metadata When opening the report header Then applied filters, generation timestamp, verification summary (hash chain result), and policy version range are displayed And Given CSV format When validating Then it conforms to RFC 4180 and the PDF contains selectable text
SIEM Export via Webhook/API
Given a SIEM webhook URL and HMAC secret are configured When new audit events are created Then events are pushed in batches within 10 seconds latency, each request HMAC‑SHA256 signed with key_id and timestamp And Given network or 5xx errors occur When delivery fails Then retries use exponential backoff for up to 24 hours; undeliverable batches are moved to a dead‑letter queue and an alert is sent And Given the SIEM returns 2xx When receiving a batch Then delivery is marked successful and no further retries occur And Given the SIEM API pull is used When querying /audit-events with filters (user, device_class, factor_used, outcome, time_range) Then results are paginated, ordered by timestamp desc, and support a next_cursor for continuation And Given duplicate deliveries When the same event is received Then idempotency is ensured via a stable event_id per entry
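A minimal sketch of the signed delivery and retry budget above, assuming a `timestamp.body` signing string; the exact request layout is an assumption, not a defined wire format.

```python
import hashlib
import hmac
import json
import time


def sign_batch(secret: bytes, key_id: str, events: list) -> dict:
    """Build a signed delivery request: HMAC-SHA256 over timestamp + body."""
    body = json.dumps(events, sort_keys=True, separators=(",", ":"))
    ts = str(int(time.time()))
    sig = hmac.new(secret, (ts + "." + body).encode(), hashlib.sha256).hexdigest()
    return {"key_id": key_id, "timestamp": ts, "signature": sig, "body": body}


def verify_batch(secret: bytes, req: dict) -> bool:
    """Receiver-side check using a constant-time comparison."""
    expected = hmac.new(secret, (req["timestamp"] + "." + req["body"]).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, req["signature"])


def backoff_schedule(base: float = 1.0, cap: float = 3600.0, limit_s: float = 86400.0):
    """Exponential backoff delays (seconds), capped, totaling at most 24 hours;
    a batch still undelivered after the schedule goes to the dead-letter queue."""
    delay, total = base, 0.0
    while total + delay <= limit_s:
        yield delay
        total += delay
        delay = min(delay * 2, cap)
```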
Least‑Data Logging and Privacy Safeguards
Given least‑data logging is required When capturing an audit entry Then no PHI/PII content, voice transcripts, free‑text notes, GPS coordinates, device identifiers, or exact addresses are stored And Given device and location fields When persisting device_class and location Then device_class is limited to OS family and form factor; coarse_location is limited to city and country or geohash precision 5; IPv4 is masked to /24 and IPv6 is masked to /48 And Given schema enforcement When an entry contains fields outside the approved schema Then ingestion is rejected and the violation is logged And Given periodic validation When running a DLP scan on a random sample of 10,000 recent entries Then zero violations are detected
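The IP-masking rule (/24 for IPv4, /48 for IPv6) can be implemented with the standard `ipaddress` module; `coarse_location` is a hypothetical helper showing the city/country restriction.

```python
import ipaddress


def mask_ip(ip: str) -> str:
    """Mask IPv4 addresses to /24 and IPv6 addresses to /48 before logging."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return f"{net.network_address}/{prefix}"


def coarse_location(city: str, country: str) -> dict:
    """Keep only city/country; never street addresses or GPS coordinates."""
    return {"city": city, "country": country}
```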
Admin Policies & Analytics Dashboard
"As a tenant admin, I want to configure MFA policies and monitor outcomes so that I can balance security with caregiver productivity."
Description

Deliver a tenant-level admin UI to configure Step‑Up MFA policies: allowed factors by role, risk thresholds, operation-to-scope mappings, elevation TTL, push timeouts, and fallback allowances. Provide guardrails and previews to prevent lockouts and simulate impact. Offer analytics on prompt frequency, success rates, average latency, factor adoption, and anomalies, with drill-down to user/device level. Support per-environment settings and versioned policy changes with rollback. Expected outcome: admins can tune security to match caregiver workflows, improving protection while minimizing friction.

Acceptance Criteria
Role-Based Factor Configuration with Guardrails
Given a tenant admin with Security Admin permissions is on the Policies page for environment E When the admin configures allowed primary factors (biometric, push, hardware key) and optional fallback allowances by role Then the Save action is enabled only if every role retains at least one primary factor or a configured fallback that is compatible with ≥95% of known active devices for that role And a lockout guardrail blocks Save if any role would have 0 available factors on ≥1% of known active devices, displaying a list of affected roles and device types And a preview panel updates in real time to show estimated user coverage by factor, segmented by role and device capability And the change summary (role, factors toggled, guardrail outcomes) is recorded in the audit log with admin ID, timestamp, and environment
Risk Threshold Tuning with Impact Simulation
Given the admin adjusts risk thresholds (e.g., new device, unusual time, geo-velocity) and selects operation-to-scope mappings for environment E When the admin runs an impact simulation over the last 30 days of tenant events Then the system estimates per-role prompt frequency, success rate, average latency, and predicted false-positive rate, and displays the top 3 affected operations And the simulation completes within 5 seconds for datasets up to 100k events And if predicted prompts exceed 3 per user per day for any role, the system requires explicit confirmation with a friction warning before Save is allowed And the final saved thresholds are stored and versioned with a diff against the prior policy
Operation-to-Scope Mapping and Elevation TTL Enforcement
Given the admin maps specific operations (e.g., view PHI, edit policies, export reports) to elevated scopes and sets elevation TTL per scope for environment E When a user initiates a mapped operation without a valid elevation in the current TTL window Then Step‑Up MFA is required and the reason code includes scope, operation, and TTL expiration And if the user was recently elevated and TTL has not expired, no additional prompt is triggered And changes to mappings/TTLs appear in policy preview and are auditable with versioning metadata
Push Timeout and Fallback Allowance Policies
Given the admin configures push approval timeout (e.g., 15–90 seconds), max retries, and allowed fallback factors per role for environment E When a user does not approve a push within the configured timeout Then the client is offered the configured fallback factors in priority order without restarting the original operation And analytics record timeout occurrences, retries, and fallback factor used with timestamps and device identifiers And policy Save is blocked if no fallback is available for roles where push is the only primary factor And the effective timeout and fallback behavior are reflected in the policy preview
Analytics Overview with Drill‑Down and Export
Given the admin opens the Analytics dashboard and selects a date range and environment E When viewing metrics Then the dashboard displays prompt frequency, success rate, average latency, factor adoption by role, and anomaly alerts (e.g., 3σ spikes) with trend lines And filters are available for role, factor type, operation/scope, device platform, and location, applying within 2 seconds for result sets up to 100k events And drill‑down to user and device level shows a chronological timeline of prompts with factor used, outcome, latency, reason code, device model/OS/app version And CSV export of the currently filtered view is available, completes within 10 seconds for up to 50k rows, and redacts PII according to admin’s data access permissions
Versioned Policy History with One‑Click Rollback (Per‑Environment)
Given policies are edited in environment E by an authorized admin When the admin saves changes Then a new immutable policy version is created with version ID, timestamp, admin ID, and change summary, and appears in History with a human‑readable diff And selecting Rollback on a prior version restores that version atomically, records the rollback event in the audit log, and triggers guardrail checks before activation to prevent lockout And policy versions and rollbacks are isolated per environment; editing or rolling back in E does not affect other environments And after rollback, the effective policy in E matches the restored version byte‑for‑byte (hash equality)

Access Ledger

A real‑time, human‑readable timeline of who accessed what, when, from where, and under which scope or override. Includes filters, anomaly highlights, and one‑click, audit‑ready exports for payers and state reviews. Gives Compliance Sentinels and Agency Principals instant visibility and simplifies audit prep.

Requirements

Real-time Event Ingestion & Normalization
"As a Compliance Sentinel, I want every access across CarePulse to be captured in a consistent, timely format so that I can reliably review who did what without gaps or ambiguity."
Description

Implement a low-latency event pipeline that captures and normalizes all access events across mobile apps, web, APIs, background jobs, and integrated IoT sensors into a unified schema. Each event must include actor identity, role/scope at time of access, resource type and identifier, action (read/write/export/delete), timestamp (UTC), IP/device fingerprint, geolocation (if permitted), session/correlation IDs, result (success/failure), and override/justification metadata. Ensure at-most-once display with exactly-once storage semantics, sub-2s end-to-ledger latency for 95% of events, and durable, append-only persistence. Integrate with CarePulse identity and RBAC so scope is resolved at event time, and mask PHI fields while preserving human-readable context in the UI.

Acceptance Criteria
Unified Event Schema Completeness and Normalization
Rule-Oriented:
- Every persisted event contains non-empty: actor_id, role_scope_at_access, resource_type, resource_id, action ∈ {read, write, export, delete}, timestamp_utc (ISO 8601 Z), session_id, correlation_id, result ∈ {success, failure}, and at least one of {ip_address, device_fingerprint}.
- If geolocation_permission=false, geolocation is null; if true and provided, geolocation is stored as (lat, lon) with 5-decimal precision.
- If override_flag=true, override_type and justification_text are non-empty; if override_flag=false, both are null.
- Timestamps are normalized to UTC and stored as both iso_utc and epoch_ms.
- Field names and enumerations are consistent across sources (mobile, web, API, jobs, IoT).
Cross-Source Ingestion Coverage (Mobile, Web, API, Jobs, IoT)
Given authenticated clients for mobile, web, public API, background job, and IoT sensor When each source emits a valid access event with required fields Then the pipeline ingests, normalizes, and persists each event and it is visible in the Access Ledger with correct source attribution. And events belonging to the same logical request share a correlation_id; distinct sessions have distinct session_id values. And for IoT-originated events, actor_id represents the device identity and role_scope_at_access resolves to the supervising service scope.
End-to-Ledger Latency SLO (P95 < 2s)
Given a sustained workload of up to 100 events/second for 10 minutes with ≤1% packet loss When measuring end-to-ledger latency from source timestamp to first appearance in the Access Ledger list Then P95 latency ≤ 2 seconds and P99 latency ≤ 5 seconds. And latency metrics are emitted with percentiles and alerts fire if P95 > 2s for 5 consecutive minutes.
Exactly-Once Storage and At-Most-Once Display
Given the same event is submitted 3 times within 10 minutes with identical event_id and idempotency_key When the pipeline processes these submissions Then storage contains exactly one persisted event record. And the Access Ledger UI lists the event once across refreshes and pagination. And replay/reprocessing of the event stream does not create duplicates. And deduplication does not drop legitimately distinct events with different event_id or payload hash.
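A toy in-memory sketch of the exactly-once storage rule, assuming `event_id` is the stable idempotency key; payload-hash comparison for distinguishing legitimately distinct events is omitted here.

```python
class LedgerStore:
    """Append-only store with exactly-once semantics keyed by event_id."""

    def __init__(self):
        self._events = {}  # event_id -> event; dict preserves insertion order

    def ingest(self, event: dict) -> bool:
        """Persist the event once; duplicate event_ids are deduplicated.

        Returns True if the event was stored, False if it was a duplicate.
        """
        eid = event["event_id"]
        if eid in self._events:
            return False
        self._events[eid] = event
        return True

    def list_events(self) -> list:
        """Each stored event appears exactly once, in ingestion order."""
        return list(self._events.values())
```

Replaying the same submission any number of times leaves exactly one stored record, which is what keeps the ledger UI at most-once display across refreshes and pagination.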
Durable Append-Only Persistence and Immutability Controls
Rule-Oriented:
- No API or admin UI allows UPDATE or DELETE of persisted events; attempts return 405/Forbidden and are audited.
- The storage layer enforces append-only semantics; direct mutations are blocked by permissions or constraints.
- Acknowledged writes survive process crash and single-node failure; post-failover consistency shows no lost acknowledged events.
- Event identifiers are globally unique (UUIDv4) and indexed for integrity and retrieval.
RBAC Scope Resolution and Override Capture at Event Time
Given an actor’s role changes after an access event occurs When querying the stored event Then role_scope_at_access reflects the role at event time, not the current role. And when access is performed under an approved override Then override_flag=true and justification_text and override_type are populated. And result reflects the authorization outcome (success for permitted, failure for denied). And for service/API tokens, scope is derived from token claims resolved at event time.
UI PHI Masking with Context Preservation
Given an event whose context includes PHI fields (e.g., patient name, DOB, address, MRN) When viewing the event in the Access Ledger UI Then PHI values are masked server-side (e.g., name → initials, DOB → year, MRN → last 4) and not sent unmasked to the client. And resource_type and resource_id remain fully visible and searchable to preserve human-readable context. And screenshots/exports initiated from the UI reflect the masked values.
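The masking examples above (name → initials, DOB → year, MRN → last 4) could look like this on the server side; the field names are hypothetical, not CarePulse's actual schema.

```python
def mask_phi(event: dict) -> dict:
    """Server-side PHI masking so unmasked values never reach the client.

    Field names (patient_name, dob, mrn) are illustrative assumptions.
    Non-PHI fields such as resource_id pass through untouched to preserve
    searchable, human-readable context.
    """
    masked = dict(event)
    if "patient_name" in masked:
        masked["patient_name"] = "".join(
            part[0].upper() + "." for part in masked["patient_name"].split())
    if "dob" in masked:  # expects ISO date "YYYY-MM-DD"; keep only the year
        masked["dob"] = masked["dob"][:4]
    if "mrn" in masked:  # keep only the last 4 digits
        masked["mrn"] = "***" + masked["mrn"][-4:]
    return masked
```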
Human-Readable Access Timeline UI
"As an Agency Principal, I want a readable, live access timeline so that I can quickly understand recent activity without deciphering technical logs."
Description

Deliver a mobile-first, real-time timeline that renders access events in clear, natural language with recognizable actor avatars, resource names, action icons, and relative/absolute timestamps. Provide infinite scroll, sticky summary header (selected filters, counts, time window), and an expandable event details drawer showing full context and raw JSON when needed. Support quick actions (copy event ID, open resource, flag event) and accessibility (WCAG AA, large text, high contrast). Maintain P95 timeline load under 500 ms for the last 24 hours of activity, with live updates via websockets/server-sent events.

Acceptance Criteria
Render Human-Readable Event Rows on Mobile
Given an access event with actor, resource, action, timestamp, location, and scope/override, When the timeline loads on a mobile viewport (375–430 px width), Then each row displays actor avatar (fallback initials if missing), actor name, action icon+verb, resource name/type, relative timestamp for <24h and absolute timestamp otherwise, and a brief location summary. Given the user toggles timestamp format, When the toggle is activated, Then all visible timestamps switch between relative and absolute formats within 200 ms and remain consistent during live updates. Given long actor or resource names, When content exceeds its container, Then text truncates with an ellipsis and the full value is available via long‑press and screen-reader label without layout shift.
Infinite Scroll with Fast Initial Load
Given the last 24h dataset (≤20k events, median payload ≤1 KB), When a user opens the timeline, Then P95 time-to-interactive for the initial 50 events is ≤500 ms over 200 measured loads on a mid‑tier mobile device and typical network. Given the user scrolls within 200 px of the end of the list, When the next page is requested, Then the next 50 events append within 300 ms P95, no duplicates or gaps appear, and scroll anchoring is preserved with a visible skeleton loader until data arrives. Given a pagination network error, When it occurs, Then a non-blocking error banner with a Retry action is shown and a successful retry appends the correct page without jumping the scroll position.
Sticky Summary Header with Filters, Counts, and Time Window
Given the user scrolls the timeline, When the list content moves, Then a sticky header remains fixed showing selected filter chips, visible time window, and counts in the format “Filtered N of M events”. Given the user applies or removes filters via chips or the filter panel, When changes are applied, Then the header updates counts and chip values within 200 ms, the list scrolls to top, and a screen-reader announcement summarizes the new result count. Given the “Anomalies only” filter is toggled, When applied, Then only events with anomaly=true are shown with a highlight badge and the header count equals the anomaly count. Given the user taps Clear All, When executed, Then all filters are removed, the header shows “All events”, and the count equals total M.
Event Details Drawer with Raw JSON
Given an event row, When the row is tapped, Then a bottom drawer opens within 200 ms showing actor, resource, action, absolute timestamp with timezone, IP/device, location, scope/override, request/event IDs, and links to open the resource. Given the Raw JSON tab is selected, When displayed, Then the event payload is shown in a monospaced, selectable view with sensitive fields redacted per policy and supports Copy All. Given the drawer is dismissed by swipe or Close, When closed, Then focus returns to the originating row and the list position is unchanged.
Quick Actions: Copy ID, Open Resource, Flag Event
Given an event row or its details drawer, When Copy Event ID is tapped, Then the event ID is copied to clipboard and a confirmation toast appears within 1 s. Given Open Resource is tapped, When navigation occurs, Then the corresponding resource detail opens in the same workspace with back navigation returning to the same scroll position and highlighted row. Given Flag Event is tapped and a reason is provided, When confirmed, Then the event shows a Flagged badge in the list, appears in the Flagged filter, and an audit entry is created with actor, timestamp, and reason.
Live Updates via WebSocket/SSE
Given the timeline is open, When new events are emitted, Then a New events indicator appears within 2 s and tapping it prepends the new events without resetting the user’s current scroll context. Given the real-time connection drops, When retries are attempted, Then the client reconnects with exponential backoff and falls back to 15 s polling after 3 failures without inserting duplicates. Given filters are active, When live events arrive, Then only events matching current filters appear and header counts update accordingly.
Accessibility and WCAG AA Compliance
Given WCAG 2.1 AA, When evaluated, Then text and essential icons meet contrast ≥4.5:1 (text) and ≥3:1 (icons), interactive targets are ≥44×44 dp, and dynamic type up to 200% preserves content without clipping essential information. Given a screen reader is enabled, When navigating the list, Then each row announces actor, action, resource, timestamp, and flagged/anomaly status as a single label; header chips and drawer controls have correct roles, names, and logical focus order. Given keyboard or switch control is used, When interacting, Then all actions (open details, copy ID, open resource, flag, filter, toggle timestamps) are reachable without gesture-only interactions and focus is never trapped.
Advanced Filters & Saved Views
"As a Compliance Sentinel, I want powerful filters and saved views so that I can isolate relevant events quickly and reuse common audit queries."
Description

Provide multi-facet filtering and search across actor, role/scope, resource type, patient/client, action, date/time range, location/IP/device, outcome, override flags, and anomaly tags. Include free-text search over actor and resource names, plus prefix search on IDs. Enable combinable filters with AND/OR, time bucketing, quick presets (Last 24h, Shift Hours, Last Audit), and the ability to save and share named views with permissions. Ensure filter operations and pagination are performant on datasets up to 1M events with server-side query execution.

Acceptance Criteria
Combine Multi‑Facet Filters with AND/OR and Grouping
Given a ledger containing events across actor, role/scope, resource type, patient/client, action, outcome, override flags, anomaly tags, and location/IP/device When a user applies role=RN AND action=View Then only events matching both predicates are returned and the total results count matches the server-reported count Given the same dataset When a user applies outcome=Denied OR override_flag=true Then the result set includes any event matching either predicate and excludes events matching neither Given three logical groups When the user builds (role=RN AND action in [View, Edit]) OR (patient_id starts with "P-12") AND (resource_type=CarePlan) Then the result set equals the truth table of the composed expression and no evaluation errors occur Given an invalid expression (e.g., unbalanced parentheses) When the user attempts to apply it Then the system blocks execution, displays a validation message, and leaves the previously applied filters intact
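One way to represent combinable AND/OR filters is a small expression tree; the tuple encoding and operator names here are assumptions for illustration (a production system would compile such a tree into a server-side query rather than filter client-side).

```python
def match(event: dict, expr) -> bool:
    """Evaluate a nested filter expression against one event.

    expr is ("and", [subexprs]), ("or", [subexprs]), or a
    (field, op, value) leaf -- a hypothetical encoding for this sketch.
    """
    if expr[0] == "and":
        return all(match(event, sub) for sub in expr[1])
    if expr[0] == "or":
        return any(match(event, sub) for sub in expr[1])
    field, op, value = expr
    actual = event.get(field)
    if op == "eq":
        return actual == value
    if op == "in":
        return actual in value
    if op == "prefix":  # ID prefix search, e.g. "CP-100"
        return isinstance(actual, str) and actual.startswith(value)
    raise ValueError(f"unknown operator: {op}")
```

For example, (role=RN AND action in [View, Edit]) OR (patient_id starts with "P-12") becomes a two-branch `or` node; an unbalanced or malformed tree raises before execution, matching the validation behavior above.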
Free‑Text Search and ID Prefix Matching
Given events with actor_display_name and resource_display_name populated When a user enters a case-insensitive substring (e.g., "maria lopez") in free-text search Then events whose actor_display_name or resource_display_name contain the substring are returned Given events with actor_id and resource_id values (e.g., "USR-00421", "CP-1007A") When a user types an ID prefix (e.g., "CP-100") in the ID search field Then only events with IDs beginning with that prefix are matched Given active facet filters and a free-text or ID prefix query When the query executes Then results satisfy both the text/prefix search and all selected filters Given the free-text field is cleared When the user removes the query Then text and prefix constraints are removed from the filter state
Date/Time Range, Bucketing, and Quick Presets
Given a user selects an absolute date/time range When applying start=2025-09-01T00:00 and end=2025-09-02T00:00 in the app's displayed timezone Then only events with timestamps in [start, end) are returned Given a time bucket option is available (minute, hour, day) When the user selects "hour" Then the system aggregates counts per hour for the currently filtered result set and displays the correct histogram totals Given quick presets are provided (Last 24h, Shift Hours, Last Audit) When the user selects a preset Then the corresponding date/time range is applied: Last 24h = now-24h..now; Shift Hours = the agency-configured shift window for the current day; Last Audit = the start/end timestamps of the most recent audit period Given the user switches between presets and a custom range When each is applied Then the filter UI reflects the selection and results update accordingly without stale ranges persisting
Filter by Outcome, Override Flags, Anomaly Tags, and Location/IP/Device
Given events include outcomes {Success, Denied, Error} When filtering by outcome=Denied Then only events with Denied outcome are returned Given events include override_flag true/false When filtering override_flag=true Then only events with override_flag=true are returned Given events include anomaly tags (e.g., GeoMismatch, UnusualTime) When filtering anomaly_tags includes GeoMismatch Then only events tagged GeoMismatch are returned Given events include IP addresses When filtering for an exact IP (e.g., 203.0.113.55) Then only events with that IP are returned Given events include IP ranges When filtering for a CIDR block (e.g., 10.0.0.0/24) Then only events whose IP falls within that range are returned Given events include device metadata (e.g., device_type=mobile/web) When filtering device_type=mobile Then only events captured from mobile devices are returned
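The exact-IP and CIDR filters can both be expressed as network-membership tests, treating an exact IP as a /32 (or /128 for IPv6) block; a sketch using the standard `ipaddress` module:

```python
import ipaddress


def ip_filter(events: list, cidr: str) -> list:
    """Keep events whose source IP falls inside the given CIDR block.

    An exact-IP filter is just the degenerate /32 (or /128) block, so one
    code path serves both acceptance criteria above.
    """
    net = ipaddress.ip_network(cidr, strict=False)
    return [e for e in events if ipaddress.ip_address(e["ip"]) in net]
```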
Save, Load, and Share Named Views with Permissions
Given a user has applied filters, search, logic groups, date range, bucket, and sort When the user saves the configuration as a named view (e.g., "Night Shift Overrides") Then the view appears in the user's Saved Views and reloading it restores the exact state Given the owner shares a view with a role (e.g., Compliance Sentinels) as view-only When a member of that role opens the view Then they can load and use it but cannot overwrite it; attempts to save require "Save As" Given a private view When a non-permitted user attempts to access it (including via a direct link) Then access is denied and the view is not listed for that user Given an owner updates or deletes a saved view When changes are confirmed Then shared users see the updated definition on next load or lose access upon deletion
Server‑Side Query Execution and Pagination Performance at 1M Events
Given a dataset with 1,000,000 events in the ledger When executing any supported filter combination that returns a first page of 50 items Then the server responds within P95 <= 1.5s and P99 <= 2.5s, and the UI renders without client-side timeouts Given pagination controls with page sizes 25, 50, 100 and default sort timestamp desc When requesting subsequent pages for the same query Then P95 server response per page is <= 800ms and ordering remains stable (timestamp desc, tie-breaker event_id) Given network inspection of the client during filtering and pagination When a query is executed Then only server-side filtered endpoints are called (no full-dataset downloads), and each page payload is limited to the selected page size plus minimal metadata Given a query exceeds configured timeouts When it is terminated Then the user sees a clear timeout message with retry option and the UI remains responsive
Anomaly Detection & Highlighting
"As a Compliance Sentinel, I want anomalous access to be automatically highlighted so that I can prioritize review and intervene faster."
Description

Implement rule-based anomaly detection to flag events such as after-hours access outside assigned shift, access from new or distant geolocation/IP, repeated failed logins, bulk record viewing, export surges, and overrides without justification. Assign severity levels, visually highlight anomalies in the timeline, and provide an explanation and evidence for each flag. Allow admins to tune thresholds, whitelist known devices/locations, and mute specific rules per user or resource. Store anomaly tags with events for filtering and reporting.

Acceptance Criteria
Flag After-Hours Access Outside Assigned Shift
Given a user has an assigned shift [S,E] and local timezone Z And an access event by that user is recorded at timestamp T When T is outside [S,E] by more than the configurable grace period (default 10 minutes) Then the rule "after_hours_access" is triggered And the event is tagged with anomaly_id, rule="after_hours_access", severity=Medium if |T−nearest_boundary| <= 120 minutes else High And the anomaly explanation lists user_id, shift window, event timestamp, and time delta And the anomaly evidence references event_id, resource_id, schedule_id, and normalized timezone And the timeline displays a severity-colored highlight and icon on the event
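The after-hours rule, including the 10-minute grace period and the 120-minute severity boundary, reduces to two small time comparisons; timezone normalization is assumed to happen upstream.

```python
from datetime import datetime, timedelta


def is_after_hours(event_ts: datetime, shift_start: datetime,
                   shift_end: datetime, grace_min: int = 10) -> bool:
    """True when the event falls outside the shift window [S, E] by more
    than the configurable grace period (default 10 minutes)."""
    grace = timedelta(minutes=grace_min)
    return event_ts < shift_start - grace or event_ts > shift_end + grace


def severity(event_ts: datetime, shift_start: datetime,
             shift_end: datetime) -> str:
    """Medium when within 120 minutes of the nearest shift boundary, else High."""
    delta = min(abs(event_ts - shift_start), abs(event_ts - shift_end))
    return "Medium" if delta <= timedelta(minutes=120) else "High"
```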
Detect New or Distant Geolocation/IP
Given the system stores each user's last-30-day known device fingerprints, IPs, ASNs, and geolocation clusters And an access event arrives with source IP P, device fingerprint F, and geolocation G When P/ASN or G is not in the user's known sets OR distance(G, nearest_known) > D_km (default 100 km) And neither P, F, nor G are whitelisted for the user Then the rule "new_or_distant_location" is triggered with severity=Medium if distance <= 500 km else High And the anomaly explanation includes last_known_city/country, computed distance, and ASN change (if any) And the anomaly evidence includes source_ip, device_fingerprint, resolved_geo, and sample prior_event_ids And the timeline highlights the event and stores the anomaly tag with the event
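The distance check against known geolocation clusters is typically a great-circle computation; a sketch with the default D_km=100 and the 500 km severity cut-off from the rule above (whitelist and ASN checks omitted):

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0


def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))


def location_severity(dist_km: float, d_km: float = 100.0):
    """None when within D_km of a known cluster; Medium up to 500 km; else High."""
    if dist_km <= d_km:
        return None
    return "Medium" if dist_km <= 500.0 else "High"
```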
Flag Repeated Failed Login Bursts
Given authentication logs capture failed_login events with username and source IP When the count of failed_login events for the same username reaches N within M minutes (defaults N=5, M=10) Then the rule "failed_login_burst" is triggered once per sliding window And severity=High if count >= 2N else Medium And the anomaly explanation includes total_count, time_window, and first/last attempt timestamps And the anomaly evidence lists attempt_event_ids and distinct source IPs/geolocations And the timeline shows an aggregated anomaly marker linked to the underlying attempts
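A sliding-window counter is enough for the burst rule; this sketch flags on each event past the threshold and leaves the once-per-window deduplication noted above to the caller (N=5, M=10 defaults as in the rule):

```python
from collections import defaultdict, deque


class FailedLoginMonitor:
    """Flags a failed_login_burst when N failures for one username occur
    within an M-minute sliding window (defaults N=5, M=10)."""

    def __init__(self, n: int = 5, window_min: int = 10):
        self.n = n
        self.window_s = window_min * 60
        self.attempts = defaultdict(deque)  # username -> epoch-second timestamps

    def record_failure(self, username: str, ts: float):
        """Record one failed attempt; return 'Medium', 'High', or None.

        Severity is High once the in-window count reaches 2N, per the rule.
        """
        q = self.attempts[username]
        q.append(ts)
        while q and ts - q[0] > self.window_s:  # evict expired attempts
            q.popleft()
        if len(q) >= 2 * self.n:
            return "High"
        if len(q) >= self.n:
            return "Medium"
        return None
```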
Detect Bulk Record Viewing and Export Surges
Given access and export events include user_id, resource_id/type, and timestamps When a user views V unique patient records within Y minutes where V >= threshold_view (default 50 in 15) Or a user exports E records within Y minutes where E >= threshold_export (default 100 in 15) Then the rule "bulk_access" is triggered with severity=Medium if under 2x threshold else High And the anomaly explanation includes counts (V and/or E) and window size And the anomaly evidence references involved event_ids (capped to 100), resource types, and export_job_id (if present) And all involved events are tagged with the same anomaly_id and highlighted on the timeline
Flag Overrides Without Required Justification
Given an "Emergency Override" action requires justification_text (>=20 chars) and ticket_id (pattern) unless whitelisted When an override is used to access a restricted resource and required fields are missing or invalid Then the rule "override_without_justification" is triggered with severity=High And the anomaly explanation enumerates which fields failed validation And the anomaly evidence includes override_id, event_id, user_id, timestamp, and validation results And the event is highlighted and tagged for filtering and reporting
Admin Tuning: Thresholds, Whitelists, and Rule Mutes
Given an admin with role Compliance Sentinel or Agency Principal opens Anomaly Settings When they update rule thresholds, add/remove whitelist entries (device fingerprint, IP/CIDR, location) per user, or mute specific rules per user/resource with optional expiry Then changes are validated (types, ranges, conflicts), versioned, and persisted with an audit log entry And new configurations take effect for evaluations within 60 seconds And whitelisted sources bypass related rules; muted rules do not tag events within the specified scope and period And previous raw events remain unchanged; future events reflect the updated behavior
Filter, Highlight, and Export Anomalies
Given events carry persisted anomaly tags (rule, severity, anomaly_id) When a user filters the Access Ledger by anomaly presence, rule, severity, user, resource, or date range Then only matching events are displayed and timeline highlights anomalies with severity-specific color/icon and tooltip And results return within 2 seconds for datasets up to 10,000 events And one-click export produces CSV and PDF including anomaly fields (rule, severity, explanation, evidence refs, timestamps) matching the current filter And exported files include a generated report id and timestamp for audit traceability
Scope, Delegation & Override Capture
"As a Security Administrator, I want each access event to show the exact scope and any overrides with justification so that compliance reviews can verify proper authorization."
Description

Record the precise authorization context for each access, including current role, effective permissions, delegation or impersonation source, and any break-glass/override with required reason codes and optional attachments. Enforce inline capture of justification when an override is triggered and link to approver workflow when policy requires approval. Display this context prominently in event details and include it in exports to satisfy payer and state audit requirements.

Acceptance Criteria
Authorization Context Capture on Access
Given a user successfully accesses a protected CarePulse resource (e.g., patient record, visit note, schedule) When access is granted under normal, delegated, impersonated, or override mode Then the event record includes: event_id, UTC timestamp, actor_user_id, target_resource_id, actor_current_role, effective_permissions (list), access_scope, access_mode, delegation_source (user_id and basis_id if applicable), override_flag (boolean), override_reason_code (if override), attachment_ids (0..3, optional), approval_request_id (if required), record_integrity_hash (SHA-256) And the event record is append-only; any correction is stored as a new event referencing prior_event_id And 100% of a sample of ≥100 new events contain all mandatory fields with non-null values
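The `record_integrity_hash` requirement above can be met by hashing a canonical serialization of the event so that semantically identical records always produce the same digest. A minimal sketch using SHA-256 over sorted-key compact JSON (the helper names are hypothetical, not CarePulse APIs):

```python
import hashlib
import json

def seal_event(event: dict) -> dict:
    """Attach a SHA-256 record_integrity_hash computed over a canonical
    JSON serialization (sorted keys, compact separators) of every field
    except the hash itself."""
    payload = {k: v for k, v in event.items() if k != "record_integrity_hash"}
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    event["record_integrity_hash"] = hashlib.sha256(canonical.encode()).hexdigest()
    return event

def verify_event(event: dict) -> bool:
    """Recompute the digest on a copy; any field change invalidates it."""
    expected = event.get("record_integrity_hash")
    return seal_event(dict(event))["record_integrity_hash"] == expected
```

Corrections then append a new sealed event carrying `prior_event_id`, never mutate the sealed original.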
Enforce Inline Justification for Break‑Glass Overrides
Given a user initiates a break-glass override When they attempt to proceed to the requested resource Then a modal requires selection of a reason code from the active code set and blocks proceed until selected And the modal allows optional free-text justification (0–500 chars) and optional attachments (0–3 files; PDF/JPG/PNG; ≤10 MB each) And upon submission, override_reason_code, justification_text (if any), attachment metadata (filename, size, hash), and submitter_user_id are persisted on the access event before access continues And if policy = Pre-Approval Required, access remains blocked until an approver approves; if policy = Emergency Override, access proceeds and an approval request is created in Pending status linked to the event
Validate and Capture Delegation/Impersonation Context
Given an acting user is operating under delegation or impersonation When they access any protected resource Then the system validates the delegation token is active (current time within start/end, not revoked) and scope covers the requested action/resource And the event records acting_user_id, on_behalf_of_user_id, delegation_source_type, delegation_basis_id, delegation_scope, and delegation_expiration And if validation fails, access is denied with error "Delegation expired or insufficient scope", no protected data is returned, and a failed audit event is recorded with failure_reason
Prominent Context Display in Access Ledger Event Details
Given a compliance user opens an event detail in the Access Ledger on a 360×640 viewport When the detail loads Then the Context Summary section appears in the first visible screen without scrolling and shows: Role, Effective Permissions, Scope, Mode (normal/delegation/impersonation/override), Delegation Source (if any), Override Reason (if any), Approval Status/ID (if any), Attachment Count And the Approval ID (if present) is a clickable link that opens the approval record And displayed values match the stored audit record exactly And the event detail renders to interactive state within 1.0s at p95 over the most recent 24h data set (≥10k events)
Audit‑Ready Export Includes Authorization Context
Given a user exports Access Ledger events for a selected date range and filters When the export completes Then CSV and PDF contain columns: event_id, timestamp_utc, actor_user_id, target_resource_id, actor_current_role, effective_permissions, access_scope, access_mode, delegation_source_user_id, delegation_basis_id, override_flag, override_reason_code, approval_request_id, approval_status, attachment_count, record_integrity_hash And the export respects all applied filters and timezone normalization to UTC And CSV export of up to 100,000 events completes within 30s at p95; PDF export of up to 5,000 events completes within 45s at p95 And attachments are not exported; when "Include attachment links" is selected, time-limited URLs are included and valid for ≥24h
Policy‑Driven Approval Workflow Linkage
Given an override occurs under a policy that requires approval When the user submits justification Then an approval request is created with status Pending, approver group per policy, and due_at set by policy SLA And the access event stores approval_request_id and approval_status And approvers are notified within 60s, and their decision updates the linked event’s approval_status within 5s of action And if Denied, the event is tagged "Override Denied" and a compliance follow-up task is created; if Approved, the event is tagged "Override Approved" And all approval state changes are logged as separate audit entries with previous and new values
Audit-Ready Export & Immutable Receipts
"As an Agency Principal, I want to export audit-ready access logs with verifiable integrity so that I can respond to payer and state reviews confidently and quickly."
Description

Provide one-click exports (CSV and paginated PDF) that respect current filters, include column dictionary, timezone note, and export metadata (requestor, timestamp, filter snapshot). Generate a cryptographic receipt (hash of export contents and parameters) and optionally sign with the agency’s key for immutability verification. Support batched exports up to 50,000 events per file with progress indicator, download history, and watermarks indicating confidentiality. Offer redaction options to exclude sensitive fields where allowed.

Acceptance Criteria
One-Click Filtered CSV Export with Metadata and Dictionary
Given I am a logged-in Compliance Sentinel or Agency Principal viewing the Access Ledger with active filters And I have permission to export audit data When I click "Export" and choose "CSV" Then the download begins and produces a CSV data file containing only the events matching the current filter snapshot at the moment of request And the export includes a column dictionary describing every exported column (name, label, description, data type) And the export includes metadata capturing requestor ID, requestor name, request timestamp in UTC and local timezone, the exact filter snapshot, and the timezone note applied to all timestamps And the CSV timestamps reflect the noted timezone consistently And the export completes without error for datasets up to 50,000 events within a single file
Paginated PDF Export with Watermark and Appendices
Given I am viewing the Access Ledger with active filters And I have permission to export audit data When I click "Export" and choose "PDF" Then a paginated PDF is generated containing only events matching the current filter snapshot And each page displays a confidentiality watermark and a footer noting the export timestamp, page number, and timezone used And the PDF includes an appendix with the column dictionary and the export metadata (requestor, timestamp, filter snapshot, timezone note) And any redacted fields are labeled "REDACTED" consistently throughout the document
Cryptographic Receipt Generation and In-App Verification (Hash)
Given a CSV or PDF export is initiated When the export completes Then a cryptographic receipt is generated including the algorithm (SHA-256), canonicalized parameters (format, filter snapshot, redaction options, timezone), and the digest of the exported content plus parameters And the receipt is provided with the download and stored in Download History And using the in-app "Verify Receipt" action on the downloaded file recomputes the digest and shows "Valid" when unchanged and "Invalid" when altered
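The receipt described above binds the digest to both the exported bytes and the canonicalized export parameters, so altering either the file or the claimed filters fails verification. A minimal sketch of that scheme (function names are illustrative):

```python
import hashlib
import json

def make_receipt(content: bytes, params: dict) -> dict:
    """Build a receipt whose SHA-256 digest covers the canonicalized
    parameters (format, filter snapshot, redaction options, timezone)
    followed by the exported content bytes."""
    canon_params = json.dumps(params, sort_keys=True, separators=(",", ":"))
    h = hashlib.sha256()
    h.update(canon_params.encode())
    h.update(content)
    return {"algorithm": "SHA-256", "parameters": params, "digest": h.hexdigest()}

def verify_receipt(content: bytes, receipt: dict) -> str:
    """Recompute the digest against the stored parameters; mirrors the
    in-app "Verify Receipt" action's Valid/Invalid result."""
    recomputed = make_receipt(content, receipt["parameters"])["digest"]
    return "Valid" if recomputed == receipt["digest"] else "Invalid"
```

An agency signature (next criterion) would then be a detached signature over this receipt, not over the raw file.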
Optional Agency Key Signature and Signature Verification
Given an agency signing key is configured and verified When an export completes Then the system creates a detached digital signature over the receipt using the agency key And the signature artifact is included with the download and recorded in Download History And the in-app "Verify Signature" action validates the signature against the stored public key, reporting success or failure And when no agency key is configured, no signature is produced and the UI indicates the export is unsigned
Batched Exports up to 50,000 Events per File with Progress and History
Given the current filter returns more than 50,000 events When I start a CSV export Then the system splits the data into multiple files, each containing at most 50,000 events while preserving sort order And a progress indicator displays percentage complete and the current part being generated (e.g., Part 2 of 4) And upon completion, a single downloadable package contains all data parts, one column dictionary, one export metadata file, and one receipt (and signature if applicable) covering the entire package And a Download History entry appears with requestor, timestamp, format, filter snapshot, total events, number of files, receipt hash, and signature status
Redaction Options with Clear Audit Trace
Given I have export permissions and redaction options are available per policy When I select redaction options and confirm the export Then the exported data excludes the selected sensitive fields according to the chosen redaction mode (omit column or value labeled "REDACTED") And the export metadata lists all fields redacted And the cryptographic receipt (and signature if present) are computed over the redacted content and parameters And the Download History entry indicates that redactions were applied
Secure Retention & Tamper Evidence
"As a Compliance Sentinel, I want the access ledger to be securely retained and tamper-evident so that it can serve as a trustworthy source during audits and investigations."
Description

Apply configurable retention policies aligned to regulatory needs, with encryption in transit and at rest, role-based access controls to the ledger, and immutable, append-only storage with chained hashes to detect tampering. Provide administrative retention settings with audit trail of changes, legal hold capability, and export of integrity proofs for a given time window. Ensure time synchronization and signed server timestamps to strengthen evidentiary value.

Acceptance Criteria
Retention Policy Configuration, Enforcement, and Change Logging
Given an Organization Admin defines a retention policy of 7 years for Access Ledger entries in the Production environment When the policy is saved Then the system validates allowed range (1–10 years), requires a reason, version-increments the policy, and records an immutable change log entry capturing previous values, new values, actor, timestamp, and justification Given the above policy is active When the scheduled retention job runs at 02:00 UTC daily Then all eligible entries older than 7 years are queued and permanently purged within 24 hours, and a purge summary entry is appended including count purged, oldest/newest purged timestamps, policy version, and actor=system Given entries exist that are newer than the retention threshold When the job runs Then no entries newer than the threshold are purged
Legal Hold Prevents Destruction
Given a Compliance Sentinel applies a legal hold with ID LH-123 to ledger entries for 2023-05-01T00:00:00Z..2023-06-01T23:59:59Z with reason "Pending payer audit" When a purge job or manual delete targets any entries under LH-123 Then deletion is blocked, the API returns HTTP 423 Locked, and an immutable log entry records the blocked attempt with actor, timestamp, and hold ID Given LH-123 is lifted by an Agency Principal with required reason When the hold is removed Then a hold-release entry is appended linking to the original hold, and the next retention job includes those entries if they meet age criteria
Immutable Append-Only Ledger with Hash Chain
Given a new ledger event is recorded When it is appended Then the entry includes prev_hash referencing the prior entry in sequence, entry_hash computed over canonicalized content, and chain_height incremented by 1 Given an attempt is made to update or delete an existing entry via any API When the request is processed Then the system rejects the request with HTTP 409 Conflict and no data is changed Given an integrity verification is requested for a contiguous range When GET /ledger/verify?from=T1&to=T2 is called Then the API returns status=ok with start_hash, end_hash, and aggregate_root, or status=fail with the first failing index and reason if any link is broken
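The hash-chain criterion above (prev_hash, entry_hash over canonicalized content, incrementing chain_height, and range verification reporting the first failing index) can be sketched with stdlib hashing alone; the genesis sentinel and field names here are assumptions for illustration:

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed sentinel prev_hash for the first entry

def append_entry(chain: list, content: dict) -> dict:
    """Append an entry whose entry_hash covers prev_hash plus the
    canonicalized content, linking it to the prior entry."""
    prev = chain[-1]["entry_hash"] if chain else GENESIS
    canonical = json.dumps(content, sort_keys=True, separators=(",", ":"))
    entry = {
        "chain_height": len(chain) + 1,
        "prev_hash": prev,
        "entry_hash": hashlib.sha256((prev + canonical).encode()).hexdigest(),
        "content": content,
    }
    chain.append(entry)
    return entry

def verify_chain(chain: list):
    """Walk the chain recomputing each link; return ('ok', end_hash) or
    ('fail', first_failing_index)."""
    prev = GENESIS
    for i, e in enumerate(chain):
        canonical = json.dumps(e["content"], sort_keys=True, separators=(",", ":"))
        expected = hashlib.sha256((prev + canonical).encode()).hexdigest()
        if e["prev_hash"] != prev or e["entry_hash"] != expected:
            return ("fail", i)
        prev = e["entry_hash"]
    return ("ok", prev)
```

Because each entry_hash depends on its predecessor, editing or deleting any entry breaks every subsequent link, which is what makes the store tamper-evident rather than merely access-controlled.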
Exportable Integrity Proofs by Time Window
Given a Compliance Sentinel requests integrity proofs for 2024-01-01T00:00:00Z..2024-01-31T23:59:59Z When Export Proofs is invoked Then the system generates within 60 seconds a downloadable package (.zip) containing: proof.json (start_hash, end_hash, aggregate_root, policy_version, time_source), proof.sig (server signature), and README with verification steps, and provides a pre-signed URL that expires in 24 hours Given the package is verified using the current public keys When verify_proof is executed Then the signature validates and recomputed hashes match the reported values for that window
Signed and Synchronized Timestamps
Given system time is synchronized via NTP with NTS enabled against at least two trusted sources When drift exceeds ±100 ms relative to quorum Then alerts are raised to Ops, new ledger writes are paused, and the /health endpoint reports time_sync_out_of_bounds=true until recovered Given a ledger entry is created When it is persisted Then the entry includes an RFC3339 UTC server_timestamp and a detached signature over {server_timestamp, entry_hash, chain_height}, verifiable against keys served by GET /public-keys
RBAC-Gated Ledger and Settings Access
Given role mappings are configured (Caregiver, Operations Manager, Compliance Sentinel, Agency Principal, Org Admin) When users attempt actions Then: Caregivers cannot view the Access Ledger or retention settings (HTTP 403, reason=insufficient_scope); Operations Managers may view ledger but cannot export proofs or change retention; Compliance Sentinels may view ledger and export proofs; Agency Principals and Org Admins may view/export and modify retention settings; all denials and approvals are logged with actor, role, scope, and timestamp Given a user is granted a time-bound override scope "audit:export" for 4 hours When the window expires Then access reverts automatically and the expiration is logged
Encryption In Transit and At Rest with Key Management
Given any client connects to ledger APIs When TLS negotiation occurs Then the connection uses TLS 1.2+ with strong ciphers, HSTS is enabled, and noncompliant requests are rejected with HTTP 400/426 Given ledger data is stored When data-at-rest configuration is inspected Then all ledger tables and proof artifacts are encrypted with AES-256 via the platform KMS, keys are tenant-scoped, rotated at least every 90 days, and key usage/audit logs can be produced on request to Org Admins

Adaptive Triggers

Detects likely documentation misses in real time using voice‑note transcripts, EVV stamps, plan‑of‑care rules, and payer‑specific checks. Surfaces a bite‑sized tip only when needed, right where the caregiver is working, so issues are fixed in‑flow and end‑of‑shift rework drops.

Requirements

Real-time Multisource Rule Engine
"As a caregiver, I want the system to detect likely documentation misses in real time so that I can correct them before I leave the visit."
Description

Implement a low-latency evaluation service that ingests voice-note transcripts, EVV time/GPS stamps, plan-of-care tasks, and payer policy metadata to detect likely documentation misses in under 300 ms and emit structured “trigger” events. The engine must support declarative rules (JSON/YAML), rule chaining, severity levels, per-agency feature flags, and suppression logic to avoid duplicate nudges. It integrates with CarePulse’s mobile SDK and backend event bus, processes streaming updates (e.g., incremental transcript tokens), and publishes outcomes to the in-app nudge layer and audit log. It must be horizontally scalable, fault-tolerant, and operate in offline-degraded mode by queueing local evaluations until connectivity resumes.

Acceptance Criteria
Sub-300ms Real-Time Trigger on Incremental Transcript Under Load
Given the engine is receiving ≥2,000 streaming updates per second across ≥5,000 concurrent active visits and required contexts (EVV, plan-of-care, payer metadata) are locally cached When an incremental transcript token arrives for a visit with at least one matching rule Then the engine evaluates applicable rules and publishes a trigger event within 300 ms at p95 and within 500 ms at p99, measured from SDK receipt to event bus publish timestamp And the success rate for evaluations and publishes is ≥99.9% during the test window And each emitted trigger includes a correlation_id enabling end-to-end latency measurement
Declarative Rule Pack Loading and Hot-Reload
Given a ruleset in JSON or YAML that conforms to schema v1.x with unique rule_ids, defined severities, and valid chaining references When the ruleset is pushed via the control plane or a feature-flag change requires activation Then the engine validates and atomically activates the new ruleset within 2 seconds without restart or rejects it with a structured error indicating field and line And on rejection, the previously active ruleset remains in effect and is reported as active And the active ruleset version_id is included in emitted trigger payloads
Rule Chaining and Severity Resolution
Given a chain where Rule A -> Rule B -> Rule C with defined severities and dependency order When Rule A evaluates to true based on current evidence Then Rules B and C are evaluated within the same evaluation window with deterministic ordering and no partial updates And the emitted trigger contains the highest severity among satisfied rules within the chain and an evaluation_trace listing fired, skipped, and unmet rules And cycles in chaining are detected and blocked with an error logged; evaluation proceeds for acyclic rules
Duplicate Nudge Suppression
Given a rule condition remains true across multiple identical evidence updates for the same visit and caregiver When the engine receives redundant evidence that does not change the evaluation outcome Then no additional trigger is emitted for that rule for that visit for 5 minutes or until an input change causes the rule to transition false then true And suppression scope is {agency_id, visit_id, caregiver_id, rule_id, rule_version} and persists across process restarts
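The suppression behavior above, a time-to-live keyed on the scope tuple, overridden by a false-then-true transition, can be sketched as follows (class and method names are hypothetical; the real engine would also persist this state across restarts, which this in-memory sketch does not):

```python
import time

class NudgeSuppressor:
    """Suppress duplicate triggers per scope tuple
    (agency_id, visit_id, caregiver_id, rule_id, rule_version).
    A repeat while the rule stays true is dropped for `ttl_seconds`
    unless the rule transitioned false -> true since the last emission."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._state = {}  # scope -> (last_emitted_at or None, last_outcome)

    def should_emit(self, scope: tuple, outcome: bool, now=None) -> bool:
        now = time.time() if now is None else now
        emitted_at, last = self._state.get(scope, (None, False))
        if not outcome:
            # Rule went false: remember it so the next true is a transition.
            self._state[scope] = (emitted_at, False)
            return False
        transitioned = not last
        expired = emitted_at is None or (now - emitted_at) >= self.ttl
        if transitioned or expired:
            self._state[scope] = (now, True)
            return True
        self._state[scope] = (emitted_at, True)  # suppressed duplicate
        return False
```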
Per-Agency Feature Flags and Payer Policy Context
Given agency-level feature flags and payer policy metadata are available for a visit When a flag or payer context is toggled to enable or disable a rule group Then rule activation state changes take effect within 30 seconds at p95 and are reflected in subsequent evaluations and emitted payloads And agency-level overrides take precedence over defaults and are auditable with user_id and timestamp
Offline-Degraded Mode with Local Queue and Replay
Given the mobile device is offline while a caregiver documents a visit When a local evaluation produces a trigger Then the trigger is evaluated using the last-synced rules and context, displayed in the SDK nudge layer, and queued encrypted-at-rest with an idempotency key And upon reconnection, queued events are replayed in FIFO order to the backend within 10 seconds at p95 with at-least-once delivery and backend de-duplication by idempotency key And if the local queue exceeds 10,000 events or 72 hours, the SDK surfaces a backpressure warning and emits telemetry
Event Bus and Audit Log Publishing & Idempotency
Given a trigger is generated by the evaluation engine When publishing to the event bus and audit log Then the payload includes trigger_id, rule_id, rule_version, severity, visit_id, caregiver_id, agency_id, payer_id, timestamp (UTC ISO-8601), source_evidence refs, correlation_id, feature_flag_state, and evaluation_trace And the event bus publish succeeds with at-least-once semantics and the audit log write completes within 1 second at p95, with retries using exponential backoff up to 5 minutes And downstream consumers can enforce idempotency using the idempotency key to avoid duplicate side effects
In‑flow Contextual Nudge UI
"As a caregiver, I want bite-sized tips to appear exactly where I am working so that I can fix issues without losing my place or redoing work later."
Description

Deliver lightweight, non-blocking UI components embedded directly within notes, task checklists, and clock-in/out flows that surface a single actionable tip only when needed. Nudges must include concise copy, severity iconography, one-tap fix actions (e.g., insert missing vitals block), or deep links to the relevant screen. The UI must support accessibility (WCAG AA), localization, haptic feedback, auto-dismiss, and rate limiting to avoid alert fatigue. It integrates with the trigger event stream, respects caregiver focus states, works offline with queued actions, and records telemetry for acceptance/dismissal outcomes.

Acceptance Criteria
Inline Nudge for Missing Vitals in Visit Note
Given a caregiver is editing a visit note for an active visit with a plan-of-care requiring vitals When the trigger stream emits a "missing_vitals" event for that visit Then show a single inline nudge adjacent to the relevant note section within 300 ms And include severity icon, <= 90-character copy, and a primary action labeled "Insert Vitals" And the nudge does not block typing, scrolling, or navigation And record telemetry fields: displayed_at, trigger_id, latency_ms, visit_id, user_id_hash
One-Tap Fix Applies Offline and Queues Sync
Given the device has no connectivity and a "missing_vitals" nudge is visible When the user taps "Insert Vitals" Then insert the vitals block into the note within 200 ms and move focus to the first field And show a non-intrusive toast: "Will sync when online" And queue the action and sync within 10 s of reconnection And log telemetry: action=accept, offline=true, queued_at, synced_at, success=true
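The offline path above relies on two pieces: a FIFO queue on the device and an idempotency key that lets the backend de-duplicate at-least-once replay. A minimal sketch, assuming `send` is any callable that delivers one action and returns success (the class is illustrative, not the CarePulse SDK API):

```python
import collections
import uuid

class OfflineActionQueue:
    """FIFO queue for actions taken offline. Each entry carries an
    idempotency key so replay after reconnection can be delivered
    at-least-once and de-duplicated server-side."""

    def __init__(self):
        self._q = collections.deque()

    def enqueue(self, action: dict) -> str:
        key = action.setdefault("idempotency_key", str(uuid.uuid4()))
        self._q.append(action)
        return key

    def replay(self, send) -> int:
        """Drain in FIFO order; stop (keeping the remainder queued) on
        the first delivery failure so order is preserved on retry."""
        sent = 0
        while self._q:
            if not send(self._q[0]):
                break
            self._q.popleft()
            sent += 1
        return sent
```

A real implementation would also encrypt the queue at rest and enforce the 10,000-event / 72-hour backpressure limits the criteria specify.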
Focus-Aware and Rate-Limited Nudge Delivery
Given the caregiver is actively typing or recording a voice note When a trigger arrives Then defer showing the nudge until 2 s of user idle And enforce max 1 nudge per 5 min per flow (notes, tasks, clock-in/out) And dismissing a nudge mutes identical triggers for 30 min for that visit And drop a deferred nudge if the issue is resolved before display And low-severity nudges auto-dismiss after 8 s of no interaction; medium/critical do not auto-dismiss
WCAG AA, Localization, and Haptic Compliance
Given the device language is Spanish and a screen reader is active When any nudge is displayed Then all text, controls, and aria-labels are localized (es-ES) and read correctly by the screen reader And color contrast is >= 4.5:1 for text and >= 3:1 for icons And the nudge is reachable and operable via keyboard/switch with a visible focus indicator And haptic feedback fires once per nudge and respects system haptics=off settings
Deep Link to Payer Form With Return
Given a nudge indicates a missing payer-specific form When the user taps "Open Form" Then navigate to the payer form screen within 500 ms, pre-filtered to the current visit and payer And on back, return to the original screen and scroll position And record telemetry events for deep_link_open and return_to_origin with timestamps
EVV Clock-Out Nudge for Missing Patient Signature
Given the caregiver initiates clock-out and EVV shows no patient signature captured When clock-out is attempted Then display a single high-severity nudge with icon and <= 90-character copy And provide actions: "Capture Signature" and "Dismiss" And tapping "Capture Signature" opens signature capture within 500 ms; on completion the nudge auto-dismisses and clock-out resumes And if dismissed, do not re-show for 15 min for that visit and log a dismissal event
Payer & Plan‑of‑Care Rules Configurator
"As an operations manager, I want to configure payer and plan-of-care checks without engineering help so that our agency stays compliant as requirements change."
Description

Provide an admin console for operations managers to author, version, and schedule payer-specific checks and plan-of-care requirements with effective dates, jurisdictions, and agency-level overrides. The configurator includes validation to prevent conflicting rules, a sandbox with sample visits for test runs, rule templates for common payers, change audit logs, and one-click rollback. It integrates with the rule engine via a signed rules registry, enforces permissions/approvals, and supports migration between staging and production environments.

Acceptance Criteria
Author New Rule from Template with Effective Dates and Jurisdiction
Given I am an Operations Manager with "Rule Author" permission When I create a new rule from the "Medicaid — ADL Visit Completeness" template or start from blank Then I am required to provide: Payer, Jurisdiction (State and optional County), Scope (Payer Check or Plan-of-Care), Effective Start (date-time), optional Effective End (date-time), Severity, User-Facing Tip, and Rule Logic expression And field-level validation messages are shown inline for any missing or invalid entries before Save is enabled And on Save as Draft, the rule is persisted as Version 1 with status "Draft" and a unique Rule ID And expression syntax and schema validation must pass; otherwise the save is blocked with specific error messages
Prevent Conflicting Rules on Publish
Given an existing active or scheduled rule overlaps the same Payer, Jurisdiction, and Scope When I attempt to publish or schedule a new rule whose effective window overlaps and targets the same check key Then the system blocks publish and displays a conflict list including conflicting Rule IDs, versions, effective windows, and scopes And I may save as Draft but cannot publish or schedule until conflicts are resolved And after adjusting dates or scopes so no overlap exists, Publish becomes enabled and succeeds
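The conflict check above reduces to an interval-overlap test over effective windows, applied only to rules sharing the same payer, jurisdiction, scope, and check key. A minimal sketch, treating an open-ended window (`end=None`) as extending indefinitely (field names are illustrative):

```python
from datetime import datetime

def windows_overlap(start_a, end_a, start_b, end_b) -> bool:
    """Two effective windows overlap when each starts before the other
    ends; a missing end date means the window is open-ended."""
    a_end = end_a or datetime.max
    b_end = end_b or datetime.max
    return start_a < b_end and start_b < a_end

def find_conflicts(new_rule: dict, existing: list) -> list:
    """Return active/scheduled rules that would conflict with new_rule:
    same payer, jurisdiction, scope, and check key, overlapping windows."""
    keys = ("payer", "jurisdiction", "scope", "check_key")
    return [
        r for r in existing
        if all(r[k] == new_rule[k] for k in keys)
        and r.get("status") in ("Active", "Scheduled")
        and windows_overlap(new_rule["start"], new_rule.get("end"),
                            r["start"], r.get("end"))
    ]
```

Publish stays blocked while this list is non-empty; adjusting dates or scope so the list is empty re-enables it.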
Apply Agency-Level Override with Precedence and Audit
Given a default jurisdiction-level rule is active When I create an Agency-level override for Agency A with its own effective window Then evaluations for Agency A use the override, while other agencies continue to use the default And the UI displays precedence: Agency Override > Jurisdiction Default > Global Template And when I remove or expire the override, Agency A reverts to the default without downtime And the override creation, updates, and removal are recorded in the audit log with actor, timestamp, and diffs
Versioning, Scheduled Activation, and Approval Workflow
Given an active rule exists When I edit it and choose "Save as New Version" with a future Effective Start Then a new version number is created, the current version remains active until the scheduled start, and both versions are visible in the timeline And the new version requires approval from a user with "Rule Approver" permission; until approved, it cannot be published And upon approval and publish, the signed rules registry is updated and the rule engine picks up the scheduled change without downtime
Sandbox Test Run on Sample Visits
Given a Draft or Scheduled rule When I run it in Sandbox against the provided sample visits set Then the system executes the rule and returns deterministic results with counts of Pass/Fail, affected visit IDs, and example details And repeated runs with the same inputs return identical results And the test run completes within 10 seconds for 500 sample visits And no changes are applied to production data or live triggers And I can export the test report as CSV and JSON
Change Audit Log and One-Click Rollback
Given a rule has one or more published versions When I open the change log Then I see a chronological record of create, edit, approve, publish, override, migrate, and rollback events with actor, timestamp, IP, version, and field-level diffs And when I select "Rollback" on a prior version and confirm a reason, the system reverts to that version, re-signs the package, and republishes without downtime And the rollback event is recorded with the reason and linked incident ID (if provided)
Signed Registry Promotion from Staging to Production
Given a rule package is published in Staging When I initiate "Promote to Production" with a user having "Environment Promote" permission Then the Production environment verifies the package signature, schema, and dependencies, and performs a dry-run validation against Production sample visits And if validation passes, the package is deployed with zero downtime and becomes active per its effective window And if validation fails, deployment is blocked with actionable errors and no changes applied And after deployment, the rule engine in Production reflects the new registry within 60 seconds
Voice Transcript NLP Miss Detection
"As a caregiver, I want my brief voice notes to automatically populate required documentation so that I don’t miss critical items under time pressure."
Description

Build an NLP layer that converts short streaming voice clips into structured data (entities like vitals, ADLs, meds, dosages, times, negations) and maps them to plan-of-care tasks to infer omissions with confidence scoring. The module must handle accents, background noise, and multiple languages, perform PII redaction, and emit incremental events compatible with the rule engine. When connectivity is limited, use on-device transcription with deferred enrichment. Provide tunable thresholds and evaluator tools to review false positives/negatives and improve models over time.

Acceptance Criteria
Streaming Transcription to Structured Entities & Incremental Events
Given a caregiver records a streaming voice note up to 60 seconds, When audio is captured, Then a partial transcript is produced within 1.5 seconds of start and updated at least every 500 ms until completion. Given a partial or final transcript update, When entity extraction runs, Then entities for vitals, ADLs, medications, dosages, times, and negations are extracted and emitted as incremental events within 300 ms of the transcript update. Given an emitted event, Then it conforms to RuleEngine Event Schema v1.0 and includes event_type, entity_type, value, units, confidence (0–1), timestamp, task_mapping (if any), revision_id, and source_segment_id. Given successive transcript updates, Then events are idempotent with monotonically increasing revision_id and preserve event order with ≥99.9% correctness under simulated network jitter.
Plan‑of‑Care Mapping and Omission Inference
Given an active plan‑of‑care defining required tasks for a visit, When entities are extracted from the transcript, Then tasks are marked satisfied only when rule definitions for sufficient evidence are met (e.g., vital name + value + units + time). Given a required task has no mapped evidence by session end or after 5 minutes of inactivity, Then the system emits an omission_detected event with a confidence score and the missing evidence fields. Given a negated entity (e.g., "no pain"), Then it does not satisfy a positive evidence requirement and contributes to omission inference per rules. Given an omission_detected event meets or exceeds the configured confidence threshold, Then a single in‑flow corrective tip is surfaced within the current workflow; otherwise no tip is shown. Given EVV timestamps for visit start/end, Then omission inference uses the correct time window and excludes entities outside the visit window.
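The omission-inference criteria above combine three filters before a task counts as satisfied: the entity must be non-negated, fall inside the EVV visit window, and supply every evidence field the rule requires. A minimal sketch under those assumptions (field names and the fixed confidence value are illustrative; real confidence scoring is model-derived):

```python
def infer_omissions(required_tasks, entities, visit_start, visit_end):
    """required_tasks maps task_id -> set of evidence fields that must all
    be present (e.g., {'vital_name', 'value', 'units', 'time'}). Tasks
    left unsatisfied at session end each yield an omission_detected event."""
    satisfied = set()
    for ent in entities:
        if ent.get("negated"):
            continue  # "no pain" never satisfies a positive requirement
        if not (visit_start <= ent["time"] <= visit_end):
            continue  # outside the EVV-stamped visit window
        task = ent.get("task_mapping")
        if task in required_tasks and required_tasks[task] <= set(ent["fields"]):
            satisfied.add(task)
    return [
        {"event_type": "omission_detected",
         "task_id": t,
         "missing_fields": sorted(required_tasks[t]),
         "confidence": 1.0}  # placeholder; real scoring is model-derived
        for t in required_tasks if t not in satisfied
    ]
```

The in-flow tip layer would then surface a nudge only for omissions whose confidence meets the configured threshold.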
Accent and Noise Robustness
Given a benchmark set spanning ≥6 common English accents and two noise conditions (SNR 5–15 dB), When processed, Then overall word error rate ≤12% and entity extraction F1 ≥0.90 for vitals, medications, dosages, times, and negations. Given background conversational/TV noise in the clip, When processed, Then the system avoids spurious entities with precision ≥0.95 on the benchmark set. Given transcription confidence falls below a configured floor, When omission logic evaluates, Then omission tips are suppressed unless corroborated by non‑voice signals (e.g., EVV/sensors) per rule configuration.
Multilingual and Code‑Switch Support
Given audio in English, Spanish, or a mix of both, When processed, Then language is detected per segment and entities are extracted with language tags and normalized to the canonical schema while preserving original text. Given multilingual or code‑switched input, Then plan‑of‑care task mapping accuracy yields entity‑level F1 ≥0.88 per language on curated test sets. Given language confidence < configured threshold, Then the system defaults to the caregiver’s preferred language and marks events with language_confidence and selected_language.
PII/PHI Redaction in Transcripts and Events
Given any transcript text or event payload, When stored or emitted beyond the secure processing boundary, Then PHI/PII (names, DOB, address, phone, SSN, MRN) are redacted or tokenized with typed placeholders before leaving device or enclave. Given a PHI redaction benchmark, When evaluated, Then redaction recall ≥99.5% and precision ≥98% across targeted entity types, and no raw PHI appears in logs, analytics, or rule‑engine events. Given an authorized auditor request, When data is accessed, Then raw audio/text retrieval requires role‑based access, dual approval, and all accesses are fully audit‑logged; exported artifacts remain redacted by default.
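The typed-placeholder redaction described above can be illustrated with a minimal regex pass. This is only a sketch: a production system meeting the stated recall/precision targets would combine a trained NER model with dictionaries, not regexes alone, and the patterns below are assumptions:

```python
import re

# Illustrative patterns only; real PHI detection needs NER + dictionaries.
PATTERNS = [
    ("SSN",   re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
    ("PHONE", re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")),
    ("DOB",   re.compile(r"\b\d{2}/\d{2}/\d{4}\b")),
]

def redact(text: str) -> str:
    """Replace PHI/PII spans with typed placeholders before the text
    leaves the device or secure enclave."""
    for label, pat in PATTERNS:
        text = pat.sub(f"[{label}]", text)
    return text
```

Typed placeholders (rather than blanket masking) keep redacted transcripts useful for rule evaluation and audit review.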
Offline On‑Device Transcription with Deferred Enrichment
Given loss of network connectivity during a visit, When the caregiver records voice notes, Then on‑device transcription proceeds and persists locally with encryption at rest and crash‑safe checkpoints. Given connectivity is restored, When sync runs, Then backlog transcripts upload within 60 seconds, cloud enrichment performs normalization and plan‑of‑care mapping, and incremental events are emitted in original chronological order. Given reconciliation conflicts between on‑device and cloud outputs, Then deterministic merge rules apply (server truth with revision lineage), and no duplicate omission tips are shown. Given the device remains offline for 24 hours, Then local storage remains within configured quota and the user receives a non‑blocking sync advisory.
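The "server truth with revision lineage" merge rule can be made deterministic by letting the cloud result replace the on-device result per segment while recording both in the lineage, then re-emitting in chronological order. A sketch (the `segment_id`/`ts` keys are assumptions):

```python
def reconcile(local_events, server_events):
    """Deterministic merge after sync: the server's enriched output wins
    per segment, device output is preserved in the revision lineage, and
    results are re-emitted in original chronological order."""
    merged = {}
    for ev in local_events:
        merged[ev["segment_id"]] = {**ev, "lineage": ["device"]}
    for ev in server_events:
        prior = merged.get(ev["segment_id"])
        lineage = (prior["lineage"] + ["server"]) if prior else ["server"]
        merged[ev["segment_id"]] = {**ev, "lineage": lineage}
    # Stable chronological order prevents duplicate or re-ordered tips.
    return sorted(merged.values(), key=lambda e: (e["ts"], e["segment_id"]))
```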
Threshold Tuning and Evaluator Tooling for FP/FN Reduction
Given an admin with Evaluator access, When opening the tool, Then they can view per‑entity and per‑task precision/recall, confusion matrices, and adjust confidence thresholds by tenant, program, task, and language. Given threshold changes are saved, Then they propagate within 5 minutes, are versioned, and the applied threshold_version is included in subsequent events. Given a labeled evaluation set is uploaded, When evaluation runs, Then metrics are computed reproducibly, false positives/negatives are linkable to source artifacts with PHI safeguards, and results can be exported. Given a rollback is initiated, Then prior threshold/model versions can be restored and confirmed via checksum and audit trail.
EVV & Geofence Compliance Checks
"As a compliance coordinator, I want EVV and location validations to run automatically so that visits meet payer rules without manual auditing."
Description

Implement checks that validate clock-in/out events against scheduled windows, payer-specific tolerances, and geofenced visit locations, with configurable drift buffers for rural areas. The system flags early/late or out-of-geo events, suggests corrective actions (e.g., add justification note), and can auto-insert compliant documentation snippets when approved. It must function offline with cached geofences, reconcile when back online, and expose a supervisor review queue for exceptions. All events feed the trigger engine and compliance reporting.

Acceptance Criteria
Scheduled Window & Payer Tolerance Check at Clock Events
Given a scheduled visit with start/end windows and payer-specific early/late tolerances And a caregiver attempts a clock-in or clock-out When the event timestamp is evaluated Then the system classifies the event as On-time, Early by <minutes>, or Late by <minutes> using the configured tolerances And if Early/Late, the event is flagged with code EVV_TIME_NONCOMPLIANT and the delta in minutes stored And if On-time, the event is marked EVV_TIME_COMPLIANT And the evaluation record includes visit ID, caregiver ID, payer ID, schedule window, tolerance values applied, policy version, and server timestamp
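The classification above reduces to comparing the event's offset from the scheduled start against the payer's early/late tolerances. A sketch in minutes since midnight (function and variable names are assumptions):

```python
def classify_clock_in(event_min, sched_start_min, early_tol_min, late_tol_min):
    """Classify a clock-in as on-time, early, or late relative to the
    scheduled start and payer-specific tolerances (all values in minutes).
    Returns (compliance_code, classification, delta_minutes)."""
    delta = event_min - sched_start_min
    if delta < -early_tol_min:
        return ("EVV_TIME_NONCOMPLIANT", "early", -delta)
    if delta > late_tol_min:
        return ("EVV_TIME_NONCOMPLIANT", "late", delta)
    return ("EVV_TIME_COMPLIANT", "on_time", 0)
```

The same shape applies to clock-out against the window end; the stored delta is what feeds the "Early by &lt;minutes&gt; / Late by &lt;minutes&gt;" flag.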
Geofence Validation with Configurable Drift Buffers
Given a visit geofence with base radius (meters) and a configured rural drift buffer B (meters) when applicable And the device provides location and accuracy metadata When the caregiver clocks in or clocks out Then the system computes distance from event location to the geofence centroid And applies effective radius = base radius + B if the visit is marked rural or payer policy permits buffer And classifies the event as In-Geo or Out-of-Geo with distance and accuracy captured And Out-of-Geo events are flagged with EVV_GEO_NONCOMPLIANT And the applied effective radius and buffer source (rural/payer) are stored in the audit record
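The geofence check above amounts to a great-circle distance against an effective radius of base radius plus any applicable buffer. A sketch using the haversine formula (function names and the result shape are assumptions):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 points."""
    R = 6_371_000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def check_geofence(ev_lat, ev_lon, c_lat, c_lon, base_radius_m, rural_buffer_m=0.0):
    """Classify an EVV event against the geofence centroid, applying the
    rural drift buffer to get the effective radius."""
    dist = haversine_m(ev_lat, ev_lon, c_lat, c_lon)
    effective = base_radius_m + rural_buffer_m
    status = "In-Geo" if dist <= effective else "Out-of-Geo"
    return {"status": status, "distance_m": round(dist, 1),
            "effective_radius_m": effective}
```

The distance, accuracy, and applied effective radius would all be persisted in the audit record alongside the classification.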
Offline EVV Capture with Cached Geofences and Reconciliation
Given the device is offline and cached schedules/geofences no older than 24 hours exist When the caregiver records a clock-in or clock-out Then the app validates locally against cached data and last-known policy version And stores the event with raw location, accuracy, local evaluation result, and status=PENDING_SYNC And upon connectivity restoration Then the server revalidates using current authoritative data and policy version And reconciles differences, preserving both local and server evaluations in an immutable audit trail And updates compliance flags, triggers, and reports within 5 minutes of sync completion

Contextual Corrective Actions and Auto-Insert Snippets
Given an EVV event is flagged Early/Late and/or Out-of-Geo When the caregiver opens the in-flow tip Then payer-appropriate corrective options are displayed (e.g., justification note, window override request) with required fields enforced And submission is blocked until mandatory fields are completed And upon submission and, if configured, supervisor approval Then a compliant documentation snippet is auto-inserted into the visit note with timestamp, user attribution, and remediation code And the original flag is cleared or downgraded per payer rules with rationale stored
Supervisor Exception Review Queue
Given EVV exceptions exist for one or more visits When a supervisor opens the review queue Then they can filter by branch, payer, exception type, date range, caregiver, and SLA age And for each item, an audit panel displays event times, deltas, geo distance, map preview, accuracy, applied tolerances/buffers, notes, and prior actions And the supervisor can approve, reject, request info, or bulk-approve eligible items And the system records decision, user, timestamp, reason code, and any comments And resolved items are removed from the queue within 5 seconds and reflected in compliance reports
Event Feed to Triggers and Compliance Reporting
Given any EVV evaluation or reconciliation result is finalized When the result is persisted Then an event payload is emitted to the trigger engine within 10 seconds including visit ID, caregiver ID, payer ID, event type, time delta, geo status, distance, accuracy, policy version, and resolution status And the payload uses idempotency keys to prevent duplicates And compliance reports ingest these events within the next reporting cycle (≤15 minutes) with consistent aggregations across time and geo metrics
Trigger Analytics & Continuous Tuning
"As an operations leader, I want insight into which nudges prevent rework and which cause noise so that we can tune rules to maximize compliance with minimal interruption."
Description

Create a dashboard and data pipeline that aggregates trigger rates, acceptance/dismissal outcomes, false-positive reports, and time-to-fix by agency, caregiver, payer, and rule version. Provide experiment support (A/B thresholds, copy variants), alert fatigue monitoring, and export APIs for BI tools. Include a feedback loop that lets users mark a nudge as “Not Relevant,” feeding back to the rule engine to adjust thresholds or suppress patterns. Ensure privacy controls, data retention policies, and audit-ready summaries for regulators.

Acceptance Criteria
Analytics Dashboard: Aggregated KPIs by Dimension
Given a signed-in Operations Manager with access to an agency, When they open the Trigger Analytics dashboard for the default last 7 days, Then the dashboard displays KPIs: trigger_rate, acceptance_rate, dismissal_rate, false_positive_rate, median_time_to_fix, p90_time_to_fix, and total_nudges. Given agency, caregiver, payer, and rule_version filters, When any filter or date range is applied, Then all KPIs, charts, and tables recalculate to reflect the selection and remain consistent across widgets. Given at least 100k trigger events in the selected range, When the dashboard loads or filters change, Then results render within 2 seconds at the 95th percentile. Given raw event counts in the data store, When totals are compared to dashboard aggregates, Then counts reconcile within ±0.5% or ±5 events (whichever is greater). Given a KPI or chart, When the user drills down (e.g., clicks a rule_version bar), Then a detail table shows contributing events with columns: timestamp, caregiver_id, agency_id, payer_id, rule_version, nudge_id, outcome (accepted/dismissed/ignored), time_to_fix_ms.
Experimentation: A/B Thresholds and Copy Variants
Given an Admin defines an experiment on a rule_version, When they create variants (Control + up to 2 variants) with threshold and copy text differences, Then the system assigns caregivers using randomized, stable assignment at caregiver_id granularity with default 50/50 (or 33/33/33) split and supports custom ratios. Given active experimentation, When a caregiver is first exposed to the rule during the experiment window, Then the assignment is persisted for 90 days or until the experiment ends, whichever comes first. Given experiment exposures and outcomes, When viewing the experiment results, Then the dashboard shows per-variant metrics (trigger_rate, acceptance_rate, false_positive_rate, time_to_fix median/p90) and a minimum detectable effect and 95% CI once n ≥ 500 exposures per variant. Given overlapping experiments, When an Admin attempts to start another experiment on the same rule+payer+agency, Then the system blocks it with a clear error and suggests scheduling or scoping changes. Given an experiment is ended (manually, or auto-stopped once statistical power is reached), When the rule is promoted, Then the selected winning configuration becomes a new rule_version and the change log records author, rationale, and timestamp.
Alert Fatigue Monitoring and Controls
Given caregiver-level trigger events, When computing alert fatigue, Then the system calculates daily and 7-day rolling triggers_per_caregiver, average time_between_nudges, and dismissal_rate to produce a fatigue_score 0–100. Given configurable thresholds, When fatigue_score exceeds the agency-defined limit, Then the system caps nudges at max_nudges_per_shift (default 5) and defers lower-priority nudges with a reason code (fatigue_cap) logged. Given capped or deferred nudges, When reviewing the fatigue monitor view, Then Managers see caregivers and rules impacted, counts of capped nudges, and suggested tuning actions. Given high dismissal_rate (>60%) for a rule over 7 days with >200 nudges, When the condition persists, Then the system creates a tuning suggestion and notifies the rule owner via in-app alert and email. Given any fatigue-related suppression, When exporting analytics, Then suppression events are included with fields: caregiver_id, rule_version, cap_reason, suppressed_count, timestamp.
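The fatigue score above blends nudge volume, dismissal behavior, and inter-nudge spacing into a 0-100 value that gates the per-shift cap. The weights and saturation points in this sketch are illustrative assumptions, not spec values:

```python
def fatigue_score(triggers_7d, dismissal_rate, avg_gap_min):
    """Blend volume, dismissals, and inter-nudge spacing into a 0-100
    fatigue score. Weights are illustrative, not from the spec."""
    volume = min(triggers_7d / 70.0, 1.0)             # 10+/day saturates
    spacing = min(30.0 / max(avg_gap_min, 1.0), 1.0)  # gaps under 30 min hurt
    score = 100 * (0.4 * volume + 0.4 * dismissal_rate + 0.2 * spacing)
    return round(score, 1)

def should_defer(score, shown_this_shift, limit=80.0, max_per_shift=5):
    """Defer a lower-priority nudge (reason code fatigue_cap) once the
    score exceeds the agency limit and the per-shift cap is reached."""
    return score > limit and shown_this_shift >= max_per_shift
```

Deferred nudges would be logged with the `fatigue_cap` reason code so the fatigue monitor view and exports can report them.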
Export APIs for BI Tools
Given an authorized OAuth2 client (client credentials flow), When it requests /exports/triggers with valid scope, Then the API returns 200 with cursor-based pagination and schema fields: event_id, timestamp, agency_id, caregiver_id, payer_id, rule_version, nudge_text_variant, outcome, time_to_fix_ms, is_false_positive, experiment_id, fatigue_cap_flag. Given large datasets, When the client paginates using next_cursor, Then the API returns stable ordering by timestamp,event_id and supports backfills up to 400 days. Given filter parameters (date_from/date_to, agency_id, payer_id, rule_version, outcome), When applied, Then results include only matching records and the server responds within 3 seconds for up to 1M records per export job. Given rate limits of 600 requests/min per client, When limits are exceeded, Then the API returns 429 with Retry-After and no data loss on subsequent retries. Given PII/PHI constraints, When exporting, Then only de-identified caregiver_id and agency_id are included (no names or notes text) unless the client has role=data_admin, in which case minimally necessary fields are added under a signed data use agreement flag.
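Stable ordering by timestamp and event_id with a next_cursor is classic keyset pagination. A sketch of how the cursor could be encoded and applied (the cursor format and `page` helper are assumptions, not the actual API):

```python
import base64
import json

def encode_cursor(ts, event_id):
    """Opaque next_cursor encoding the last (timestamp, event_id) seen."""
    return base64.urlsafe_b64encode(json.dumps([ts, event_id]).encode()).decode()

def decode_cursor(cur):
    return tuple(json.loads(base64.urlsafe_b64decode(cur)))

def page(rows, cursor=None, limit=100):
    """Keyset pagination over rows with stable (timestamp, event_id)
    ordering; returns (page, next_cursor or None)."""
    rows = sorted(rows, key=lambda r: (r["timestamp"], r["event_id"]))
    if cursor:
        ts, eid = decode_cursor(cursor)
        rows = [r for r in rows if (r["timestamp"], r["event_id"]) > (ts, eid)]
    out = rows[:limit]
    next_cursor = (encode_cursor(out[-1]["timestamp"], out[-1]["event_id"])
                   if len(rows) > limit else None)
    return out, next_cursor
```

Keyset cursors stay stable under concurrent inserts and support long backfills, unlike offset pagination.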
Feedback Loop: 'Not Relevant' and Adaptive Suppression
Given a displayed nudge, When a caregiver taps Not Relevant and selects a reason (Other optional free text allowed), Then the event is recorded with contextual features (time, payer, task type, note keywords, rule_version, location flag) within 200 ms. Given accumulated Not Relevant events for a specific rule pattern exceed 30 in 14 days with dismissal_rate>50%, When the nightly tuner runs, Then it proposes either threshold tightening or context-based suppression and creates a draft change. Given a draft change, When a Manager approves it, Then the system publishes a new rule_version or a suppression list entry with effective_from timestamp and links the approval to the audit log. Given a published change, When the same context recurs, Then nudges are reduced by at least 20% over the next 7 days without increasing false_negative rate beyond the configured guardrail (<=5% increase measured via holdout cohort). Given a mistaken Not Relevant, When a caregiver undoes within 10 minutes, Then the event is retracted and excluded from tuning datasets.
Privacy, Retention, and Access Controls
Given role-based access (Admin, Manager, Caregiver), When accessing analytics, Then Caregivers see only their own metrics, Managers see their agency, and Admins see all assigned agencies; PHI/PII fields are masked unless minimum necessary access is granted. Given data in transit and at rest, When inspected, Then all exports and APIs use TLS 1.2+ and storage is encrypted with AES-256; access is logged with user_id, timestamp, resource, action, and IP. Given payer- or agency-specific retention settings, When configured (e.g., 2 years for payer X, 7 years for regulatory audit summaries), Then the system enforces deletion or archival automatically and provides evidence logs of disposition. Given a Right-to-Delete request for a caregiver, When processed, Then personal identifiers are purged or pseudonymized within 30 days without breaking aggregate analytics (aggregates remain via differential privacy-safe counts where needed). Given an access attempt outside assigned org boundaries, When detected, Then the system denies access with 403 and records a security event with alerting to Admins.
Audit-Ready Regulatory Summaries
Given a compliance auditor request, When a Manager clicks Generate Audit Summary and selects a date range and payer, Then a PDF and CSV are produced within 60 seconds containing: counts by rule_version, acceptance/dismissal/false-positive rates, time_to_fix distributions, experiment summaries, fatigue suppression events, and change log entries. Given privacy requirements, When generating summaries, Then the output excludes free-text notes and includes only de-identified caregiver and member IDs unless a regulator role is used with explicit legal basis documented. Given the generated report, When validated, Then each metric can be traced to underlying immutable event IDs stored in an append-only audit log with hash chaining and signature of the report payload. Given prior reports, When re-generated for the same parameters, Then outputs match prior results byte-for-byte or include a version note if a backfill correction was applied with justification. Given retention policy of 7 years for audit summaries, When the retention period is reached, Then summaries are archived to WORM storage and retrieval logs are maintained.

RoleFit Cards

Tailors coaching to caregiver role, credential, payer, and client diagnosis. Shows only relevant cues—RN wound care phrases vs. HHA ADL prompts—so guidance feels personal, reduces noise, and boosts completion rates without slowing visits.

Requirements

Context Profile Resolver
"As a caregiver, I want the app to automatically recognize my role, credential, payer, and my client's diagnosis for this visit so that I only see guidance that fits my situation without extra steps."
Description

Compute a real-time visit context profile by resolving caregiver role and credentials, client diagnosis codes and care plan, payer and plan policy, visit type, and current workflow steps. Pull required attributes from existing CarePulse entities (user profile, scheduled visit, client chart) and enrich locally with recent voice clip keywords and available IoT sensor signals. Cache non-PHI lookup tables on device and refresh on login to support offline use. Expose a lightweight context object to downstream components to drive RoleFit Card selection without additional network calls. Guarantee sub-200 ms local resolution on mid-tier devices and graceful degradation when some inputs are unavailable, ensuring the feature never blocks documentation or routing.

Acceptance Criteria
Real-time Context Resolution with All Inputs Available
Given the device is online and the user is authenticated for a scheduled visit linked to a client chart and payer policy And the caregiver profile includes role and credentials And the current workflow step is Start Visit When the Context Profile Resolver is invoked Then it returns a context object containing caregiver.role, caregiver.credentials[], client.diagnosisCodes[], carePlan.items[], payer.planId, payer.policyRules[], visit.type, workflow.step And all values reflect the latest records at invocation time And the context includes meta.timestamp and meta.resolverVersion
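A lightweight context object with these fields might be modeled as a flat dataclass whose defaults cover the graceful-degradation case (missing segments become empty or null). Field names here are flattened renderings of the dotted names above and are assumptions:

```python
import json
from dataclasses import dataclass, asdict, field
from typing import Optional

@dataclass
class VisitContext:
    """Flattened sketch of the resolver's output; defaults represent
    missing segments under graceful degradation."""
    caregiver_role: str
    caregiver_credentials: list
    client_diagnosis_codes: list = field(default_factory=list)
    care_plan_items: list = field(default_factory=list)
    payer_plan_id: Optional[str] = None
    payer_policy_rules: list = field(default_factory=list)
    visit_type: Optional[str] = None
    workflow_step: Optional[str] = None
    meta: dict = field(default_factory=dict)  # timestamp, resolverVersion, incomplete, ...

    def serialized(self) -> bytes:
        """Serialized payload; the spec caps this at 10 KB."""
        return json.dumps(asdict(self)).encode()
```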
Local Resolution Performance on Mid-tier Devices
Given a device meeting the mid-tier spec (e.g., 6 CPU cores, 3–4 GB RAM) with a warm app state When the Context Profile Resolver is invoked 100 times under typical payloads Then the p95 end-to-end resolution time is ≤ 200 ms measured from call to returned context And the p99 end-to-end resolution time is ≤ 300 ms And no single invocation pegs CPU above 70% for > 100 ms And no frame drops or UI jank are observed during invocations triggered from UI interactions
Offline Use with Cached Non-PHI Lookups
Given the user last logged in within 24 hours and device caches were refreshed at login And the device is offline (airplane mode enabled) When the Context Profile Resolver is invoked Then it completes without issuing any network requests And it uses cached non-PHI lookup tables for role/credential metadata, payer policy rules, and diagnosis code metadata And the produced context includes meta.source = "cache" and meta.cacheAge ≤ 24h And verification of cached payload schemas shows no client identifiers, names, dates of birth, or free text are stored
Voice Clip and IoT Signal Enrichment
Given a voice clip was recorded within the last 2 minutes and keyword extraction has completed And a connected IoT sensor stream is available with current readings When the Context Profile Resolver is invoked Then context.enrichment.voice.keywords includes the top 5 keywords with score ≥ threshold and timestamp ≤ 2 minutes old And context.enrichment.sensors includes derived status flags (e.g., heartRateStatus) per defined thresholds And enrichment entries older than 10 minutes are excluded And if either voice or sensors are unavailable, the other enrichment still appears without error
Graceful Degradation with Missing Inputs
Given the client chart is missing payer policy data and no IoT signals are available When the Context Profile Resolver is invoked Then it returns a context object without throwing errors And missing segments are represented as nulls or empty arrays with meta.incomplete = true and meta.missingFields including payer.policyRules and enrichment.sensors And RoleFit card selection proceeds using available fields without additional retries or network calls
Non-Blocking Behavior for Documentation and Routing
Given opening the documentation screen or updating the route triggers the Context Profile Resolver When the resolver exceeds 200 ms or encounters an error Then the target screen finishes loading without added latency > 50 ms compared to baseline And a fallback minimal context (caregiver.role, visit.type) is delivered to RoleFit within 100 ms And the error is logged asynchronously without presenting a blocking dialog to the user
Lightweight Context Object and No Additional Network Calls
Given the RoleFit selection engine requests the visit context When the Context Profile Resolver returns the context Then the serialized context payload size is ≤ 10 KB And it contains only the whitelisted fields: caregiver.role, caregiver.credentials[], client.diagnosisCodes[], carePlan.items[], payer.planId, payer.policyRules[], visit.type, workflow.step, enrichment.*, meta.* And zero network requests are made during retrieval (verified via network inspection) And RoleFit selection completes successfully without issuing additional fetches for context
Adaptive Card Selection Engine
"As an operations manager, I want role- and payer-aware cues to be selected by configurable rules so that our teams get accurate, consistent guidance across agencies and payers."
Description

Select and rank RoleFit Cards using a configurable rule engine that matches the context profile to card eligibility rules across role, credential, payer, diagnosis, visit type, and task state. Support boolean logic, effective dates, plan overrides, and mutually exclusive groups to prevent noise. Provide deterministic tie-breaking and frequency capping so cards remain concise and non-repetitive during a visit. Run on-device where possible with a compact rules bundle; fall back to server evaluation when needed. Integrate with CarePulse feature flags to enable phased rollouts and with the documentation module to surface only cards that can map to structured note fields or compliant phrases.

Acceptance Criteria
Attribute Rule Matching and Boolean Logic
Given a card with rule: (role = "RN" AND credential IN {"RN","RN-BSN"}) AND NOT (visitType = "Companion") And a context with role = "RN", credential = "RN-BSN", visitType = "Wound Care" When the engine evaluates eligibility Then the card is returned Given the same card rule And a context with role = "RN", credential = "LPN", visitType = "Wound Care" When the engine evaluates eligibility Then the card is not returned due to credential mismatch Given the same card rule And a context with role = "RN", credential = "RN-BSN", visitType = "Companion" When the engine evaluates eligibility Then the card is not returned due to NOT clause Given a card with rule: (payer IN {"Medicare","Medicaid"} OR diagnosis IN {"G30.9"}) And a context with payer = "Commercial", diagnosis = "G30.9" When the engine evaluates eligibility Then the card is returned Given the same card rule And a context with payer = "Commercial", diagnosis = "F41.1" When the engine evaluates eligibility Then the card is not returned
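Rules like the examples above can be represented as nested AND/OR/NOT/EQ/IN nodes and evaluated recursively against the context. The node shape below is an assumption for illustration; any declarative encoding with the same operators works:

```python
def eligible(rule, ctx):
    """Recursively evaluate a card eligibility rule (nested boolean nodes)
    against a visit context dict."""
    op = rule["op"]
    if op == "and":
        return all(eligible(r, ctx) for r in rule["args"])
    if op == "or":
        return any(eligible(r, ctx) for r in rule["args"])
    if op == "not":
        return not eligible(rule["args"][0], ctx)
    if op == "eq":
        return ctx.get(rule["field"]) == rule["value"]
    if op == "in":
        return ctx.get(rule["field"]) in rule["values"]
    raise ValueError(f"unknown op {op}")
```

Encoding the first sample rule, `(role = "RN" AND credential IN {"RN","RN-BSN"}) AND NOT (visitType = "Companion")`, as such a tree reproduces the expected outcomes for the three contexts listed.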
Effective Dates and Plan Overrides
Given a card with effectiveStart = 2025-09-01 and effectiveEnd = 2025-09-30 And a visitDate = 2025-09-15 When the engine evaluates eligibility Then the card is returned Given the same card And a visitDate = 2025-10-01 When the engine evaluates eligibility Then the card is not returned due to effective window Given a client-level plan override that excludes the card And visitDate within the effective window When the engine evaluates eligibility Then the card is not returned and the suppression reason is "override_exclude" Given a client-level plan override that includes the card And the base rule would exclude it When the engine evaluates eligibility Then the card is returned and the reason includes "override_include" Given conflicting overrides at org and client plan levels When the engine evaluates eligibility Then the client plan-level override takes precedence and the audit log records the applied override id and level
Mutually Exclusive Groups
Given two eligible cards A and B with mutexGroup = "wound-dressing" and groupPriority A = 20, B = 10 When the engine evaluates eligibility Then only card A is returned and card B is suppressed with reason "mutually_exclusive" Given three eligible cards C, D, E with the same mutexGroup and equal groupPriority When the engine evaluates eligibility Then exactly one card is returned based on deterministic tie-breaker and the others are suppressed with reason "mutually_exclusive" Given eligible cards from different mutex groups When the engine evaluates eligibility Then all may be returned subject to global limits
Deterministic Tie-Breaking
Given multiple eligible cards with equal score and equal priority And the configured tiebreak chain is [priority DESC, updatedAt DESC, id ASC] When the engine orders the results Then the returned order strictly follows the configured chain and is stable across repeated runs Given identical inputs evaluated on-device and on-server When the engine orders the results Then the returned set and order of card ids are identical Given two evaluations with the same inputs and configuration When results are compared Then the order is identical across runs
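The configured tiebreak chain `[priority DESC, updatedAt DESC, id ASC]` maps directly onto a composite sort key, which is what makes on-device and server ordering identical for identical inputs. A sketch (field names are assumptions; `updatedAt` is treated as a numeric timestamp):

```python
def order_cards(cards):
    """Deterministic ordering per the tiebreak chain
    [priority DESC, updatedAt DESC, id ASC]; numeric fields are negated
    to express DESC within a single ascending sort."""
    return sorted(cards, key=lambda c: (-c["priority"], -c["updatedAt"], c["id"]))
```

Because the key is total (id is unique and sorted ASC last), the result is independent of input order and stable across runs and platforms.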
Frequency Capping Within Visit
Given a card C with frequencyCapPerVisit = 1 And C has already been shown once during the current visit When the engine evaluates again Then C is not returned and suppression reason is "frequency_cap" Given a card D with cooldownPerTask = 15 minutes And D was shown 10 minutes ago for task = "Wound Dressing" When the engine evaluates within the same task Then D is not returned Given the cooldown has elapsed When the engine evaluates within the same task Then D may be returned Given a new visit has started When the engine evaluates Then per-visit caps are reset and prior visit exposures do not suppress results Given the device is offline When exposures are recorded locally Then exposure counts persist and sync on reconnect without duplicate surfacing across devices
On-Device Evaluation with Server Fallback
Given a valid local rules bundle version = 3 is cached And feature "local_rules_eval" is enabled When the engine evaluates Then evaluation runs on-device and completes within 150 ms p95 for 50 eligible cards Given the local rules bundle is missing or expired When the engine evaluates Then the engine calls the server evaluation endpoint and completes within 800 ms p95 on LTE Given the server is unavailable When the engine evaluates Then the engine falls back to the last-known-good local bundle if present, else returns an empty result with error code "evaluation_unavailable" and no crash Given identical inputs evaluated locally and via server When results are compared Then the returned card ids and order are identical Given local evaluation memory usage exceeds 50 MB during processing When the engine detects the condition Then the engine aborts local evaluation and switches to server fallback
Feature Flags and Documentation Mapping Gate
Given feature flag "rolefit_cards" is enabled for org = "Alpha" and disabled for org = "Beta" When the engine evaluates for users in each org Then only users in org "Alpha" receive cards Given a card lacks a valid mapping to structured note fields or compliant phrases for the current visitType and payer When the engine evaluates eligibility Then the card is filtered out with suppression reason "no_mapping" Given a card has mapping for locale = "en-US" And the user locale is "es-US" And no "es-US" mapping exists When the engine evaluates Then the card is filtered out with suppression reason "no_mapping_locale" Given remote feature flag values change during a session When the next evaluation runs Then new flag values are honored without requiring app restart
Card Content Management & Versioning
"As a clinical supervisor, I want to manage and publish tailored card content with version control so that guidance stays accurate, compliant, and aligned with our documentation standards."
Description

Provide an admin experience to author, localize, and version RoleFit Card content with templates for RN wound care phrases, HHA ADL prompts, medication reminders, and payer-specific wording. Allow tagging by role, credential, diagnosis (ICD-10), visit type, and payer plan, with start/end effective dates. Include preview against sample contexts, draft/publish workflows, rollback to prior versions, and change history with editor and timestamp. Deliver content to devices via delta updates and validate for conflicts and missing mappings before publish. Ensure content blocks map to structured note fields and phrase libraries used by CarePulse auto-population to maintain consistency across documentation and reporting.

Acceptance Criteria
Author and Publish Localized RN Wound Care Template
Given I am an admin with content-author permissions When I create a new RoleFit Card using the RN Wound Care template for locales en-US and es-US And I add payer-specific wording for Medicare and tag Role=RN, Credential=RN, ICD-10=L97.909, Visit Type=Wound Care Follow-up, Payer Plan=Medicare And I set Effective Start Date to a future date and Effective End Date to null Then the system saves the content as Draft with a unique version identifier And the Draft passes schema validation and localization completeness checks for required locales And when I click Publish, the system publishes the Draft, records editor and timestamp, and marks version status as Published And the Published content is queryable by locale, role, credential, diagnosis, visit type, and payer plan
Tagging and Effective Dates Validation
Given a Draft RoleFit Card is ready to publish When required tags (role, credential, ICD-10, visit type, payer plan) are missing Then Publish is blocked and field-level errors identify each missing tag When Effective End Date is before Effective Start Date Then Publish is blocked with a date-range validation error When another Published card exists with identical tag set and overlapping effective dates Then Publish is blocked with a conflict error that lists the conflicting card IDs and date ranges When an ICD-10 code is not valid per the current ICD-10 catalog Then saving or publishing is blocked with an invalid-diagnosis error
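The overlap conflict check above is an interval-intersection test over effective windows, with an open end date treated as unbounded. A sketch (field names are assumptions):

```python
from datetime import date

def overlaps(start_a, end_a, start_b, end_b):
    """True when two effective windows intersect; a null end date
    extends the window indefinitely."""
    end_a = end_a or date.max
    end_b = end_b or date.max
    return start_a <= end_b and start_b <= end_a

def publish_conflicts(draft, published):
    """Published cards with an identical tag set and an overlapping
    effective window block publish; returns the conflicting card IDs."""
    return [c["id"] for c in published
            if c["tags"] == draft["tags"]
            and overlaps(draft["start"], draft["end"], c["start"], c["end"])]
```

The returned IDs and their date ranges would populate the conflict error shown to the author.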
Preview Against Sample Contexts
Given I open Preview for a Draft RoleFit Card When I select Sample Context: Role=HHA, Credential=HHA, ICD-10=I10, Visit Type=ADL Support, Payer Plan=Medicaid, Locale=en-US Then the Preview renders only HHA-relevant cues and shows resolved variables and payer wording And switching Locale to es-US renders the localized text for all visible cues And the Preview displays the mapped structured note fields alongside each content block And the Preview loads within 800 ms on a median device profile
Versioning and Rollback
Given a RoleFit Card has Published versions v1 and v2 and a Draft v3 When I select v1 and click Rollback Then the system creates a new Draft that is a copy of v1, links history to the original, and leaves v1 and v2 immutable And when I Publish the rollback Draft, a new Published version is created with a new identifier and full history preserved And clients requesting the card receive the latest Published version only
Change History Audit Trail
Given a RoleFit Card has undergone create, edit, publish, and rollback actions When I view Change History Then I see a chronological list with action type, editor identity, timestamp (ISO 8601, UTC), version identifiers, and a diff of changed fields And I can filter history by action type and editor and export it to CSV And all history entries are read-only and tamper-evident (signed with system checksum)
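The "tamper-evident (signed with system checksum)" requirement above can be met by chaining each history entry's checksum to its predecessor, so editing any earlier entry invalidates every later checksum. A minimal sketch (illustrative only; field names and the SHA-256 choice are assumptions, not the shipped design):

```python
import hashlib
import json

def entry_checksum(entry: dict, prev_checksum: str) -> str:
    """Chain each entry's checksum to its predecessor so any later edit
    invalidates the rest of the chain (tamper-evident, not tamper-proof)."""
    payload = json.dumps(entry, sort_keys=True) + prev_checksum
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def append_entry(log: list, entry: dict) -> None:
    """Append a read-only history entry with its chained checksum."""
    prev = log[-1]["checksum"] if log else ""
    log.append({**entry, "checksum": entry_checksum(entry, prev)})

def verify_log(log: list) -> bool:
    """Recompute the chain; any edited or reordered entry fails verification."""
    prev = ""
    for row in log:
        body = {k: v for k, v in row.items() if k != "checksum"}
        if row["checksum"] != entry_checksum(body, prev):
            return False
        prev = row["checksum"]
    return True
```

In production the chain head would additionally be signed server-side; the sketch only shows why in-place edits become detectable.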
Delta Update Delivery to Devices
Given content version v10 is Published When mobile devices with v8 sync while online Then each device receives only the delta from v8 to v10 containing changed card IDs and locales, not the full catalog And the update applies within 2 minutes of publish for online devices and on next connection for offline devices And after sync, device content hash matches the server-provided hash and the client reports v10 as current And unknown fields are ignored by clients without causing errors
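The delta-sync criterion above (devices on v8 receive only the v8→v10 changes) can be sketched by diffing catalog snapshots keyed by (card_id, locale). This is an illustrative model, not the wire format; the hash-per-entry representation is an assumption:

```python
def compute_delta(old: dict, new: dict) -> dict:
    """Diff two catalog snapshots keyed by (card_id, locale) -> content hash.
    Ships only changed/added entries and removed keys, never the full catalog."""
    changed = {k: new[k] for k in new if old.get(k) != new[k]}
    removed = [k for k in old if k not in new]
    return {"changed": changed, "removed": removed}

def apply_delta(catalog: dict, delta: dict) -> dict:
    """Apply a delta on-device; the result should hash-match the server."""
    out = {k: v for k, v in catalog.items() if k not in delta["removed"]}
    out.update(delta["changed"])
    return out
```

After applying, the client can compare an overall content hash against the server-provided hash to confirm it now reports v10 as current.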
Content Mapping Consistency with Phrase Libraries
Given a Draft contains content blocks mapped to structured note fields and phrase library keys When I run Validate before Publish Then the validator confirms that each content block has a valid mapping to a structured note field and an existing phrase key And if any mapping is missing or points to a deprecated key, Publish is blocked with specific error messages listing the block IDs And after Publish, generating a sample visit note auto-populates the mapped fields using the same phrase keys, producing identical text as shown in Preview And audit-ready report generation uses the structured fields from the card without discrepancies
Mobile Card UI with One-tap Insert
"As a caregiver on a time-pressed visit, I want unobtrusive cards I can insert into my notes with one tap so that I can stay compliant without slowing down care."
Description

Design a mobile-first card interface that surfaces only a small, prioritized set of relevant cues inline with the visit workflow. Support quick interactions: swipe to dismiss, tap to expand details, and one-tap insert of approved phrases into the active visit note section with proper attribution and time-stamping. Respect accessibility settings, dark mode, and large text, and maintain 60fps scrolling on low-end devices. Preload the next likely cards to avoid jank, and operate fully offline with queued inserts that sync when connectivity returns. Avoid obstructing navigation, timers, or voice capture, and ensure cards can be recalled from a compact tray within two taps.

Acceptance Criteria
Inline Role-Fit Card Surfacing
Given the caregiver profile includes role, credential, payer context, and the client has a documented diagnosis When the caregiver opens a visit and reaches a workflow step with an active note section Then display an inline stack of at most 3 cards prioritized by relevance to role/credential/payer/diagnosis And exclude any cards that do not match the current role/credential/payer/diagnosis context And position cards inline with the active note section without requiring extra navigation
One-Tap Insert with Attribution & Timestamp
Given a card with an approved phrase is expanded and a visit note section is active When the caregiver taps the Insert action once Then the phrase is inserted at the cursor in the active visit note section without overwriting existing content And the inserted text includes attribution to the card (name/ID) and user ID And a local timestamp in ISO 8601 with timezone is appended or metadata-stored and visible in the audit log And an audit event is recorded linking visit ID, note section, card ID, user ID, and timestamp And the insert succeeds while offline and is queued for sync
Swipe to Dismiss and Tap to Expand
Given a card is visible in the inline stack When the caregiver swipes the card horizontally beyond the dismissal threshold Then the card is dismissed from the stack for the current session and does not resurface automatically And the dismissed card becomes available in the compact tray for recall Given a card is collapsed When the caregiver taps the card Then the card expands to reveal full details and the Insert action And only one card is expanded at a time; expanding a new card collapses the previously expanded card
Performance & Preloading on Low-End Devices
Given a reference low-end device (Android 10, 2GB RAM) and a visit with 10+ candidate cards When the caregiver scrolls the visit screen with cards in view for 30 seconds Then average frame time is ≤16.7ms and P95 frame time is ≤32ms (dropped frames <1%) Given the top card is in view When the caregiver pauses scrolling for 200ms Then the next 2 likely cards are preloaded (text and local media) so that opening either renders in ≤200ms P95 And preloading does not degrade frame times by more than 2ms during active scrolling And there are no main-thread tasks >50ms P95 during card interactions
Offline Operation & Sync of Queued Inserts
Given the device has no network connectivity When the caregiver performs a one-tap insert from a card Then the phrase appears immediately in the active note section and an insert event is queued locally with visit ID, section, card ID, user ID, and timestamp And the queued event persists across app restarts Given connectivity is restored When background sync runs Then all queued inserts are transmitted and applied within 30 seconds And the remote note reflects the same insertion order without data loss And the audit trail includes a server-acknowledged timestamp for each insert And on content conflicts the client merges without surfacing merge markers to the caregiver
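The offline-insert criteria above (persisted queue, original insertion order preserved on sync) can be sketched with a monotonic sequence number per event, so ordering survives even if device clocks drift. Illustrative Python; field names are assumptions:

```python
import itertools

class InsertQueue:
    """Queue one-tap inserts while offline; flush in original order once
    connectivity returns. seq preserves insertion order even if clock skew
    makes timestamps unreliable."""

    def __init__(self):
        self._seq = itertools.count()
        self.pending = []

    def enqueue(self, visit_id, section, card_id, user_id, ts):
        self.pending.append({
            "seq": next(self._seq), "visit_id": visit_id, "section": section,
            "card_id": card_id, "user_id": user_id, "ts": ts,
        })

    def flush(self, send) -> bool:
        """send(event) -> bool; stop at the first failure so order is kept
        and the remaining events are retried on the next sync."""
        while self.pending:
            if not send(self.pending[0]):
                return False
            self.pending.pop(0)
        return True
```

A real implementation would persist `pending` to local storage so the queue survives app restarts, as the criterion requires.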
Accessibility, Dark Mode, and Large Text Support
Given system dark mode is enabled When cards are displayed Then all card surfaces and text use dark theme colors with text contrast ratios ≥4.5:1 Given system font size/scaling is set to the maximum (200%) When cards, actions, and tray are rendered Then all text scales without truncating primary actions and all tappable targets are ≥48x48dp Given a screen reader is active When focusing on a card Then the card has descriptive labels, logical focus order (title → content → actions), and exposes actions for Expand, Insert, and Dismiss Given Reduce Motion is enabled When expanding, collapsing, or dismissing cards Then motion animations are replaced with minimal fade transitions ≤150ms
Non-Obstruction of Core Controls & Two-Tap Recall from Tray
Given visit navigation, timers, and voice capture controls are visible When cards are shown inline Then cards do not overlap or block interaction with navigation, timer, or voice capture controls and voice capture continues uninterrupted Given a card was dismissed When the caregiver opens the compact tray and selects that card Then the card is restored to the stack within at most two taps (tap tray, tap card) And the tray is reachable from the visit screen without leaving the workflow and remains usable with large text and dark mode
Payer Compliance Mapping & Audit Trail
"As a compliance officer, I want card usage and inserted text tied to payer rules and recorded for each visit so that we can produce audit-ready evidence on demand."
Description

Map each card and inserted phrase to payer policy references, plan identifiers, and visit requirements, storing these links with the visit record. Log card impressions, dismissals, and insertions with user, timestamp, context profile hash, and content version to support one-click, audit-ready reports. Validate that inserted phrases meet payer wording constraints before committing to the note, and flag conflicts for user confirmation. Expose exports and APIs consumed by CarePulse compliance reporting so agencies can demonstrate that the guidance shown and the documentation captured were aligned with the payer and diagnosis at the time of service.

Acceptance Criteria
Map Persistence on Visit Record
Given a caregiver opens a visit tied to payer X and plan Y When a RoleFit card is rendered or a phrase from that card is inserted Then the visit record stores payer_id, plan_id, policy_reference_ids, visit_requirement_ids, card_id, phrase_id (nullable if not inserted), and content_version And the stored mapping is persisted with the visit and retrievable via visit compliance detail API And required fields (payer_id, plan_id, card_id, content_version) are non-null for every stored mapping entry And once the visit is submitted/locked, the stored mappings become immutable
Event Logging for Card Lifecycle
Given user U opens a visit with context profile hash H and content version V When a RoleFit card is shown to the user Then an impression event is recorded with event_type=impression, user_id=U, timestamp (UTC ISO-8601), context_profile_hash=H, card_id, payer_id, plan_id, visit_id, content_version=V And when the user dismisses the card, a dismiss event is recorded with event_type=dismiss and optional dismissal_reason And when the user inserts a phrase, an insert event is recorded with phrase_id and insertion_location And events are append-only, orderable by timestamp, and multiple impressions create distinct events
Pre-Commit Payer Wording Validation
Given payer X for diagnosis D has active wording constraints C When the user attempts to insert phrase P into the visit note Then P is validated against constraints C before the note is committed And if validation passes, the phrase is inserted and mapped to its policy_reference_ids and visit_requirement_ids And if validation fails, a conflict dialog lists violated constraints and offers: view allowed wording, replace with compliant variant, or override with justification And if the user overrides, a justification of at least 15 characters is required and an override event with violated_constraints and justification is logged And validation completes within 500 ms on a mid-tier mobile device using locally cached rules
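The pre-commit validation and override rules above can be sketched as a constraint check returning the violated constraint IDs, plus the 15-character justification gate. A minimal sketch; the "forbid"/"require" constraint kinds are an assumed simplification of real payer wording rules:

```python
def validate_phrase(phrase: str, constraints: list) -> list:
    """Check a phrase against payer wording constraints before commit.
    Each constraint: {"id", "kind": "forbid" | "require", "text"}.
    Returns the list of violated constraint ids (empty list = pass)."""
    violated = []
    lowered = phrase.lower()
    for c in constraints:
        present = c["text"].lower() in lowered
        if c["kind"] == "forbid" and present:
            violated.append(c["id"])
        elif c["kind"] == "require" and not present:
            violated.append(c["id"])
    return violated

def can_override(justification: str) -> bool:
    """Overrides require a justification of at least 15 characters."""
    return len(justification.strip()) >= 15
```

Because the check runs against locally cached rules, it can meet the 500 ms budget without a network round trip.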
One-Click Audit Report Generation
Given an auditor requests the compliance report for visit V When the user triggers one-click audit report Then the system generates a report containing visit metadata, payer_id, plan_id, diagnosis, content_version, context_profile_hash, all card impressions/dismissals/insertions with timestamps, mappings to policy references and visit requirements, final note text segments sourced from RoleFit phrases, and any overrides with justifications And the report is available as JSON and PDF And the report is generated within 3 seconds for a single visit and matches underlying log and mapping records And the report includes a verification checksum and report_version
Compliance Export and API Availability
Given a reporting client with scope compliance.read is authenticated When it calls the compliance export API with date range, payer_id/plan_id filters, and pagination parameters Then the API returns HTTP 200 with a paginated list of mapping and event records including visit_id, user_id, timestamp, event_type, card_id, phrase_id, payer_id, plan_id, policy_reference_ids, visit_requirement_ids, content_version, context_profile_hash, override_justification (if any) And CSV and JSON formats are supported via an accept header or query parameter And unauthorized or insufficient-scope requests are rejected with 401/403 And the API is versioned (e.g., v1); breaking changes require a new version And p95 response time for 10,000-record pages is under 2 seconds in staging benchmarks
Content Version Binding and Reference Integrity
Given content library version V is the active version at time T When a RoleFit card is displayed or a phrase is inserted Then the logged content_version equals V and policy_reference_ids resolve to the definitions as of version V And if a policy reference is later updated or deprecated, visits logged under V continue to resolve to archived V definitions And a daily integrity job verifies there are zero orphaned card_id/phrase_id/policy_reference_ids, otherwise raising alerts that list the specific identifiers
Offline Capture and Sync of Compliance Logs
Given the device is offline during a visit When RoleFit cards are shown, dismissed, or phrases are inserted Then mapping entries and lifecycle events are queued locally with original timestamps and a monotonic sequence number And upon reconnection, the queue syncs in order with idempotency keys to prevent duplicate server records And if sync fails, the app retries with exponential backoff and surfaces a non-blocking banner until success And no insertion is committed to the note without local validation; if remote rules are required, the insertion is blocked with a clear message until connectivity returns
Engagement Analytics & A/B Testing
"As a product owner, I want to measure how RoleFit Cards affect completion rates and documentation time so that we can iterate on content and rules to maximize impact."
Description

Capture anonymized telemetry on card display rate, interaction types, insert conversions, task completion impact, and dwell time, attributed to context dimensions (role, credential, payer, diagnosis) and content versions. Provide dashboards and export to existing CarePulse analytics to track completion-rate lift and time-on-documentation reductions. Enable remote A/B tests for card wording, order, and selection thresholds with guardrails to prevent compliance regressions. Feed insights back into content management and rule tuning to continually reduce noise and improve outcomes.

Acceptance Criteria
Event Capture, Privacy, and Delivery Reliability
Given a caregiver uses RoleFit Cards during a visit, When the app logs events of types [card_displayed, card_expanded, card_dismissed, suggestion_inserted, voice_clip_recorded, note_autofilled, task_completed], Then each event includes event_id, event_type, timestamp_ms, session_id, visit_id_hash, caregiver_id_hash, content_version_id and is persisted to the telemetry store with 99.5% success within 5 minutes of occurrence. Given the device is offline, When events are generated, Then they are queued locally with encryption at rest and are uploaded within 2 minutes of reconnection preserving order by timestamp. Given PII fields (names, free-text notes, exact GPS), When constructing telemetry, Then no raw PII is transmitted and only salted hashes for identifiers plus coarse location (>=5 km) may be included. Given the same event is retried, When received by the server, Then deduplication by event_id ensures idempotent writes with at-most-once persistence.
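Two mechanisms in the criterion above, salted identifier hashes and idempotent writes keyed by event_id, can be sketched together. Illustrative only; the salt handling and store shape are assumptions:

```python
import hashlib

def pseudonymize(identifier: str, salt: str) -> str:
    """Salted SHA-256 so raw caregiver/visit IDs never leave the device;
    a shared salt keeps hashes joinable across events without exposing PII."""
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()

class EventSink:
    """Server-side dedup: retried deliveries of the same event_id are
    dropped, giving at-most-once persistence."""

    def __init__(self):
        self.stored = {}

    def write(self, event: dict) -> bool:
        if event["event_id"] in self.stored:
            return False  # duplicate delivery; idempotent no-op
        self.stored[event["event_id"]] = event
        return True
```

In practice the salt itself must be managed carefully (e.g., per-tenant, rotated under policy), since a leaked salt enables dictionary attacks on low-entropy identifiers.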
Context and Version Attribution
Given a RoleFit Card event is logged, When context is attached, Then fields role, credential, payer, primary_diagnosis_code, rule_id, selection_threshold, and content_version_id are present and non-null for at least 99% of events. Given a known test caregiver (role RN, credential RN, payer Medicare, diagnosis E11.9), When they trigger a card, Then the event context exactly matches these values. Given content rules determine card selection, When a card is shown, Then the event stores the evaluated rule_id and threshold values used for selection.
Engagement & Impact Dashboard KPIs
Given an admin opens the Analytics dashboard, When a date range and filters (role, credential, payer, diagnosis, content_version_id) are applied, Then the dashboard loads within 3 seconds at P95 and shows KPIs: card display rate per session, median and P90 dwell time, suggestion insert conversion rate, task completion rate delta vs. baseline, and time-on-documentation delta (minutes). Given a KPI widget is clicked, When drilling down, Then the top and bottom 10 cards by lift are listed with sample sizes and confidence intervals. Given filters are cleared, When no filters are applied, Then KPIs reflect all data for the selected period and match exported totals within 1%.
Export to CarePulse Analytics
Given the daily batch export runs at 02:00 UTC, When delivered to the analytics bucket, Then Parquet files partitioned by dt and context exist with documented schema and 99% on-time SLA. Given streaming export is enabled, When events arrive, Then they are forwarded to the analytics pipeline within 30 seconds at P95 with signed requests and retried with exponential backoff on failure. Given a file re-delivery occurs, When the same content is sent, Then exports are idempotent via content-addressed filenames and do not create duplicate records downstream.
Remote A/B Test Configuration & Randomization
Given an experiment targeting RoleFit Card wording, order, or selection thresholds is created, When enabled remotely, Then eligible users are bucketed with stable, uniform randomization (default 50/50) stratified by role, credential, payer, and diagnosis. Given exposure tracking, When a user qualifies for an experiment, Then exactly one variant is assigned per session and an exposure event is logged before any outcome events. Given minimum sample sizes are configured, When thresholds are met, Then the dashboard displays variant lift estimates with p-values and power assumptions using the predefined statistical method.
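Stable, uniform randomization as described above is commonly done by hashing (experiment_id, unit_id) into a bucket, so the same caregiver always lands in the same variant without any server-side assignment table. A minimal sketch under that assumption:

```python
import hashlib

def assign_variant(experiment_id: str, unit_id: str,
                   variants: tuple = ("A", "B")) -> str:
    """Deterministic bucketing: hash(experiment, unit) -> variant.
    The same unit always receives the same variant for a given experiment,
    and distinct experiments randomize independently."""
    digest = hashlib.sha256(f"{experiment_id}:{unit_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    return variants[int(bucket * len(variants)) % len(variants)]
```

Stratification by role/credential/payer/diagnosis would then be handled at analysis time (or by folding the stratum into `experiment_id`), while the hash keeps per-unit assignment stable across sessions.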
Compliance Guardrails & Safe Launch
Given an experiment is configured, When guardrails are defined, Then launch is blocked until thresholds for on-time-visit rate and documentation completeness are set and validated. Given an active experiment, When guardrail metrics breach thresholds for two consecutive hours or 100 affected sessions (whichever first), Then the system auto-pauses the variant and sends alerts in-app and via email within 5 minutes. Given an experiment action occurs (launch, pause, resume, stop), When viewing audit logs, Then entries show actor, timestamp, guardrail definitions, reason, and affected variants.
Insights Feedback into Content & Rule Tuning
Given KPI lifts are computed per card and context, When a card underperforms (lift < 0 with p < 0.05) for any context, Then the CMS flags it with suggested actions: adjust wording, lower selection threshold, or suppress for that context. Given a suggestion is accepted in CMS, When published, Then content_version_id increments, the prior version is archived read-only, and the change links to the supporting analytics snapshot. Given a rule change is deployed, When monitoring post-change for 7 days, Then the dashboard shows pre/post lift comparison and highlights statistically significant improvements or regressions.

Sensor Aware

Detects when expected IoT readings (vitals, activity, medication dispenser) are missing or stale and nudges the caregiver to re‑pair, capture a reading, or log a reason. Keeps notes clinically complete and defensible with minimal extra taps.

Requirements

Smart Staleness Detection
"As a caregiver on a visit, I want the app to tell me when a required sensor reading is missing or stale so that I can quickly resolve it and keep the visit compliant."
Description

Continuously track last-seen timestamps for each expected patient sensor (vitals, activity, medication dispenser) and compare against configurable freshness thresholds tied to the patient’s care plan and scheduled visit window. Classify each sensor-task as expected, received, stale, or missing, and update state in near real time by subscribing to the IoT ingestion stream. Support per-device, per-patient overrides, and suppress checks when devices are intentionally paused or not assigned for the visit. Expose detection state to the mobile app and API, and persist an auditable event log for compliance reporting.

Acceptance Criteria
Real-Time Staleness Classification During Visit Window
Given a patient has a scheduled visit window and a care plan listing expected sensors with freshness thresholds And the system is subscribed to the IoT ingestion stream When a new reading arrives for an expected sensor Then the sensor-task state becomes "received" within 5 seconds and last_seen_at is updated to the event timestamp Given an expected sensor has not produced a reading within its freshness threshold during the visit window When the threshold elapses Then the sensor-task state becomes "stale" no later than 60 seconds after the threshold time Given the visit window ends and no reading was received for an expected sensor When classification runs at window close Then the sensor-task state becomes "missing" within 60 seconds Given classification is active during the visit window When time advances Then staleness checks run at least every 60 seconds per sensor
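The state machine above (expected → received / stale / missing, plus not_applicable for suppressed sensors) can be sketched as a pure classification function over epoch-millisecond inputs. Illustrative only; parameter names and the "anchor at window start when never seen" rule are assumptions:

```python
def classify(last_seen_ms, threshold_ms, now_ms,
             window_start_ms, window_end_ms, expected=True) -> str:
    """Classify one sensor-task. Times are epoch milliseconds; last_seen_ms
    is None when no reading has ever arrived for this sensor."""
    if not expected:
        return "not_applicable"        # paused or not assigned for this visit
    if last_seen_ms is not None and now_ms - last_seen_ms <= threshold_ms:
        return "received"              # fresh reading within threshold
    if now_ms >= window_end_ms:
        return "missing"               # window closed with no fresh reading
    # Anchor the staleness clock at the last reading, or window start if none.
    anchor = last_seen_ms if last_seen_ms is not None else window_start_ms
    if now_ms - anchor > threshold_ms:
        return "stale"
    return "expected"                  # still awaiting a reading, in budget
```

Running this at least every 60 seconds per sensor, and on every ingestion event, satisfies the latency bounds stated in the criterion.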
Per-Device Overrides and Visit-Based Suppression
Given a sensor is marked as paused for the patient or not assigned for the current visit When classification runs Then the sensor is treated as not expected and no stale/missing state or nudges are produced And API/mobile reflect expected=false and state="not_applicable" Given a per-device or per-patient threshold override exists for a sensor When classification runs Then the override value supersedes the default care plan threshold for that patient/device Given suppression is lifted or assignment is added mid-visit When classification runs Then expected=true and state is recalculated within 60 seconds using the current last_seen_at
Auditable Event Log of Detection State Changes
Given any sensor-task detection state changes or is suppressed/unsuppressed When the change is committed Then an immutable audit event is appended containing: tenant_id, patient_id, visit_id, device_id, sensor_type, previous_state, new_state, reason_code, threshold_ms, last_seen_at, occurred_at, processed_at, source (stream|backfill|override|suppression), actor_id (nullable), correlation_id Given audit events exist for a visit When queried via the audit API with patient_id and visit_id Then results are returned in chronological order by occurred_at with pagination and are filterable by device_id and state Given an audit event has been written When an attempt is made to update or delete it Then the operation is rejected and a new correcting event must be appended instead
API Exposure of Current Detection State
Given a client requests GET /patients/{patient_id}/visits/{visit_id}/sensor-detection-state When the visit is active or completed Then the response includes for each relevant sensor: device_id, sensor_type, expected (boolean), state (expected|received|stale|missing|not_applicable), last_seen_at (ISO8601), threshold_ms, paused (boolean), reason_if_suppressed (nullable) And the response is consistent with the latest classification with a maximum freshness lag of 5 seconds And p95 latency is ≤ 300 ms for up to 20 sensors at 100 requests/minute And ETag/If-None-Match is supported to return 304 when unchanged
Mobile App Receives and Displays Detection State Updates
Given a caregiver is viewing the patient’s active visit in the mobile app while online When a sensor-task state changes due to a new reading, threshold elapse, or suppression toggle Then the UI reflects the new state within 5 seconds without manual refresh and shows distinct indicators for received, stale, missing, and not applicable And an action sheet allows quick actions: re-pair, capture reading, or log reason when state is stale or missing Given the device is offline When the caregiver opens the visit Then the last known detection state is shown with an offline banner, and upon reconnect the state syncs within 10 seconds
Configurable Freshness Thresholds Per Care Plan and Visit
Given a care plan defines freshness thresholds per sensor type in minutes When a threshold is saved Then validation enforces allowed range 1–1440 minutes and units are stored in milliseconds internally And changes propagate to the classifier within 60 seconds and are used for subsequent calculations Given per-device, per-patient, or per-visit overrides are configured When classification runs Then the most specific override (visit > device/patient > care plan default) is applied Given visit windows may overlap time zone or DST changes When expected calculation runs Then patient’s configured time zone is used and expected periods are computed correctly across transitions
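The precedence and validation rules above (visit > device/patient > care plan default; thresholds entered in minutes within 1–1440, stored in milliseconds) can be sketched directly. One assumption in the sketch: device and patient overrides share a tier, with device winning ties, since the criterion does not order them:

```python
def effective_threshold(care_plan_default_ms, patient_override_ms=None,
                        device_override_ms=None, visit_override_ms=None):
    """Most-specific override wins: visit > device/patient > care plan default.
    (Assumption: device outranks patient within the shared middle tier.)"""
    for value in (visit_override_ms, device_override_ms, patient_override_ms):
        if value is not None:
            return value
    return care_plan_default_ms

def store_threshold(minutes: int) -> int:
    """Validate the allowed range and convert to the internal ms unit."""
    if not 1 <= minutes <= 1440:
        raise ValueError("threshold must be between 1 and 1440 minutes")
    return minutes * 60_000
```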
Stream Subscription and Backfill Reliability
Given the system is subscribed to the IoT ingestion stream for the patient’s device_ids When duplicate, late, or out-of-order readings arrive Then classification uses the reading with the greatest reading_timestamp per device and ignores older duplicates using idempotency keys or event_id Given the ingestion stream is unavailable When outage is detected Then the system marks ingestion status degraded and initiates backfill/polling within 2 minutes And upon recovery it replays missed events for at least the last 24 hours and reconciles states without emitting duplicate audit events Given backfill or replay completes When classification runs Then the final state matches what would have occurred with uninterrupted streaming
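The duplicate/out-of-order handling above ("greatest reading_timestamp per device wins, duplicates ignored by event_id") can be sketched as a fold over incoming readings. Illustrative state shape; field names are assumptions:

```python
def reduce_reading(state: dict, reading: dict) -> dict:
    """Fold an incoming reading into per-device state: keep the reading with
    the greatest reading_timestamp per device; drop duplicate event_ids."""
    dev = reading["device_id"]
    current = state.get(dev)
    if current and reading["event_id"] in current["seen"]:
        return state  # duplicate delivery of an already-processed event
    if current is None or reading["reading_timestamp"] > current["reading_timestamp"]:
        seen = (current["seen"] if current else set()) | {reading["event_id"]}
        state[dev] = {"reading_timestamp": reading["reading_timestamp"],
                      "value": reading["value"], "seen": seen}
    else:
        current["seen"].add(reading["event_id"])  # older reading: note it, keep newest
    return state
```

Because the fold is idempotent and order-insensitive, replaying a 24-hour backfill after an outage converges to the same final state as uninterrupted streaming, which is exactly what the last criterion demands.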
Contextual Caregiver Nudges
"As a caregiver, I want clear, minimal-tap prompts that tell me exactly what to do next when a sensor reading is missing so that I can fix it without breaking my workflow."
Description

Trigger real-time, non-intrusive in-app prompts when a required reading is stale or missing, offering single-tap options: Re-pair device, Capture reading now, Log reason, or Snooze. Tailor prompt text and actions by device type and associated care task, ensure accessibility and localization, and throttle to avoid notification fatigue. Respect active workflows (e.g., do not interrupt dictation), and provide a persistent notification center entry for later action. Capture analytics on prompt outcomes to inform product improvements.

Acceptance Criteria
Real-time Nudge for Stale Vitals Reading
Given a caregiver is viewing a patient visit with a required vitals reading and the latest reading age >= the configured staleness threshold for that task When Sensor Aware flags the reading as stale or missing Then an in-app nudge banner appears within 2 seconds with four single-tap actions: Re-pair device, Capture reading now, Log reason, Snooze And the banner displays the device type and task name And tapping Capture reading now opens the capture workflow in <= 2 seconds And tapping Re-pair device opens the pairing flow in <= 2 seconds And tapping Log reason opens the reason list and requires selection before saving And tapping Snooze suppresses nudges for this task/device for 10 minutes and records the snooze timestamp
Device- and Task-Tailored Prompt Content
Given the device type and associated care task are known for the missing/stale reading When the nudge is displayed Then the prompt text references the device and task (e.g., "Reconnect pulse oximeter for Morning Vitals") And the set of actions matches the device capabilities: hide Capture reading now if capture is not supported; hide Re-pair device if no pairing exists; always show Log reason and Snooze And the primary action button label uses task-appropriate language (e.g., "Dispense dose now" for medication dispenser, "Capture BP now" for blood pressure) And the prompt includes an info link that opens task details in <= 2 seconds And content mappings exist and are QA-verified for all supported device types: blood pressure cuff, glucose meter, pulse oximeter, scale, activity tracker, medication dispenser
Non-Interruptive During Active Dictation
Given the caregiver is actively recording audio or dictating When a nudge would otherwise display Then do not show a modal or play sounds; queue the nudge as a silent banner And the recording is not paused or stopped And a subtle badge appears in the header within 1 second to indicate a pending nudge And once recording ends, the nudge banner displays within 1 second And analytics record that the nudge was deferred due to dictation
Nudge Throttling and Consolidation
Given multiple readings are stale or missing during a visit When evaluating nudges Then no more than 1 nudge per device is shown in any 10-minute window And no more than 3 nudges total are shown per patient per visit And multiple pending readings are consolidated into a single banner with stacked actions when triggered within a 60-second window And selecting Snooze on a consolidated banner snoozes all included tasks for 10 minutes And logging a reason for a task prevents further nudges for that task for the remainder of the visit
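The throttling rules above (at most one nudge per device per 10-minute window, at most three nudges per patient per visit) can be sketched as a small stateful gate. Illustrative only; a real implementation would also persist this state across app restarts within the visit:

```python
class NudgeThrottle:
    """Gate nudges: max 1 per device per 10-minute window,
    max 3 total per patient per visit."""

    WINDOW_MS = 10 * 60 * 1000
    MAX_PER_VISIT = 3

    def __init__(self):
        self.last_by_device = {}
        self.total = 0

    def allow(self, device_id: str, now_ms: int) -> bool:
        if self.total >= self.MAX_PER_VISIT:
            return False  # visit-wide cap reached
        last = self.last_by_device.get(device_id)
        if last is not None and now_ms - last < self.WINDOW_MS:
            return False  # same device nudged within the window
        self.last_by_device[device_id] = now_ms
        self.total += 1
        return True
```

Consolidation (one banner for several readings within 60 seconds) would sit in front of this gate, so a consolidated banner consumes only one slot against the per-visit cap.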
Accessible and Localized Prompt
Given the caregiver’s device accessibility and locale settings When the nudge banner displays Then it meets WCAG 2.1 AA contrast ratios and touch targets >= 44x44 dp And supports Dynamic Type up to 200% without truncating critical text or hiding actions And is fully screen-reader accessible with focus order: title -> message -> actions -> close And respects OS Reduce Motion by disabling banner slide animations And is localized for English and Spanish with correct pluralization and date/time formats And properly mirrors layout for right-to-left locales
Persistent Notification Center Entry
Given a nudge has been triggered and not resolved (no capture, no valid reason logged) When the user opens the in-app Notification Center Then a single entry exists per task/device with timestamp and status "Action Needed" And the entry persists across app restarts and offline mode And tapping an action from the entry performs the same flows as the banner And resolving the underlying task removes the entry within 5 seconds And duplicate nudges for the same task/device update the existing entry rather than creating a new one
Analytics on Prompt Outcomes
Given analytics collection is enabled When a nudge lifecycle progresses Then the following events are captured with timestamps and identifiers: prompt_shown, option_selected, action_started, action_completed, prompt_snoozed, prompt_deferred_due_to_activity And each event includes device_type, task_id, visit_id, caregiver_id (pseudonymized), online_status, and latency metrics And 95% or more of events generated on-device are successfully transmitted within 15 minutes when online, or within 24 hours after reconnecting And no free-text note content is included in analytics payloads And analytics can differentiate consolidated vs single-task nudges via an is_consolidated flag
Guided Re‑Pair Wizard
"As a caregiver, I want an easy guided flow to re-pair a patient’s sensor so that I can restore readings quickly without calling support."
Description

Provide a step-by-step pairing flow for supported Bluetooth and Wi‑Fi sensors that auto-detects known devices assigned to the patient, verifies identity via QR code/serial, handles OS permissions, and confirms stable connectivity. Implement retries with exponential backoff, vendor-specific pairing plugins, and a manual fallback when auto-discovery fails. Cache secure pairing tokens bound to patient and device, and log outcomes (success, timeout, failure reason) to the audit trail for defensibility and support.

Acceptance Criteria
Auto‑Detect Assigned Sensors
Given Patient P has devices assigned in CarePulse (e.g., D1 Bluetooth, D2 Wi‑Fi) When the caregiver opens the Guided Re‑Pair Wizard during an active visit Then the wizard performs concurrent Bluetooth LE and Wi‑Fi discovery using vendor plugins where available for up to 15 seconds And only discoverable devices assigned to Patient P are listed (Bluetooth within 5 meters; Wi‑Fi on same SSID/subnet) And each listed device shows model, modality (BT/Wi‑Fi), last 4 of serial/QR, and signal strength indicator And unassigned devices are not shown And if none are found, the wizard displays "No assigned devices found" with options: "Try Again" and "Use Manual Fallback"
Device Identity Verification via QR/Serial
Given a device is selected from the discovery list When the caregiver scans the device QR code using the in‑app scanner or enters the serial number manually Then the app validates the code against the selected device record and Patient P’s assignment And on mismatch, pairing is blocked and an error "Device does not match patient assignment" is shown with options to Rescan or Select Another Device And after 3 failed scan attempts, manual serial entry is enabled with format validation (alphanumeric, 6+ chars) And on successful validation, the device is marked verified and the flow advances to permissions
OS Permission Handling for Pairing
Given the wizard requires Bluetooth, Location/Nearby Devices (per OS), Wi‑Fi, and Camera access When any required permission is missing or denied Then the app shows system prompts with clear rationale and cannot proceed to pairing until required permissions are granted or the user selects Manual Fallback And if a permission is permanently denied, the app deep‑links to Settings and detects return status And the final permission states (allowed/denied) are captured for audit
Connectivity Stability Verification and Sample Reading
Given device identity is verified and required permissions are granted When the app initiates pairing/association using the appropriate vendor plugin/protocol Then a connection is established within 30 seconds or a specific failure is surfaced And the connection remains stable for at least 20 consecutive seconds without disconnect events And at least one valid data packet/reading is received and passes checksum/format validation within the stability window And the UI shows "Connected" with timestamp and device status upon success
Retries with Exponential Backoff and Fallback Trigger
Given a pairing attempt fails due to timeout or transient error When the wizard retries automatically Then it performs up to 4 retries with exponential backoff delays of 1s, 2s, 4s, and 8s And the UI displays progress and remaining attempts, with a Cancel control to stop retries And if all retries fail or Cancel is pressed, the wizard presents the Manual Fallback, preserving the last failure reason for audit
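The retry schedule above (4 retries at 1s, 2s, 4s, 8s, then Manual Fallback) can be sketched as follows. The `attempt_pair` callback and return shape are assumptions for illustration, not the shipped API:

```python
import time

def backoff_delays(max_retries=4, base=1.0):
    # Delays between retries: 1s, 2s, 4s, 8s for the default 4 retries
    return [base * (2 ** i) for i in range(max_retries)]

def pair_with_retries(attempt_pair, max_retries=4, sleep=time.sleep):
    """Run one pairing attempt, then up to max_retries retries with
    exponential backoff. attempt_pair() returns (ok, failure_reason).
    Returns (ok, last_failure_reason, attempts_used)."""
    ok, reason = attempt_pair()
    attempts = 1
    for delay in backoff_delays(max_retries):
        if ok:
            break
        sleep(delay)  # back off before the next attempt
        ok, reason = attempt_pair()
        attempts += 1
    # On failure the caller surfaces the Manual Fallback and preserves
    # the last failure reason for the audit trail
    return ok, (None if ok else reason), attempts
```

Injecting `sleep` makes the schedule testable and lets the UI cancel between attempts.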
Secure Token Caching Bound to Patient and Device
Given pairing succeeds and a credential/token is issued by the device/vendor SDK When storing credentials Then the token is encrypted at rest and bound to patientId+deviceId, and is reused for subsequent sessions for the same pair without re‑verification And the token is never reused across different patients And the token is invalidated on device unassignment, patient discharge, or 30 days of inactivity, requiring re‑verification
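A minimal in-memory sketch of the patient+device-bound token cache described above. A production version would encrypt tokens at rest via the OS keystore and also hook unassignment/discharge events; the class and method names here are assumptions:

```python
from datetime import datetime, timedelta, timezone

class PairingTokenCache:
    """Tokens keyed by (patient_id, device_id) so they can never be
    reused across patients; entries expire after 30 days of inactivity."""
    TTL = timedelta(days=30)

    def __init__(self, now=lambda: datetime.now(timezone.utc)):
        self._now = now
        self._store = {}  # (patient_id, device_id) -> (token, last_used)

    def put(self, patient_id, device_id, token):
        self._store[(patient_id, device_id)] = (token, self._now())

    def get(self, patient_id, device_id):
        entry = self._store.get((patient_id, device_id))
        if entry is None:
            return None  # unknown pair: full verification required
        token, last_used = entry
        if self._now() - last_used > self.TTL:
            del self._store[(patient_id, device_id)]  # stale: force re-verification
            return None
        self._store[(patient_id, device_id)] = (token, self._now())  # refresh activity
        return token

    def invalidate(self, patient_id, device_id):
        # Called on device unassignment or patient discharge
        self._store.pop((patient_id, device_id), None)
```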
Audit Trail Logging of Pairing Outcomes
Given any pairing session ends (success, timeout, or failure) When the session completes Then an immutable audit event is recorded with: timestamp (UTC ISO‑8601), caregiverId, patientId, deviceId, outcome (success|timeout|failure), failureReasonCode, retryCount, durationMs, and permission states And the event appears in the audit trail UI within 5 seconds and is included in the standard export report
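The audit event fields listed above can be assembled as a simple record builder. The JSON key casing and the validation behavior are assumptions; only the field set comes from the criterion:

```python
from datetime import datetime, timezone

VALID_OUTCOMES = {"success", "timeout", "failure"}

def pairing_audit_event(caregiver_id, patient_id, device_id, outcome,
                        failure_reason_code=None, retry_count=0,
                        duration_ms=0, permission_states=None):
    """Build the immutable pairing audit record; rejects unknown outcomes."""
    if outcome not in VALID_OUTCOMES:
        raise ValueError(f"unknown outcome: {outcome}")
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # UTC ISO-8601
        "caregiverId": caregiver_id,
        "patientId": patient_id,
        "deviceId": device_id,
        "outcome": outcome,
        "failureReasonCode": failure_reason_code,
        "retryCount": retry_count,
        "durationMs": duration_ms,
        "permissionStates": permission_states or {},
    }
```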
Structured Exception Logging
"As a caregiver, I want to log an acceptable reason when I can’t capture a reading so that my visit remains compliant and defensible."
Description

Enable caregivers to record standardized reasons when a reading cannot be captured, using agency-configurable reason codes (e.g., patient refused, device lost, battery dead, not clinically indicated). Enforce required evidence by code (voice note, photo, free text), auto-stamp entries with time, visit, user, and device, and validate that a logged exception fulfills the care-plan requirement for compliance. Store exceptions for inclusion in audit-ready reports and for export via API.

Acceptance Criteria
Reason Code Selection and Exception Submission
Given an active visit with a missing expected reading and active agency reason codes exist When the caregiver selects "Log Exception" from the Sensor Aware prompt or the Readings screen Then the app displays only active agency-configured reason codes in the agency-defined order And when the caregiver selects a reason code and required evidence (per code) is satisfied Then the caregiver can submit the exception without error And the system creates one exception record linked to the visit, patient, reading type, and reason code And the reading requirement displays status "Exception logged" with the selected reason short label
Evidence Enforcement by Reason Code
Given a reason code requires specific evidence types (e.g., voice note and free text) When the caregiver attempts to submit without providing all required evidence types Then the submit action is blocked and inline validation messages identify each missing evidence type And when the caregiver provides all required evidence Then the submit action becomes enabled And upon submission, the evidence items are stored and retrievable with the exception record, each labeled by type
Automatic Metadata Stamping on Exceptions
Given a caregiver submits an exception Then the exception record is auto-stamped with: visit_id, patient_id, caregiver_user_id, reading_type, reason_code_id, reason_code_label, created_at (ISO 8601 with timezone), mobile_device_id, and iot_device_id when known And the created_at reflects the actual submission time as recorded on the device and normalized to UTC with timezone offset retained And the record includes a unique exception_id for cross-referencing in reports and API
Compliance Validation Against Care Plan
Given the care plan defines whether exceptions are allowed for a reading type and which reason codes qualify When an exception is submitted for the reading type Then the system validates the reason code and presence of required evidence against the care plan rules And if permitted, the reading requirement is marked "Fulfilled by Exception" and included as compliant in visit metrics And if not permitted, the exception is saved as "Non-compliant" and the reading requirement remains unfulfilled, with a specific message shown to the caregiver And only one compliant fulfillment (measurement or exception) is allowed per requirement per visit; subsequent submissions are blocked with an explanation
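The care-plan validation above can be sketched as a pure function. The rule dictionary shape (`exceptions_allowed`, `allowed_reason_codes`, `required_evidence`) is an assumed representation of the care-plan configuration:

```python
def evaluate_exception(care_plan_rule, reason_code_id, evidence_types):
    """Decide whether a submitted exception fulfills the reading
    requirement: exceptions must be allowed, the reason code must
    qualify, and all required evidence types must be present."""
    if not care_plan_rule.get("exceptions_allowed", False):
        return "non-compliant"
    if reason_code_id not in care_plan_rule.get("allowed_reason_codes", set()):
        return "non-compliant"
    required = care_plan_rule.get("required_evidence", {}).get(reason_code_id, set())
    if not set(required) <= set(evidence_types):
        return "non-compliant"  # missing evidence types block compliance
    return "compliant"
```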
Inclusion in Audit-Ready Reports
Given an authorized user generates an audit-ready report for a date range and filters (e.g., patient, caregiver, agency) When the report includes visits with logged exceptions Then each exception appears with: visit date/time, patient, caregiver, reading type, reason code label, created_at, compliance outcome (compliant/non-compliant), and indicators/links for attached evidence And the report totals reflect exceptions that fulfilled requirements toward compliance metrics
API Export of Exception Records
Given an authenticated API client requests exception records via the Exceptions export endpoint with filters When the client supplies filters (date range, patient_id, visit_id, reading_type, reason_code_id, compliance outcome) Then the API responds 200 with a JSON payload containing exception_id, visit_id, patient_id, caregiver_user_id, reading_type, reason_code_id, reason_code_label, created_at, evidence metadata (type, size), and secure evidence URLs/tokens And the endpoint supports pagination via limit and cursor/next_token and returns next_token when more records remain And evidence binaries are not inlined; access is provided via time-limited URLs/tokens And unauthorized or invalid requests return 401/400 with error details
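A client-side sketch of walking the cursor-paginated export described above. `get_page` stands in for an authenticated HTTP GET returning the parsed JSON body; the `limit`/`cursor`/`next_token` parameter names follow the criterion:

```python
def fetch_all_exceptions(get_page, filters, limit=100):
    """Yield every exception record across pages until no next_token
    is returned. get_page(params) -> dict with 'records' and an
    optional 'next_token'."""
    params = dict(filters, limit=limit)
    while True:
        page = get_page(dict(params))
        yield from page["records"]
        token = page.get("next_token")
        if not token:
            return  # last page reached
        params["cursor"] = token  # resume from the server-issued cursor
```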
Nudge-to-Exception Flow for Missing/Stale Readings
Given a missing or stale reading is detected for an in-progress visit When the caregiver taps the Sensor Aware nudge and chooses "Log Exception" Then the exception form is pre-populated with the current visit and the specific reading type (and iot_device_id if known) And after successful submission, the nudge is cleared and the reading requirement reflects "Fulfilled by Exception" if compliant And if non-compliant, the nudge remains or is replaced with guidance to capture the reading or select an allowed reason
Auto-Populated Clinical Notes & Audit Links
"As a caregiver, I want the app to auto-fill my notes from sensor data or my logged reason so that I spend less time documenting and avoid errors."
Description

When readings are captured or exceptions logged, automatically populate visit notes with structured data (values, units, device ID, method, exception code/details) and link entries to the underlying sensor records and pairing events. Lock fields upon submission while maintaining edit history and signatures for traceability. Accept optional voice clip transcription as supplemental context. Reduce duplicate data entry and ensure notes are clinically complete and aligned to compliance requirements.

Acceptance Criteria
Auto-Populate Structured Note from Sensor Reading
Given a caregiver is on an active visit with a paired sensor When a valid sensor reading for a required metric is received by CarePulse Then within 5 seconds a new structured note entry is added containing measurement type, value, units, device ID, method="sensor", and reading timestamp And the entry is tagged "Auto-populated" and attributed to the capturing user/device And no manual data entry is required to add these fields
Auto-Populate Exception for Missing or Stale Reading
Given a required metric lacks a current reading per the care plan time window When the caregiver logs an exception by selecting an exception code and optional details Then a structured exception note entry is added containing exception code, details, targeted metric, intended time window, method="exception", and timestamp And the entry links to the expected schedule item ID for that metric And the system records the pairing status at the time of exception
Linked Sensor Record and Pairing Event Deep Links
Given a note entry was auto-populated from a sensor reading When the user taps the "Sensor Record" link Then the app opens the associated sensor record within 2 seconds showing the raw reading ID and immutable metadata And when the user taps the "Pairing Event" link Then the most recent pairing event record opens showing device ID, timestamp, and pairing user And both links persist as stable IDs/URLs in exported audit views
Submission Locking, Versioned Edit History, and Signatures
Given a caregiver completes and signs the visit note When the note is submitted Then all auto-populated and exception fields become read-only And a versioned snapshot is created capturing field values, timestamps, submitting user, and signature And any subsequent edit creates a new version, logs before/after values, requires re-authentication, and invalidates prior signature until re-signed And all versions remain accessible in the audit history
Voice Clip Transcription as Supplemental Context
Given a caregiver records an optional voice clip tied to a metric or note entry When transcription completes Then the transcript is appended to the targeted note entry labeled "Transcript" with timestamp and caregiver attribution And a link to the original audio is available And the transcript does not modify structured fields and is excluded from completeness checks And for clips ≤ 60 seconds, transcription completes within 60 seconds or a non-blocking retry option is shown
Duplicate Prevention and Clinical Completeness Enforcement
Given required metrics for the visit are defined in the care plan When multiple readings or exceptions for the same metric and time window exist Then the note merges duplicates into a single visible entry keeping the most recent value while retaining prior entries in history And the note shows a completeness indicator that turns "Compliant" only when each required metric has a valid reading (method="sensor" or verified manual) or a logged exception And attempting submission while incomplete blocks submission and lists missing items with one-tap actions to capture a reading or log an exception And the final compliance status is stored with the submitted note and exposed via the reporting API
Escalation & Reporting of Persistent Gaps
"As an operations manager, I want to be alerted when sensor gaps persist so that I can intervene and protect care quality and compliance."
Description

Identify repeated or prolonged sensor data gaps across visits or monitoring windows and escalate based on configurable rules (thresholds, durations, patient cohorts). Notify operations managers via push/email and surface issues on the Compliance dashboard with trend charts and root-cause indicators (e.g., frequent pairing failures). Provide drill-down from patient to visit-level events and include these gaps and resolutions in one-click, audit-ready reports and exports.

Acceptance Criteria
Persistent Sensor Gap Detection Across Visits
Given a rule with threshold=2 gaps within 7 days OR any single gap >60 minutes and cohort=CHF When sensor readings are missing relative to the patient’s care plan schedule Then the system creates a Persistent Gap event with: patient_id, cohort, rule_id, window_start, window_end, gap_count, max_gap_duration, sensors_affected, first_seen_at, last_seen_at And the event is generated no more than once per rolling 24 hours per patient per rule unless severity increases And cross-visit gaps are aggregated within the rule window And patient time zone is applied; all time calculations within ±1 minute tolerance And the event status initializes to "Open"
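The example rule above (2 gaps within the window OR any single gap over 60 minutes) reduces to a small predicate. Filtering events to the rolling window and applying the patient's time zone are assumed to happen upstream:

```python
from datetime import datetime, timedelta

def is_persistent_gap(gap_events, gap_threshold=2, max_single_gap_min=60):
    """Evaluate the rule against gaps already scoped to the window.
    gap_events: list of (start, end) datetime pairs."""
    if len(gap_events) >= gap_threshold:
        return True  # repeated gaps within the rolling window
    limit = timedelta(minutes=max_single_gap_min)
    return any(end - start > limit for start, end in gap_events)
```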
Escalation Notifications and Dedupe
Given an Open Persistent Gap event with severity=High When the event is created Then send a push notification to the assigned Operations Manager within 60 seconds and an email within 2 minutes with subject "[CarePulse] High Gap for {patient_name}" And include a deep link to the patient gap detail, rule summary, and last 3 pairing attempts And if the event remains Open after 2 hours, notify Tier 2; after 24 hours, notify Administrator And deduplicate notifications so no more than 1 push and 1 email per event per recipient in a 4-hour cool-down unless severity increases or status changes And respect quiet hours 22:00–06:00 local by deferring non-critical notifications and marking them "Deferred"
Compliance Dashboard Trends and Root-Cause Indicators
Given at least one Persistent Gap event exists in the selected date range When the Operations Manager opens the Compliance dashboard Then display a Sensor Gaps card with: patients_affected, open_events, avg_time_to_resolution, and 7/30-day trend sparkline And show a sortable table with columns: patient, cohort, open_gaps, avg_gap_duration, last_gap_at, root_cause And derive root_cause as: "Pairing Failure" if ≥3 failed pairing attempts in last 7 days; "Low Battery" if sensor battery <15% during any gap; "No Dispenser Use" if no dispense events ≥24h; else "Unknown" And clicking a patient row highlights their trend in the chart And dashboard metrics refresh within 5 seconds of data update
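The root-cause derivation above can be expressed as an ordered classifier. The precedence when several conditions hold at once is an assumption (the criterion lists them in this order):

```python
def derive_root_cause(failed_pairings_7d, min_battery_pct, hours_since_dispense):
    """Map gap telemetry to the dashboard's root_cause label."""
    if failed_pairings_7d >= 3:
        return "Pairing Failure"   # >=3 failed pairing attempts in last 7 days
    if min_battery_pct is not None and min_battery_pct < 15:
        return "Low Battery"       # battery <15% during any gap
    if hours_since_dispense is not None and hours_since_dispense >= 24:
        return "No Dispenser Use"  # no dispense events for >=24h
    return "Unknown"
```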
Patient-to-Visit Drill-Down Navigation
Given a patient row on the Compliance dashboard When the user selects the patient Then navigate to patient detail showing a timeline of gaps grouped by visit and sensor with filters for date range, sensor type, and severity And selecting a gap opens the visit-level event with: expected_read_time, last_successful_read_at, gap_started_at, gap_ended_at (if resolved), rule_applied, caregiver_notes, resolution_code, actions_log And back navigation returns to the same filtered and sorted state on the dashboard And each page loads in <2 seconds on 4G with backend p95 ≤200 ms
Audit-Ready Reports and CSV Exports
Given a date range and facility/cohort filters When the user clicks One-Click Audit Report Then generate a PDF containing: executive summary, counts by severity, list of open/closed events with patient identifiers, visit dates, durations, rule references, root_cause, resolution codes, and timestamps (created, acknowledged, resolved) And include an appendix with event audit trail entries (who, when, what) and current rule configurations And generate a CSV export with one row per event, header row, delimiter=",", ISO 8601 timestamps with offset, columns per data dictionary v1.2 And complete report generation within 30 seconds for up to 5,000 events; if >15 seconds, show progress and send download link via email And store generated artifacts immutably for 365 days with access-controlled retrieval
Rule Configuration by Cohort with Audit Log
Given the Rules settings page When an admin creates or edits an escalation rule Then allow scoping by cohort (e.g., CHF, COPD, Post-op), sensor type(s), thresholds (count), durations (minutes), window (hours/days), severity tiers, notification tiers, and quiet hours And validate to prevent conflicting rules for the same cohort+sensor+window; show inline conflict messages with guidance And require reason-for-change; apply changes prospectively only; do not alter existing events retroactively And write each change to an immutable audit log: user_id, timestamp, old_value, new_value, reason, ip, correlation_id And provide a Test Rule function that, given a patient and date range, returns trigger decision and sample metrics in <5 seconds
Offline Capture & Deferred Sync
"As a caregiver working in low-connectivity areas, I want the app to handle sensor actions offline so that I can complete visits without delays."
Description

Allow caregivers to pair devices, capture readings, and log exceptions while offline by securely queuing actions and data locally. Record device and app timestamps, reconcile upon reconnect using server time, and deduplicate/resolve conflicts. Provide clear sync status and error recovery in the app, and re-run compliance checks after sync to update visit state and downstream reports without requiring user re-entry.

Acceptance Criteria
Offline Device Pairing and Reading Capture
- Given the device has no internet connectivity, When a caregiver pairs a supported sensor via Bluetooth, Then the app confirms pairing succeeds offline and records device_id, firmware_version, and pairing_timestamp in the local queue.
- Given offline mode, When a caregiver captures a vitals/medication/activity reading, Then the reading is saved to the local queue with patient_id, visit_id, measurement_type, value, units, device_timestamp (if available), app_local_timestamp, and origin=sensor.
- Given offline mode and a missing sensor reading, When the caregiver logs an exception reason, Then the exception entry is saved to the local queue with reason_code, optional free_text, and origin=manual.
- Given the app is force-closed or the device is rebooted while offline, When reopened, Then all previously queued pairings/readings/exceptions remain intact and visible in the visit timeline.
Local Secure Queue Persistence and Limits
- Given the app is installed on a supported device, When offline items are queued, Then they are stored encrypted at rest using device secure storage and are inaccessible to other apps/processes.
- Given sustained offline operation, When up to 500 items per user are queued, Then the app continues to accept new items without data loss and displays remaining capacity when <10% remains.
- Given storage pressure, When the queue nears capacity, Then the app warns the user and provides guidance to connect and sync or clear space, without blocking further care documentation until full.
- Given the user signs out while offline, When sign-out is confirmed, Then queued items remain bound to that user account and cannot be accessed by other users on the same device.
Deferred Sync Trigger and Ordering on Reconnect
- Given there are queued items, When network connectivity becomes available or the app returns to foreground with connectivity, Then sync begins within 10 seconds.
- Given multiple queued items across visits, When syncing, Then items are transmitted in FIFO order per visit and per device, while allowing parallel upload across different visits.
- Given transient network errors, When sync fails, Then the app retries automatically with exponential backoff up to 5 attempts and preserves ordering; a manual Retry Now triggers an immediate attempt.
- Given sync completes, When all items are acknowledged by the server, Then their local state transitions to Synced and they are removed from the queue.
Timestamp Recording and Server-Time Reconciliation
- Given any queued item, When created offline, Then the app stores device_timestamp (if provided), app_local_timestamp, and timezone_offset.
- Given first successful reconnect, When the app contacts the server, Then it records server_time_offset and applies it to derive a canonical_event_time for each item.
- Given device clock skew > 2 minutes detected, When reconciling, Then the item is flagged with time_skew=true and appears with a warning tag in audit views but remains usable for compliance.
- Given canonical times are computed, When events are posted, Then the server persists canonical_event_time and uses it for ordering and reporting.
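The reconciliation step can be sketched in a few lines. How `server_time_offset` is measured (one round trip at first reconnect) is an assumption; the 2-minute skew limit comes from the criterion:

```python
from datetime import datetime, timedelta, timezone

SKEW_LIMIT = timedelta(minutes=2)

def reconcile_event_time(app_local_timestamp, server_time_offset):
    """Derive canonical_event_time and the time_skew flag.
    server_time_offset = server_now - device_now at first reconnect."""
    canonical = app_local_timestamp + server_time_offset
    time_skew = abs(server_time_offset) > SKEW_LIMIT  # flagged, not rejected
    return canonical, time_skew
```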
Deduplication and Conflict Resolution Rules
- Given a new reading arrives during sync, When the server detects an existing reading with the same patient_id, device_id, measurement_type and canonical_event_time within ±5 seconds and equal value/units, Then the incoming item is treated as a duplicate and not double-counted; the local item is marked Duplicated.
- Given two items overlap but values differ, When conflict resolution runs, Then origin priority is applied: sensor > manual > edited; if same origin, the item with latest server_received_time wins; the other is marked Superseded with a link to the winner.
- Given a manual exception exists for a time window and a valid sensor reading later arrives for the same window, When reconciling, Then the exception is auto-closed with reason=ReplacedByReading and retained in the audit trail.
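The duplicate and conflict rules above can be sketched as two pure functions over the item fields named in the criteria (the dict representation is an assumption):

```python
from datetime import datetime, timedelta

# Origin priority from the criterion: sensor > manual > edited
ORIGIN_PRIORITY = {"sensor": 2, "manual": 1, "edited": 0}

def is_duplicate(new, existing, tolerance=timedelta(seconds=5)):
    """Duplicate: same patient/device/measurement, canonical times
    within +/-5 s, and identical value/units."""
    return (new["patient_id"] == existing["patient_id"]
            and new["device_id"] == existing["device_id"]
            and new["measurement_type"] == existing["measurement_type"]
            and abs(new["canonical_event_time"]
                    - existing["canonical_event_time"]) <= tolerance
            and new["value"] == existing["value"]
            and new["units"] == existing["units"])

def resolve_conflict(a, b):
    """Return the winner of two overlapping-but-different items; the
    caller marks the loser Superseded with a link to the winner."""
    pa, pb = ORIGIN_PRIORITY[a["origin"]], ORIGIN_PRIORITY[b["origin"]]
    if pa != pb:
        return a if pa > pb else b
    # Same origin: latest server_received_time wins
    return a if a["server_received_time"] >= b["server_received_time"] else b
```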
User-visible Sync Status and Error Recovery
- Given the user is offline, When viewing a visit, Then an Offline badge shows and per-item chips display statuses: Queued, Syncing, Synced, Error.
- Given an item fails to sync, When the user taps the Error status, Then a detail sheet shows the specific error code, human-readable cause, and recommended action (e.g., re-pair, capture again), with options Retry and Resolve Later.
- Given the user initiates Retry, When the underlying issue is resolved, Then the item transitions to Synced without duplicating data.
- Given any sync activity, When the user opens the Sync Center, Then a Last Synced timestamp (server time) and counts of Pending/Failed items are displayed and update in real time.
Post-Sync Compliance Re-evaluation and Report Updates
- Given a visit with queued items, When sync completes, Then the compliance engine re-runs within 30 seconds and updates visit status (Compliant, At Risk, Non-compliant) without requiring any re-entry by the caregiver.
- Given compliance status changes, When re-evaluation completes, Then downstream reports and dashboards reflect the new status within 1 minute, and an audit entry is recorded with before/after states and driver events.
- Given the caregiver is in the visit view, When compliance changes post-sync, Then the UI notifies the caregiver of the updated state and any remaining required actions, with direct links to complete them.

Hands‑Free Coach

Delivers 20–30 second audio micro‑lessons with optional call‑and‑repeat prompts. Caregivers can keep gloves on and eyes on the client while learning the exact phrasing or steps that satisfy payer checks and agency policy.

Requirements

Hands‑Free Voice Activation
"As a caregiver, I want to start micro-lessons with my voice so that I can keep gloves on and maintain focus on my client."
Description

Enable wake-word and tap-to-talk controls to start and control 20–30 second micro-lessons without touching the screen. Implement on-device hotword detection with noise suppression, configurable push-to-talk fallback, and clear audio cues (beep/haptic) for start/stop to support gloved workflows. Integrate with iOS/Android audio focus, microphone permissions, and Bluetooth headsets (including bone-conduction) to ensure reliable playback in clinical environments. Provide privacy safeguards: hotword processed on-device, no continuous audio streaming, and auto-timeout after inactivity. Expose settings for sensitivity, headset preference, and auto-resume after interruptions (calls, alarms).

Acceptance Criteria
Wake‑Word Activation in Noisy Clinical Room
Given hotword detection is enabled and microphone permission is granted, and ambient noise is between 65–75 dBA with noise suppression on When the caregiver says the configured wake‑word within 1 meter of the device without touching the screen Then the wake‑word is detected on‑device and a start cue is emitted within 300 ms And when the caregiver issues the mapped play command within 3 seconds of the start cue Then the requested 20–30 second micro‑lesson begins playback within 1 second And across 20 trials at 65–75 dBA, wake‑word detection success rate is ≥ 95% and false activations are ≤ 1 per hour
Push‑to‑Talk Fallback with Gloves
Given hotword is disabled or set to Low sensitivity When the caregiver long‑presses the configured push‑to‑talk control (hardware button or on‑screen) while wearing gloves Then a start cue is emitted within 200 ms and speech capture begins, without requiring a precise tap (on‑screen target ≥ 48×48 dp) And releasing the control or saying "cancel" stops listening within 200 ms and a stop cue is emitted And if microphone permission is not granted, the first press triggers the OS permission prompt; if permanently denied, a non‑blocking banner with "Open Settings" is shown and push‑to‑talk remains disabled And in 10 consecutive trials with gloves, all activations succeed without screen navigation
Start/Stop Audio and Haptic Cues
Given the device supports haptics and volume is not muted for haptics When listening starts via wake‑word or push‑to‑talk Then a single beep and short haptic occur within 300 ms And when playback begins, a distinct double‑beep is emitted; when listening or playback ends, a stop cue is emitted And cues respect system Do Not Disturb and volume settings (audio cues suppressed under DND; haptics still fire) And cue latency (event to cue) is ≤ 300 ms at the 95th percentile across 50 actions
Audio Focus, Interruptions, and Auto‑Resume
Given a micro‑lesson is playing and the OS issues a transient audio focus loss (phone call, alarm, navigation prompt) When focus is lost Then the lesson pauses or ducks per OS policy within 300 ms And if Auto‑Resume is enabled, when focus is regained, playback resumes within 1 second at the previous timestamp; if disabled, playback remains paused and the assistant announces "say resume to continue" And background music from other apps is ducked during playback and restored afterward And audio focus is released within 500 ms after lesson completion
Bluetooth and Bone‑Conduction Headset Support
Given a supported Bluetooth headset (including bone‑conduction) is connected and selected in Headset Preference When a micro‑lesson starts Then playback and microphone route to the headset within 1 second and start without errors And if the headset disconnects mid‑lesson, playback switches to device speaker within 1 second and a route‑change cue is emitted And wake‑word and push‑to‑talk via the headset mic achieve ≥ 95% detection success over 20 trials at 60–70 dBA And the user can switch route back to handset within 2 taps without restarting the app
On‑Device Processing and Privacy Safeguards
Given the app is idle with hotword enabled Then no audio is streamed off‑device; over a 30‑minute idle test, network inspection shows zero audio payloads transmitted And hotword inference runs on‑device and functions with network disabled (airplane mode) for at least 30 minutes And an on‑screen indicator displays when the microphone is actively listening And listening auto‑times out within 10 seconds of last detected speech; if no command is received, listening stops and a stop cue plays And playback stops after 30 seconds of inactivity (no further commands)
Configurable Settings and Persistence
Given a user changes Hotword Sensitivity (Low/Medium/High) Then the setting persists across app restarts And in a quiet room (≤ 45 dBA), maximum activation distance increases by ≥ 1 meter from Low to High And changing Headset Preference immediately routes subsequent listening/playback accordingly without app restart And toggling Auto‑Resume takes effect on the next interruption event And all settings are accessible within 2 taps under Hands‑Free Voice Activation
Contextual Coaching Triggers
"As a caregiver, I want the app to surface the right micro-lesson based on the visit and payer so that I don’t have to search while on-site."
Description

Surface the most relevant micro-lesson automatically based on visit context (payer, visit type, care plan tasks, diagnosis codes, and current workflow step) and caregiver location/arrival status. Define a rules engine that maps schedule/EMR metadata and payer policy tags to specific lessons, with configurable priorities and fallbacks. Support voice search (“play wound care phrasing”) when no rule matches. Provide an SDK hook so scheduling and routing can request a lesson at step transitions (e.g., check-in, vitals, ADLs). Log trigger source and selection rationale for transparency and tuning.

Acceptance Criteria
Auto-Trigger at Check-In Based on Visit Context
Given a scheduled visit has payer Medicare_A, visitType Skilled_Nursing, carePlanTasks include Wound_Care, diagnosisCodes include L97.909, workflowStep Check-In, and caregiver arrivalStatus Arrived, and a ruleset contains rule R1 mapping this context to lesson L1 with priority 10 When the caregiver initiates Check-In in the mobile app Then the rules engine evaluates the visit context and selects rule R1 And the system surfaces lesson L1 to the caregiver without manual navigation And audio playback starts within 1.5 seconds on a connection >= 5 Mbps and within 3.0 seconds on a connection <= 1 Mbps And the triggerSource is recorded as context_rule with ruleId R1 and a contextSnapshotId is present
Priority and Fallback Resolution Across Multiple Matching Rules
Given rules R1(priority 5), R2(priority 5), and R3(priority 10) all match the visit context When the rules engine resolves the selection Then exactly one lesson is selected And the selected rule is the one with highest priority (lowest number) And if priorities tie, the rule with greater specificity (more matching attributes) is selected And if specificity ties, the most recently updated rule is selected And if no rule matches, the configured fallback rule for the payer is selected; if none, the global default rule is selected And the selectionRationale includes evaluated ruleIds, priorities, specificity scores, tie-breaker applied, and chosen ruleId
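The resolution order above (lowest priority number, then higher specificity, then most recently updated, then fallbacks) maps cleanly onto a single sort key. The rule dict shape and a numeric `updated_at` are assumptions:

```python
def resolve_rule(matching_rules, payer_fallback=None, global_default=None):
    """Select exactly one rule from the matches, or fall back.
    Tuple key: priority ascending, specificity descending,
    updated_at descending."""
    if not matching_rules:
        return payer_fallback if payer_fallback is not None else global_default
    return min(matching_rules,
               key=lambda r: (r["priority"], -r["specificity"], -r["updated_at"]))
```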
Voice Search Fallback When No Rule Matches
Given no rule matches the current context and microphone permission is granted When the caregiver says Play wound care phrasing Then the system performs speech recognition and searches lessons And the top result has confidence >= 0.70; if < 0.70, the system asks a single clarifying question before selecting And the selected lesson begins playback within 2.0 seconds after recognition completes And the triggerSource is recorded as voice_search with captured transcript and top3 candidate lessonIds and scores And no touch interaction is required to complete the selection and start playback
SDK Hook Provides Lesson on Step Transition
Given the mobile app calls HandsFreeCoach.requestLesson with step Vitals and a valid visitContext When the SDK receives the call at a step transition Then the SDK returns a response within 500 ms containing either selectedLessonId or reason no_match And for identical inputs within 5 seconds, the same selectedLessonId is returned (idempotent) And the SDK emits a lessonRequested event with a correlationId that links to trigger logs And API version v1 is included in the response; unknown versions are rejected with error code unsupported_version
Location and Arrival Status Gating
Given the caregiver device is outside the visit geofence (>100 m) or arrivalStatus is not Arrived When a workflow step transition occurs Then no lesson is auto-played And the system may present an optional prompt to start voice search instead of auto-play And when the device enters the geofence (<=100 m) or arrivalStatus changes to Arrived, the next step transition evaluates rules and may auto-play per configuration And the gatingDecision and distanceMeters are recorded in the trigger log
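The gating decision above reduces to a small predicate. The `"Arrived"` status string and the parameter names are assumptions:

```python
def should_autoplay(distance_m, arrival_status, geofence_m=100):
    """Auto-play a lesson only inside the visit geofence and after the
    caregiver's arrival status flips to Arrived; otherwise the app may
    offer voice search instead."""
    return distance_m <= geofence_m and arrival_status == "Arrived"
```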
Transparent Audit Logging of Trigger Decisions
Given any lesson selection attempt (rule-based, voice search, or SDK request) When the selection is processed Then a log entry is created within 5 seconds containing: eventId, timestamp UTC, userId, visitId, workflowStep, triggerSource, selectedLessonId or null, matchedRuleIds[], priorities, specificity scores, tie-breaker applied, contextSnapshotId, gatingDecision, networkType, latencyMs from step event to playback start, and errorCode if any And the log entry is retrievable via Admin API GET /v1/trigger-logs?visitId={id} within 60 seconds of the event And PHI and PII are limited to coded identifiers; any ASR transcript is stored with entity redaction per policy And 99% of logs over a rolling 24h period meet the 60-second availability SLO
Rules Engine Configuration via Admin API
Given an admin has API credentials with scope coach.rules:write When the admin creates or updates a rule via POST or PUT /v1/coach/rules with fields {ruleId, match {payer, visitType, carePlanTasks[], diagnosisCodes[], workflowStep}, priority, lessonId, fallbackScopes[]} Then the rule is validated (required fields present; priority is integer 0..100; lessonId exists) and persisted And the new or updated rule is active for evaluations within 1 minute And conflicts are detected; if two rules have identical match criteria, the request is rejected with 409 and guidance And rule changes are versioned; previous versions remain queryable via GET /v1/coach/rules/{ruleId}?version={n}
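The validation and conflict rules above can be sketched as a single server-side check that maps to the 400/409 outcomes (a simplified illustration; field names follow the criterion, but the return convention is an assumption):

```python
def validate_rule(rule, existing_rules, known_lesson_ids):
    """Server-side checks before persisting a rule; returns (status, errors)."""
    errors = []
    for f in ("ruleId", "match", "priority", "lessonId"):
        if f not in rule:
            errors.append(f"missing required field: {f}")
    pri = rule.get("priority")
    if not isinstance(pri, int) or not 0 <= pri <= 100:
        errors.append("priority must be an integer in 0..100")
    if rule.get("lessonId") not in known_lesson_ids:
        errors.append("lessonId does not exist")
    if errors:
        return 400, errors
    # Conflict: identical match criteria on a different rule -> 409 with guidance.
    for other in existing_rules:
        if other["ruleId"] != rule["ruleId"] and other["match"] == rule["match"]:
            return 409, [f"identical match criteria as {other['ruleId']}"]
    return 200, []
```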
Prompt Repeat Verification
"As a caregiver, I want to confirm I’ve said the required phrase correctly so that I am confident my documentation meets payer checks."
Description

Offer optional call-and-repeat prompts that ask caregivers to repeat key phrases or steps. Implement lightweight on-device voice activity detection and keyword/phoneme matching to confirm completion without storing raw audio. Provide immediate feedback (success/try again/skip) and configurable thresholds per lesson. Handle noisy environments with adaptive gain control and background noise modeling. Record only structured outcomes (e.g., repeated, confidence score, attempts, duration) for compliance without capturing PHI. Allow agencies to disable or require verification per payer policy.

Acceptance Criteria
On-Device Verification and No Raw Audio Storage
Given the caregiver initiates a call-and-repeat prompt, When verification runs, Then all VAD and keyword/phoneme matching execute entirely on the device and function without network connectivity. Given any network connection is present, When verification is running, Then no raw audio, spectrograms, or intermediate acoustic features are transmitted off-device. Given verification completes, When buffers are finalized, Then raw audio is not written to disk and in-memory audio buffers are zeroed within 5 seconds of result. Given application logging is enabled, When verification occurs, Then logs contain no raw audio, phonetic strings, or verbatim transcripts.
Immediate Feedback and Per-Lesson Thresholds
Given a lesson with threshold T in [0.60, 0.95] and default T=0.75, When the caregiver finishes speaking (VAD end-of-speech), Then success/try again/skip feedback is presented within 800 ms in 95% of prompts and within 1.5 s worst-case. Given a computed confidence C, When C >= T, Then the result is marked success; otherwise try again. Given per-lesson settings, When an admin updates T, Then the new T applies to the next prompt without app restart. Given a max_attempts setting (default 3), When attempts reach max_attempts without success, Then the system offers skip and proceeds according to policy.
Noise Robustness and Adaptive Gain Control
Given ambient noise levels between 65–80 dBA or SNR ≥ -3 dB, When the caregiver repeats the canonical phrase, Then true positive rate ≥ 90% and false accept rate ≤ 3% across the reference test set. Given a transient noise spike ≥ 85 dBA during capture, When detection is affected, Then the prompt displays a "too noisy—try again" message and does not mark success. Given the caregiver is 0.3–1.0 m from the microphone, When speaking at normal volume, Then adaptive gain control prevents clipping (peak < -1 dBFS) and maintains average RMS between -26 and -16 dBFS. Given background noise modeling, When the environment changes mid-session, Then the noise profile updates within 2 seconds without requiring user input.
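The RMS window (-26 to -16 dBFS) behind the adaptive-gain requirement boils down to measuring level and computing a corrective gain. A minimal sketch, assuming float samples in [-1, 1] (a real AGC would smooth gain changes over time rather than jump):

```python
import math

def rms_dbfs(samples):
    """RMS level of float samples in [-1, 1], in dB relative to full scale."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def agc_gain_db(samples, low=-26.0, high=-16.0):
    """Gain (dB) that moves the measured RMS into [low, high]; 0 if already there."""
    level = rms_dbfs(samples)
    if level < low:
        return low - level
    if level > high:
        return high - level
    return 0.0
```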
Structured Outcome Recording Without PHI
Given a prompt attempt ends, When the result is determined, Then only these fields are persisted and synced: lesson_id, prompt_id, verified (boolean), confidence (0–1), attempts_count, speech_duration_ms, feedback_latency_ms, timestamp, and configured device/user identifiers. Given persistence and sync, When records are created, Then no raw audio, no partial/full transcripts, and no patient/client identifiers derived from speech are stored. Given an audit export is requested, When data is retrieved, Then it contains only the structured outcome fields and is sufficient to prove completion without PHI.
Policy-Based Enable/Require/Disable Controls
Given an admin sets verification policy at agency, payer, or lesson level to required, When a caregiver reaches a prompt, Then skip is hidden/disabled and progression is blocked until success or admin override. Given policy is optional, When a caregiver reaches a prompt, Then verification runs and skip is available without admin intervention. Given policy is disabled, When the lesson runs, Then no call-and-repeat prompt is presented, no audio is processed, and no verification outcome is recorded. Given policy changes are saved, When a device syncs, Then the new policy takes effect within 5 minutes and is applied offline thereafter.
Hands-Free Skip and Retry via Voice
Given verification is optional, When the caregiver says "skip" within 2 seconds of the feedback prompt, Then the attempt is recorded as not verified and the lesson proceeds. Given a try-again outcome, When the caregiver says "repeat" or "again," Then the prompt replays and a new attempt starts. Given gloves-on use, When the caregiver cannot tap the screen, Then all actions (start, repeat, skip) are available via voice commands with ≥ 95% recognition accuracy in the reference test set. Given max_attempts is reached, When the caregiver does not issue a command within 5 seconds, Then the UI presents skip (if optional) or replay (if required) automatically.
Policy‑Mapped Lesson CMS
"As a compliance manager, I want to author and publish 20–30 second lessons mapped to policies so that caregivers are coached on the exact wording required."
Description

Deliver an admin console to author, upload, and manage 20–30 second audio micro-lessons with enforced duration limits, optional scripts, and multi-language variants. Tag lessons by payer policy, procedure/task, diagnosis, and care setting; map each lesson to contextual trigger rules. Support recording, TTS synthesis, waveform trimming, loudness normalization, and versioning with approval workflow (draft→review→published) and rollback. Include effective/expiry dates, changelogs, and impact analysis (where used). Provide permissions (Admin, Compliance Reviewer) and audit logs for every publish action.

Acceptance Criteria
Publishing blocks audio outside 20–30 seconds
- Given an admin uploads or records lesson audio, when the measured duration is <20.0s or >30.0s, then save and publish actions are blocked with the message "Audio must be 20–30 seconds".
- Given TTS synthesis is requested, when the projected duration at the selected voice/rate would exceed 30.0s or be under 20.0s, then generation is disallowed and the UI offers rate adjustment to target 20–30s.
- When trimming is applied, then the updated duration is recalculated in real time and save is enabled only when the result is within 20.0–30.0s inclusive.
- On publish, a server-side validation confirms stored audio duration is within 20.0–30.0s inclusive; failures prevent publish and are logged.
Manage multi-language lesson variants with optional scripts
- Given a base lesson, an admin can add variants tagged with ISO 639-1 language codes (e.g., en, es, fr).
- For each language variant, the script field is optional; publish is allowed with audio only.
- Each language variant can attach either recorded audio or TTS-generated audio; at least one active language variant is required to publish the lesson.
- Tags and trigger rules are shared across variants at the lesson level; editing audio/script is per-variant.
- When a lesson is published, the system indicates which languages are available and flags any missing audio for configured target locales.
- Variant management supports enable/disable per language without affecting other variants.
Mandatory policy and clinical tags before publish
- Before publish, the lesson must have at least one tag in each required category: payer policy, procedure/task, diagnosis, and care setting; otherwise publish is blocked with a specific error listing missing categories.
- Tags are selected from managed vocabularies; free-text entries are not allowed.
- The admin console provides typeahead and multi-select for each category and displays the final tag set on the review screen.
- Lessons can be filtered by any tag category in the CMS list, and filter results update in <500 ms for datasets up to 5,000 lessons.
Map and validate contextual trigger rules with preview
- An admin can attach one or more trigger rules to a lesson using a rule builder supporting AND/OR, grouping, and operators (equals, contains any, in list) on fields including payer, procedure/task, diagnosis, care setting, visit type.
- At least one valid trigger rule is required to publish.
- The rule builder prevents saving syntactically invalid or unsatisfiable rules and displays inline validation errors.
- A preview tool accepts a sample visit context JSON and returns "Would trigger" or "Would not trigger" with the matching rule ID(s) within 1s.
- On publish, server-side validation re-evaluates rules; invalid references (e.g., deleted tag IDs) block publish with a clear error.
Record/TTS, trim waveform, and normalize loudness
- The CMS provides an in-browser recorder with waveform display and start/end trimming; trims are non-destructive and reversible.
- TTS is available with at least 3 voices per supported language and supports rate adjustment ±20% with live preview.
- All audio, whether recorded or TTS, is normalized to a target integrated loudness of −16 LUFS ±1 LU and true-peak ≤ −1.0 dBTP; normalization occurs automatically on save.
- Post-processing completes in ≤10s for a 30s clip on a nominal network; progress is shown and errors are surfaced with retry.
- After processing, playback level is consistent (per above targets) across lessons within ±1 LU when metered.
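The loudness-normalization gain is the distance to the −16 LUFS target, capped so the true peak never crosses the −1.0 dBTP ceiling. A simplified sketch that treats 1 LU as 1 dB of gain (function name and inputs are illustrative; real normalization would re-measure after applying gain):

```python
def safe_gain_db(measured_lufs, peak_dbtp, target_lufs=-16.0, ceiling_dbtp=-1.0):
    """Gain toward the target loudness, capped so the true peak stays
    at or below the ceiling. Negative gain (attenuation) is never capped,
    since reducing level cannot cause clipping."""
    wanted = target_lufs - measured_lufs   # LU treated as dB of gain
    headroom = ceiling_dbtp - peak_dbtp    # how much louder we may go
    return min(wanted, headroom)
```

For example, a clip measured at −23 LUFS with a −4 dBTP peak only gets +3 dB of the +7 dB it "wants", leaving the remaining correction to a limiter stage.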
Versioning with approval workflow, role permissions, rollback, and audit
- Workflow states: draft → review → published; only Admins can create/edit drafts and submit for review; only Compliance Reviewers can approve to published or reject back to draft with required comments.
- Published versions are immutable; creating changes spawns a new draft version.
- Rollback allows selecting a prior published version, creating a new published version identical to it; history retains all versions with timestamps.
- A changelog message is mandatory on submit for review, approve, reject, and rollback actions.
- Audit logs capture every publish action with: user ID, role, timestamp (UTC), lesson ID, version ID, diff summary, effective/expiry dates, and impacted entities count; logs are read-only and exportable as CSV.
Effective/expiry dates, changelog, and impact analysis
- Effective start date is required at publish; expiry date is optional; the system enforces not-before effective and auto-unpublishes at expiry (if set).
- The CMS warns owners 14 days before expiry via in-app notification when active trigger mappings exist.
- A changelog entry (free text, 10–500 chars) is required for every version transition and is displayed in the version history.
- Impact analysis is available before publish and lists where the lesson is used (trigger rules, curricula/collections, workflows) with counts and direct links; the publish confirmation screen displays this summary.
- Overlap detection flags when another published lesson with identical trigger rules and overlapping effective windows exists; publish requires explicit override acknowledgement.
Offline Lesson Caching
"As a caregiver, I want lessons to play even without network coverage so that I can stay compliant in low-connectivity homes."
Description

Pre-fetch and cache all lessons likely needed for the day’s scheduled visits, including language variants and prompts, with automatic low-bitrate fallback for weak networks. Expose cache status per route and per visit, retry policies, and automatic cleanup after visit completion or 7 days of inactivity. Ensure seamless playback when offline and graceful degradation to text-only scripting if audio assets are missing. Respect device storage constraints with configurable quota and LRU eviction.

Acceptance Criteria
Auto Prefetch for Today’s Route (All Variants and Prompts)
Given the caregiver has a scheduled route today with N visits and M required audio assets (lessons, call-and-repeat prompts, and language variants) When the app is opened or receives the route and the device has any network connectivity Then prefetch begins within 2 minutes of route availability And at least 95% of M assets are cached before the first visit’s scheduled start time And for each lesson, both caregiver-preferred and client-specified language variants are cached when available And assets are organized per visit to enable per-visit readiness evaluation
Cache and Retry Status Visibility per Route/Visit
Given the caregiver opens the route summary screen When viewing the route Then each visit shows a cache status badge: Ready (100%), Partial (1–99%), or Missing (0%), plus “x/y assets” and last sync time And a route-level readiness percentage is displayed (weighted by per-visit assets) When the caregiver opens a visit’s details Then a list of missing assets is shown with type, size, and priority And retry policy info is displayed: attempt count, last error, next retry ETA (exponential backoff starting at 30s, doubling to max 5m, up to 6 attempts), and a “Retry now” action And low/high bitrate selection used per asset is indicated
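The retry schedule named above (exponential backoff from 30 s, doubling to a 5-minute cap, up to 6 attempts) can be sketched as a pure function, which also makes the "next retry ETA" shown in the UI trivially computable:

```python
def retry_delay_seconds(attempt, base=30.0, cap=300.0, max_attempts=6):
    """Delay before retry `attempt` (1-based); None once attempts are exhausted."""
    if attempt > max_attempts:
        return None
    return min(base * 2 ** (attempt - 1), cap)
```

This yields 30, 60, 120, 240, 300, 300 seconds for attempts 1 through 6, then gives up.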
Automatic Low‑Bitrate Fallback on Weak Networks
Given the device’s measured download bandwidth is <256 kbps, or RTT >1500 ms, or packet loss >5% When prefetching or fetching on-demand audio assets Then the system automatically selects and downloads the low‑bitrate variant (≤32 kbps) for remaining and new requests without user action And if both variants are cached, playback prefers the highest available; otherwise uses low‑bitrate And the chosen bitrate is reflected in cache status and logs And once network quality recovers for 2 consecutive checks within 5 minutes, high‑bitrate assets are scheduled for background upgrade without blocking playback
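The weak-network test and the recovery rule can be sketched as two small predicates (the function names and the `"low"`/`"high"` labels are illustrative):

```python
def choose_bitrate(bandwidth_kbps, rtt_ms, packet_loss_pct):
    """Pick the audio variant from the measured network quality.

    Any one degraded metric (bandwidth <256 kbps, RTT >1500 ms,
    loss >5%) is enough to force the low-bitrate variant.
    """
    weak = bandwidth_kbps < 256 or rtt_ms > 1500 or packet_loss_pct > 5
    return "low" if weak else "high"

def should_upgrade(recent_checks, window=2):
    """Schedule a background high-bitrate upgrade only after `window`
    consecutive healthy checks, to avoid flapping on a noisy link."""
    return len(recent_checks) >= window and all(
        c == "high" for c in recent_checks[-window:]
    )
```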
Offline Playback without Network Dependence
Given the device is offline (airplane mode) and the visit’s required audio assets are cached When the caregiver starts a lesson and navigates between steps and call‑and‑repeat prompts Then the first audio starts within 500 ms, with zero stalls >500 ms during playback And step navigation latency is ≤250 ms And no network requests are made during playback (verified via offline logs) And completion state and playhead position are persisted locally and sync on next connectivity
Graceful Degradation to Text‑Only When Audio Missing
Given a required audio asset is unavailable (not cached) and the device is offline or download fails When the caregiver opens the lesson Then the text‑only script and call‑and‑repeat prompts render within 300 ms with clear “Audio unavailable” indicator And audio controls are disabled except a “Retry download” action (shown only when online) And the app does not crash or block workflow, and an analytics event is logged with error code and asset ID
Configurable Storage Quota with LRU Eviction
Given the cache quota Q is configurable (default 500 MB, range 100–1000 MB) and free disk floor F is 500 MB (configurable) When new downloads would exceed Q or reduce free disk below F Then least‑recently‑used assets not needed for any visit in the next 24 hours are evicted until both limits are satisfied And assets for an active visit or any remaining visit today are never evicted And eviction and quota decisions are visible in diagnostics with timestamps and byte counts And total cache usage never exceeds Q and device free space never drops below F
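The eviction policy above combines two constraints (quota Q and free-disk floor F) with an LRU order and a protected set. A sketch under assumptions (the asset dict shape is hypothetical; `protected` marks assets needed for any visit in the next 24 hours):

```python
def plan_eviction(assets, cache_quota_bytes, free_disk_bytes, disk_floor_bytes):
    """Return asset ids to evict, least-recently-used first.

    `assets`: dicts with keys id, size, last_used (epoch), protected.
    Frees enough bytes to satisfy BOTH the cache quota and the free-disk
    floor; protected assets are never evicted, so the plan may fall short
    if only protected assets remain.
    """
    used = sum(a["size"] for a in assets)
    over_quota = max(0, used - cache_quota_bytes)
    under_floor = max(0, disk_floor_bytes - free_disk_bytes)
    to_free = max(over_quota, under_floor)
    evicted, freed = [], 0
    for a in sorted(assets, key=lambda a: a["last_used"]):
        if freed >= to_free:
            break
        if a["protected"]:
            continue
        evicted.append(a["id"])
        freed += a["size"]
    return evicted
```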
Automatic Cleanup after Visit Completion or 7‑Day Inactivity
Given a visit is marked Completed When 10 minutes have elapsed and no follow‑up for the same client/lesson occurs in the next 24 hours Then all associated audio assets are purged from cache and indexes updated Given any cached asset has not been played and is not referenced by scheduled visits for 7 consecutive days When the daily maintenance job runs (e.g., 02:00 local time) Then the asset is deleted and space reclaimed And the route/visit cache readiness reflects cleanup on next sync
Audit‑Ready Coaching Logs
"As an operations manager, I want coaching activity logged to visit notes so that audit reports are complete and defensible."
Description

Automatically attach structured coaching events to the visit record: lesson ID/version, trigger source, timestamps, caregiver ID, playback completion, repeat verification outcome, and device/network state. Surface these in CarePulse’s one-click audit reports and export via CSV/JSON and API. Enforce HIPAA-aligned data minimization (no raw audio stored), role-based access, retention controls, and immutable audit trails with digital signatures. Provide dashboards for completion rates and gaps to inform policy updates and training needs.

Acceptance Criteria
Auto-attachment of structured coaching events to visit record
Given an in-progress visit with a caregiver using Hands‑Free Coach When a micro‑lesson is played or a call‑and‑repeat prompt is used Then the system attaches a coaching_event to the active visit containing: event_id (UUID), lesson_id, lesson_version, trigger_source ∈ {voice, tap, schedule, policy_alert, sensor_alert}, start_timestamp and end_timestamp (UTC ISO‑8601, ms precision), caregiver_id, playback_completion ∈ [0..100], repeat_verification_outcome ∈ {pass, fail, n/a}, device_state {battery_percent, os_version, app_version, device_model}, network_state {status ∈ {online, offline}, rtt_ms if online} and persists it within ≤1s of playback end. Given multiple coaching events occur in a visit When events are saved Then each has a unique event_id and is retrievable in chronological order by start_timestamp. Given the device is offline at event end When connectivity is restored Then the encrypted event is synced within ≤5 minutes with original timestamps preserved and a sync_timestamp recorded. Given timestamps are recorded When validated Then clock skew > 2 minutes is corrected using server receipt time while storing both device and server timestamps.
One-click audit report includes coaching logs
Given a completed visit with ≥1 coaching event and a user with Auditor role When One‑Click Audit Report is generated Then the report includes a Coaching Events section listing for each event: lesson_id/version, trigger_source, start/end timestamps, caregiver_id (masked per policy), playback_completion, repeat_verification_outcome, device/network state, and signature verification status, in chronological order, and renders in ≤5 seconds for ≤100 events. Given a visit has 0 coaching events When the report is opened Then the report displays "No coaching events recorded" and omits empty tables. Given report content When inspected Then it contains no raw audio or transcripts.
CSV/JSON export and API access for coaching logs
Given an authorized Operations Manager selects Export CSV for a date range and agency When the export completes Then the CSV contains one row per coaching event with defined headers (event_id, visit_id, lesson_id, lesson_version, trigger_source, start_timestamp, end_timestamp, caregiver_id, playback_completion, repeat_verification_outcome, device_state, network_state, signature_status) and the row count equals the number of events returned; file downloads in ≤30 seconds for ≤50k events. Given Export JSON is selected When the export completes Then the file is a valid JSON array conforming to the documented schema and contains no raw audio. Given an API client with token scope coaching:read calls GET /api/v1/coaching-events with filters {date_from, date_to, caregiver_id?, lesson_id?, visit_id?} When the request is processed Then the API returns 200 with paginated results (default page_size=100, max=1000), includes total_count, supports sort=start_timestamp, enforces rate limit 60 req/min, and returns 401/403 for invalid/insufficient auth. Given ETag caching headers are provided When the same query is repeated with If-None-Match and no data changes Then the API returns 304 Not Modified.
HIPAA data minimization and role-based access enforcement
Given any micro‑lesson playback or repeat verification When audio is processed Then the system stores no raw audio, voiceprints, or transcripts at rest; only derived fields (e.g., repeat_verification_outcome, optional confidence_score) and lesson metadata are persisted. Given any UI, export, or API response for coaching logs When content is reviewed Then no raw audio or transcripts are present. Given a user role attempts to access coaching logs When authorization is evaluated Then permissions are enforced: Admin/Auditor/Compliance/Ops Manager can view all events; Caregiver can view only their own events; other roles receive 403; all access attempts are logged with user_id, timestamp, and outcome. Given a role change is made When the user performs a subsequent request Then the new permissions take effect within ≤1 minute.
Retention controls and defensible purge
Given a retention policy is configured to N years When an event exceeds its retention period Then event data (excluding immutable audit markers) is purged within ≤24 hours while retaining an immutable deletion marker (event_id, hash, deleted_at, policy_id) in the audit trail. Given a legal hold is placed on an event, visit, caregiver, or agency When retention would otherwise purge data Then purge is deferred until the hold is cleared; the deferral is logged. Given an admin initiates a manual purge for a visit via UI or API When executed Then the system requires dual authorization by two distinct admins within 24 hours, presents an impact summary (counts by entity), and logs the request and outcome with digital signatures.
Immutable audit trail with digital signatures
Given any create/read (sensitive)/export/update/delete action on coaching logs or settings When the action occurs Then an append‑only audit entry is written with actor, action, target_id, timestamp (UTC), request_id, pre/post hashes where applicable, and a digital signature (e.g., Ed25519) and persisted in WORM storage. Given an auditor runs Verify Audit Log for a period When verification completes Then 100% of entries validate signature and chain integrity; any failure triggers a high‑severity alert and is surfaced in the report. Given an attempt is made to modify or delete an existing audit entry When processed Then the system blocks the change; only new entries can be appended; the attempt is logged and alerted.
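The append-only, chain-integrity property can be sketched with a hash chain where each entry commits to its predecessor. This is a minimal illustration only: HMAC-SHA256 stands in for the Ed25519 signatures named in the criterion, and the entry fields are simplified.

```python
import hashlib
import hmac
import json

class AuditChain:
    """Append-only log; each entry hashes the previous entry and is MACed.

    Tampering with any stored record breaks the hash chain from that
    point on, which is what `verify` detects.
    """

    def __init__(self, key: bytes):
        self.key = key
        self.entries = []

    def append(self, record: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        sig = hmac.new(self.key, digest.encode(), hashlib.sha256).hexdigest()
        entry = {"record": record, "prev": prev_hash, "hash": digest, "sig": sig}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps({"record": e["record"], "prev": prev}, sort_keys=True)
            if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            expected = hmac.new(self.key, e["hash"].encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(e["sig"], expected):
                return False
            prev = e["hash"]
        return True
```

In production the equivalent structure would live in WORM storage with asymmetric signatures, so verifiers do not need the signing key.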
Completion rates and gaps dashboards
Given a user with Ops Manager or Compliance role selects a date range and filters (agency, caregiver, lesson) When the Coaching Completion Dashboard loads Then it displays KPIs: playback completion rate, repeat verification pass rate, lessons with lowest compliance, caregiver‑level gaps, and trend over time; data freshness SLA ≤15 minutes. Given a KPI or chart is clicked When drill‑down is requested Then the user sees the underlying events/visits with filters applied within ≤3 seconds for ≤5k events. Given the user exports dashboard data When Export CSV is clicked Then the CSV reflects the current filters and the row count matches the displayed dataset. Given dashboard content is rendered When reviewed Then no raw audio is present; only aggregate metrics and event metadata are displayed.
Multilingual & Accessibility Audio
"As a bilingual caregiver, I want lessons in my preferred language and audio settings so that I can understand and repeat prompts accurately."
Description

Support lesson content and prompts in multiple languages and dialects with caregiver-level language preferences and automatic fallback. Provide adjustable playback speed, volume boost, and optional haptic cues for noisy environments or hearing assistance. Validate TTS quality (SSML support, pronunciation dictionaries) and allow upload of human-recorded voice. Ensure WCAG-compliant captions/transcripts accessible when hands-free is not required and provide quick-switch language commands via voice.

Acceptance Criteria
Preferred Language with Automatic Fallback
Given a caregiver profile with language preference "Spanish (Mexico)" and agency default "English (US)" And the lesson "Hand Hygiene" has audio versions for base "Spanish" and "English (US)" When the caregiver starts the lesson in Hands‑Free Coach Then the system plays the "Spanish" audio for that lesson And the active language is announced/indicated within 1 second And fallback order is: preferred dialect -> preferred base language -> agency default dialect -> agency default base language -> product default English (US) And the system logs chosen language, dialect, and fallback reason in session metadata
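The fallback order in this criterion is a straightforward ordered lookup. A minimal sketch, assuming BCP-47-style tags where the base language is the part before the first hyphen (function name and the returned reason strings are illustrative):

```python
def resolve_language(preferred, agency_default, available, product_default="en-US"):
    """Pick a lesson audio language.

    Order: preferred dialect -> preferred base -> agency dialect ->
    agency base -> product default. Returns (chosen_tag, reason) so the
    fallback reason can be written to session metadata.
    """
    def base(tag):
        return tag.split("-")[0]

    candidates = [
        (preferred, "preferred_dialect"),
        (base(preferred), "preferred_base"),
        (agency_default, "agency_dialect"),
        (base(agency_default), "agency_base"),
        (product_default, "product_default"),
    ]
    for tag, reason in candidates:
        if tag in available:
            return tag, reason
    return product_default, "product_default_missing"
```

The "Spanish (Mexico)" example above resolves to the base "es" audio with reason `preferred_base`.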
Adjustable Playback Speed, Volume Boost, and Haptic Cues
Given a micro‑lesson is playing When the caregiver issues voice commands: "speed up", "slow down", "set speed to one point five", "volume boost on/off", or "haptics on/off" Then playback speed changes to one of [0.75x, 1.0x, 1.25x, 1.5x] within 500 ms without pausing audio And volume boost cycles baseline -> +6 dB -> +12 dB within 500 ms And when haptics are on, a 200–300 ms vibration occurs before call‑and‑repeat prompts and at segment transitions And changes are confirmed with a tone and/or haptic tick within 300 ms And settings persist for the current session and default to saved caregiver preferences on next session
TTS SSML and Pronunciation Dictionary Quality
Given a lesson rendered via TTS with SSML tags <prosody>, <break>, <emphasis>, <say-as>, <phoneme>, and <lang> When the audio is generated Then all tags are honored without SSML parsing errors And custom pronunciation dictionary entries are applied (e.g., medical terms and agency names) And ≥95% of words in a 100‑word test list with custom entries are pronounced per provided phoneme/IPA guidance as validated by a bilingual reviewer And SSML <lang> segments switch to the correct voice/language for those spans
Human‑Recorded Voice Upload and Selection
Given a content author uploads a human‑recorded audio file for a lesson When the file is WAV or MP3, mono, 16–48 kHz, 20–30 seconds, ≤10 MB Then the upload succeeds, silence is trimmed to ≤300 ms total, and loudness normalized to −16 ±2 LUFS And a preview is available within 3 seconds And the file can be assigned to a specific language/dialect for that lesson And playback uses the human recording for that language/dialect; otherwise it falls back per the defined order And uploaded files are virus‑scanned and stored encrypted at rest
WCAG‑Compliant Captions and Transcripts
Given hands‑free mode is off and the screen is active When a micro‑lesson plays Then captions can be toggled and are synchronized within ±500 ms of speech And caption text meets WCAG 2.2 AA contrast (≥4.5:1) and can be resized to 200% without loss And captions/transcripts carry correct language tags and are readable via screen readers with proper focus order And a full transcript in the active language is available in‑app and downloadable as .txt And captions are dismissible via voice and touch and do not obscure critical controls
Quick‑Switch Language via Voice Command
Given a micro‑lesson is playing When the caregiver says a supported command such as "Switch to Spanish", "En español", or "Change language to Tagalog" Then the active language switches within 800 ms if installed And the system confirms the change via brief voice and haptic cue And if the requested dialect is unavailable, the nearest available option is announced and applied or a confirmation is requested And recognition accuracy for supported switch commands is ≥95% in quiet and ≥85% with 70 dBA background noise
Per‑Caregiver Language Settings Persistence and Audit
Given a caregiver sets language to "Tagalog (Filipino)" When they start subsequent Hands‑Free Coach lessons or after sign‑out/sign‑in Then that preference is applied by default And any fallback event records timestamp, lesson ID, requested dialect, selected language, and reason in an audit log And supervisors can view this history in the caregiver profile And applying a new preference takes effect within 2 seconds

Nudge Studio

An authoring hub for educators to create, localize, and A/B test cards. Drag in screenshots, short clips, or sample text; tag to forms, payers, and triggers; schedule rollouts; and track impact—no IT ticket required.

Requirements

Drag-and-Drop Card Composer
"As an educator, I want to drag and drop media to compose in-app cards so that I can quickly publish clear guidance without needing IT support."
Description

Provide a WYSIWYG authoring interface to create in-app guidance cards with drag-and-drop support for images, short video clips, and sample text. Include rich text formatting, reusable templates, inline media embedding, and metadata fields (title, tags, objectives). Enable live mobile previews for iOS/Android form factors, autosave, versioning with compare/rollback, and accessibility checks (contrast, captions, alt text). Validate content for safe HTML and size limits. Integrate with CarePulse objects so cards can reference forms, visit types, and payer-specific policies. Result: educators can rapidly produce polished, compliant cards without engineering assistance, and caregivers will see consistent, high-quality content in the mobile app.

Acceptance Criteria
Drag-and-Drop Media with Inline Embedding and Size Limits
Given the card composer is open and the cursor is placed, when a user drags an image (jpg/png/gif) or short video (mp4/mov) into the editor, then the file uploads, shows a progress indicator, and is inserted inline at the cursor with an appropriate placeholder/thumbnail and playback controls for video. Given paste-from-clipboard contains an image, when the user pastes, then the image is uploaded and inserted inline at the cursor. Given configured media size limits are set to images=5 MB and videos=50 MB for the test environment, when a user attempts to add a file exceeding the respective limit, then the upload is blocked and an error message specifies the exceeded limit. Given a dropped file type is not in the allowed list, when the user drops it, then the system rejects it and displays an error listing allowed types. Given an upload in progress, when the user cancels the upload, then the partially uploaded asset is discarded and no placeholder remains in the content.
Rich Text Formatting and Safe HTML Sanitization
Given the user selects text, when Bold/Italic/Underline/Heading (H1–H3)/Bulleted List/Numbered List/Link formatting is applied, then the content updates immediately and persists on save and preview. Given the user pastes HTML containing script tags, event handlers (e.g., onclick), or disallowed iframes, when the content is inserted, then the sanitizer strips disallowed attributes/tags and preserves only the allowed whitelist without breaking structure. Given malformed HTML is pasted, when it is sanitized, then the resulting content is valid and renders identically in composer and mobile previews. Given a link is inserted, when the user sets target and title, then the attributes are preserved and pass sanitizer rules.
Reusable Templates: Create, Save, Apply
Given a card is open, when the user chooses Save as Template and provides a unique template name and optional tags, then the system creates a reusable template capturing layout, formatting, and placeholders but not card-specific audit/version history. Given templates exist, when the user applies a selected template to a new card, then the template’s content and metadata placeholders populate the editor without overwriting any existing unsaved content unless the user confirms replace. Given a template is updated, when the user opts to apply updates to an existing card, then a preview diff is shown and, upon confirm, changes are merged without duplicating media assets. Given template permissions are standard, when a user without edit permissions attempts to modify a template, then the action is blocked with a permission error.
Metadata and CarePulse Object Linking
Given the metadata panel is open, when the user enters a Title between 3 and 120 characters and Objectives between 10 and 300 characters, then the Save action is enabled; otherwise, inline validation messages are shown and Save is disabled. Given the user adds up to 10 Tags with max 30 characters each, when tags are entered, then they are tokenized and searchable. Given CarePulse Forms/Visit Types/Payer Policies are searchable, when the user types at least 2 characters in the reference picker, then matching objects return within 500 ms and can be attached as references (chips) to the card. Given a referenced CarePulse object is deleted or the user loses access, when the card is opened or published, then the card displays a Missing Reference warning and publish is blocked until the reference is removed or replaced. Given a referenced object is attached, when the card is published, then the reference metadata is stored with object IDs and is retrievable via API.
Live Mobile Previews for iOS and Android
Given the user edits content, when a change is made, then the iOS and Android previews update within 2 seconds to reflect the latest content, styles, and inline media. Given device presets are available (iPhone/Android Phone/Tablet), when a preset is selected, then the preview viewport adjusts to the correct dimensions and breakpoints. Given a video clip is embedded, when previewed, then the video shows a playable control and a poster frame; if the source is still uploading, a processing state is shown. Given light/dark mode toggles, when the mode is switched, then the preview accurately renders the theme and contrast warnings update accordingly.
Autosave and Versioning with Compare/Rollback
Given the autosave interval is configured to 10 seconds, when the user is actively editing, then changes are autosaved at least every 10 seconds and a Saved indicator with timestamp appears without interrupting typing. Given the browser/app closes unexpectedly, when the user reopens the composer within 30 minutes, then the last autosaved draft is restored and the user can choose to continue or discard. Given a manual Save or Publish occurs, when committed, then a new immutable version is created recording user, timestamp, and change note (optional). Given two versions exist, when the user selects Compare, then a side-by-side diff shows text and metadata changes, and media add/remove/replace events with thumbnails. Given a prior version is selected, when the user clicks Rollback and confirms, then the editor content and metadata revert to that version and a new version is created documenting the rollback action. Given another editor has the same card open, when a save is attempted, then the system detects the conflict and prompts to review differences before proceeding to prevent silent overwrite.
Accessibility Checks: Contrast, Captions, Alt Text
Given the card contains text on backgrounds, when the accessibility checker runs automatically on save and preview, then any text failing WCAG 2.1 AA contrast (4.5:1 normal, 3:1 large) is flagged with location and suggested fixes. Given an image is inserted, when the user attempts to publish without alt text, then publishing is blocked and an inline prompt requires alt text or marks the image as decorative with explicit confirmation. Given a video with audio is embedded, when the user attempts to publish without captions or a transcript (SRT/VTT or text), then publishing is blocked until captions/transcript are provided. Given all accessibility issues are resolved or acknowledged (where allowed), when the user publishes, then the accessibility check passes with a green status and is logged with the publish event.
Targeting & Trigger Rules
"As an educator, I want to target cards to specific forms, payers, and event triggers so that caregivers see only the most relevant guidance at the right time."
Description

Implement a rule builder to target cards by audience and context and to control when they appear. Support tagging to forms, payers, visit types, caregiver roles, locations, routes, and states. Provide event-based triggers such as “on open form,” “on route start,” “on missing signature,” or “after visit if time > SLA,” including IoT-derived signals (e.g., door sensor). Allow inclusion/exclusion rules, frequency caps, and priority resolution when multiple cards match. Ensure low-latency evaluation on-device with offline caching and server-side sync. Outcome: caregivers receive relevant nudges at the precise moment of need, improving compliance and reducing documentation time.

Acceptance Criteria
Audience Tag Matching Across Contexts
Given a card with include tags Form=F1, Payer=P1, VisitType=V1, Role=R1, Location=L1, Route=Rt1, State=S1 and a caregiver session with matching context, When the context is evaluated on-device, Then the card is eligible. Given a card with include tags for one or more dimensions and the caregiver lacks at least one required tag, When evaluated, Then the card is not eligible. Given a card with both include and exclude tags where the caregiver matches any exclude tag, When evaluated, Then the card is not eligible even if all include tags match. Given a card with no tag defined for a dimension, When evaluated, Then that dimension is treated as a wildcard. Given a representative mid-tier device and cached rules, When evaluating audience tags, Then the evaluation completes within 150 ms (p95) and 300 ms (p99) without a server call.
Event-Based Triggers (Form, Route, Signature, SLA, IoT)
Given a form F1 opens, When a card has trigger=on_open_form and matches audience, Then the nudge renders within 300 ms after the form is loaded and is shown once per form open. Given route status transitions to In Progress, When a card has trigger=on_route_start and matches audience, Then the nudge renders within 1 s and fires once per route. Given an attempt to submit visit notes without a required signature, When a card has trigger=on_missing_signature and matches audience, Then the nudge displays within 300 ms and does not re-display until the signature is captured or the form is closed. Given a completed visit whose actual duration exceeds the configured SLA threshold, When a card has trigger=after_visit_if_time_gt_SLA and matches audience, Then the nudge displays within 5 s of check-out. Given an IoT door sensor event "door_open" associated with the current patient address, When a card has trigger=on_door_open and matches audience, Then the nudge displays within 2 s of receiving the event and deduplicates repeated events within 60 s. Given the device is offline, When triggers occur that are detectable locally, Then local triggers fire and server-only triggers defer until connectivity returns.
Inclusion vs. Exclusion Rule Precedence
Given include list VisitType=Admission and exclude list Payer=Medicaid, When a caregiver with VisitType=Admission serves a Medicaid patient, Then the card is not eligible. Given include list VisitType=Admission and an empty exclude list, When a caregiver with VisitType=Admission serves a non-Medicaid patient, Then the card is eligible. Given an empty include list and an exclude list containing Payer=P1, When context has Payer=P1, Then the card is not eligible. Given both include and exclude lists are empty, When evaluated, Then the card is eligible for all audiences subject to trigger conditions. Given hierarchical tags where Location L1 is within State S1, When include State=S1 and exclude Location=L1, Then caregivers in L1 are not eligible and caregivers in other locations within S1 are eligible.
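The wildcard and exclude-wins precedence described in these criteria maps to a small evaluation function. A minimal Python sketch, with hypothetical rule and context shapes:

```python
def is_eligible(card_rules: dict, context: dict) -> bool:
    """Audience check per the precedence rules: excludes always win, and any
    dimension with no include tag is treated as a wildcard."""
    include = card_rules.get("include", {})  # e.g. {"VisitType": {"Admission"}}
    exclude = card_rules.get("exclude", {})
    # Any matching exclude tag disqualifies, even if all includes match.
    for dim, values in exclude.items():
        if context.get(dim) in values:
            return False
    # Every include dimension must match; absent dimensions are wildcards.
    for dim, values in include.items():
        if context.get(dim) not in values:
            return False
    return True
```

Hierarchical tags (Location within State) would be expanded into the context before this check so that an exclude on L1 can fire even when the include matches at the S1 level.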
Frequency Capping and Throttling
Given cap N=1 per caregiver per card per day, When the card is shown once, Then subsequent matching triggers that day do not display the card. Given cap N=3 per caregiver per card per rolling 7 days, When the card has been shown 3 times within the window, Then further displays within 7 days are suppressed. Given a frequency cap is changed in Nudge Studio, When the device syncs successfully, Then the new cap takes effect within 60 s and applies to future displays. Given the app restarts or the device reboots, When evaluating caps, Then prior exposure counts persist and are respected. Given suppression by cap is active, When the window expires or a new route/day boundary is reached (as configured), Then the card becomes eligible again on the next matching trigger.
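A rolling-window cap check can be sketched in a few lines of Python (the timestamp list stands in for the persisted exposure history the criteria require to survive restarts):

```python
from datetime import datetime, timedelta

def is_capped(shown_at: list, cap: int, window: timedelta, now: datetime) -> bool:
    """True if the card already hit its display cap within the rolling window.

    shown_at: persisted display timestamps for one caregiver/card pair.
    """
    recent = [t for t in shown_at if now - t < window]
    return len(recent) >= cap
```

Older timestamps simply age out of the window, which is what makes the card eligible again once the window expires.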
Priority Resolution for Multiple Matching Cards
Given two or more cards are eligible for the same trigger and audience, When evaluated, Then the card with the highest Priority value is selected. Given a tie on Priority, When evaluated, Then select the card with the most recent publish_time and, if still tied, select the lexicographically smallest card_id to ensure determinism. Given card A is selected and displayed for a trigger, When lower-priority card B is also eligible, Then card B is not displayed in the same trigger slot. Given priorities are updated in Nudge Studio, When the device next syncs successfully, Then resolution uses updated priorities and yields the same selection offline and online for identical inputs.
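The deterministic tie-break (highest Priority, then most recent publish_time, then lexicographically smallest card_id) maps directly onto a composite sort key. A Python sketch, with publish_time as an epoch number for illustration:

```python
def resolve(eligible_cards: list) -> dict:
    """Pick one card deterministically: highest priority wins; ties go to the
    most recent publish_time, then the lexicographically smallest card_id."""
    return min(
        eligible_cards,
        key=lambda c: (-c["priority"], -c["publish_time"], c["card_id"]),
    )
```

Because the key depends only on the inputs, offline and online evaluation yield the same selection for identical inputs, as the criteria require.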
On-Device Evaluation, Offline Caching, and Sync
Given rules are cached locally with version and checksum, When the app is offline, Then audience evaluation and trigger checks proceed using cached rules. Given the device receives updated rules from the server, When sync completes, Then the cache updates atomically and the new rules are used for the next evaluation without partial application. Given prolonged offline operation up to 7 days, When evaluating with cached rules, Then functionality continues without server calls and a re-sync is attempted within 30 s of connectivity. Given a representative mid-tier device and cached rules, When evaluating eligibility for a trigger event, Then p95 local evaluation latency is <=150 ms and p99 is <=300 ms. Given events and impressions are recorded while offline, When connectivity is restored, Then they sync to the server within 60 s without duplication.
Scheduled Rollouts & Approval Workflow
"As a program manager, I want to schedule rollouts and require approvals so that changes are controlled, auditable, and can be rolled back if issues arise."
Description

Enable creators to schedule card releases with start/end times, phased percentage rollouts, and blackout windows. Add a lightweight approval workflow with configurable roles (creator, reviewer, approver) and tracked sign-offs. Maintain a change log capturing who changed what and when, with the ability to pause or roll back a rollout instantly. Provide notifications to stakeholders on submit/approve/publish events. Integrate with CarePulse tenancy to scope rollouts to specific agencies or regions. Outcome: controlled, auditable deployments that minimize risk and align with compliance requirements.

Acceptance Criteria
Scheduled Release Windows with Time Zone Support
Given a creator sets a rollout start and end date-time with a time zone, When the schedule is saved, Then the system validates that end > start, the time zone is recognized, and returns field-level errors for invalid inputs. Given a rollout is approved and current UTC time is before the start time, When targeted users access CarePulse, Then the card is not delivered and the rollout status displays "Scheduled". Given current UTC time is between the approved start and end time, When targeted users access CarePulse, Then the card is eligible for delivery subject to rollout percentage and blackout rules. Given current UTC time is at or after the approved end time, When targeted users access CarePulse, Then the card is not delivered and the rollout status displays "Ended". Given users view the schedule details, When rendered in the UI, Then all times are displayed in the viewer’s local time zone with UTC shown in parentheses.
Phased Percentage Rollouts
Given a creator defines percentage phases (e.g., 10%, 50%, 100%) with effective times relative to start, When the plan is saved, Then the system enforces chronological order and a maximum cumulative allocation of 100%. Given the rollout starts, When assigning recipients, Then user/device assignment is deterministic and stable across sessions so individuals remain in or out of the cohort per phase. Given a phase change time is reached, When the system evaluates eligibility, Then the delivered cohort increases to the configured percentage within 5 minutes without exceeding the target. Given a creator edits the percentage plan, When the change is approved, Then the new plan takes effect within 5 minutes and the change is recorded in the audit log.
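Deterministic, stable cohort assignment is commonly implemented by hashing a stable ID into a bucket, so raising the phase percentage only ever adds users and never removes them. A Python sketch (the hash scheme is an assumption, not a specified implementation):

```python
import hashlib

def in_cohort(user_id: str, rollout_id: str, percent: float) -> bool:
    """Stable assignment: the same user always lands in the same bucket for a
    given rollout, so growing percent 10 -> 50 -> 100 only adds users."""
    digest = hashlib.sha256(f"{rollout_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF * 100  # roughly uniform in [0, 100]
    return bucket < percent
```

Salting the hash with rollout_id keeps cohorts independent across rollouts, so the same users are not always first in line.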
Blackout Windows Enforcement
Given blackout windows are configured for a rollout, When the current time falls within a blackout window in the target region’s local time zone, Then no deliveries occur and no new users are added to the cohort. Given overlapping or adjacent blackout windows exist, When evaluating delivery eligibility, Then the blackout periods are merged and enforced without gaps. Given a blackout window is added or modified after approval, When the change is approved, Then deliveries scheduled during the blackout are skipped and the status shows "Blocked by blackout" during that period.
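Merging overlapping or adjacent windows before enforcement is a standard interval merge. A Python sketch using plain (start, end) pairs; real windows would be timezone-aware datetimes in the target region's local time zone, per the criteria:

```python
def merge_windows(windows):
    """Merge overlapping or adjacent (start, end) blackout windows."""
    merged = []
    for start, end in sorted(windows):
        if merged and start <= merged[-1][1]:  # overlaps or touches the previous
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def in_blackout(t, windows) -> bool:
    """True if time t falls inside any merged blackout window."""
    return any(start <= t < end for start, end in merge_windows(windows))
```

Merging first guarantees the "enforced without gaps" behavior for overlapping or back-to-back windows.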
Role-Based Approval Workflow and Notifications
Given a Creator prepares a draft rollout, When they submit for review, Then Reviewers receive in-app and email notifications within 2 minutes containing the card title, scope, scheduled start, and a link to review. Given all required Reviewers approve, When the item advances to Approver, Then Approvers are notified within 2 minutes and only users with the Approver role can approve. Given an Approver approves the rollout, When the approval is recorded, Then the rollout is locked from change (except via change request), its status becomes "Approved", and Creators/Reviewers/Approvers receive a confirmation notification. Given any approver or reviewer requests changes, When the request is submitted with comments, Then the rollout returns to "Draft", prior approvals are invalidated, and all stakeholders are notified. Given a user without the necessary role attempts to perform an action (submit, approve, publish, pause, rollback), When the action is attempted, Then the system denies the action and records the attempt in the audit log.
Change Log and Audit Trail
Given any change to content, tags, triggers, schedule, rollout percentages, blackout windows, target scope, or approvals, When the change is saved, Then the change log records actor ID, role, timestamp (UTC), field name, previous value, new value, and version number. Given users with Reviewer or higher permissions view the change log, When they open history, Then they can see a chronological list and per-change diff without the ability to edit or delete entries. Given an attempt is made to modify or delete a change-log entry, When executed by any role, Then the system prevents the modification and records the denied attempt.
Pause and Rollback Controls
Given a rollout is active, When an authorized user clicks Pause, Then no new deliveries occur within 2 minutes, the rollout status changes to "Paused", and the event is logged. Given a rollout is paused, When an authorized user clicks Resume, Then deliveries resume using the current schedule and percentage plan and the event is logged. Given a prior approved version exists, When an authorized user triggers Rollback to that version, Then the card content and configuration revert to the selected version, approvals are required before resuming delivery, and the event is logged. Given a rollback or pause is initiated, When executed, Then stakeholders (Creator, Reviewers, Approvers) receive notifications within 2 minutes with the action type, actor, timestamp, and link to details.
Tenancy Scoping to Agencies and Regions
Given a creator selects specific agencies and/or regions in CarePulse, When the rollout is validated, Then the system requires at least one target scope and prevents submission without it. Given a rollout is approved with defined scopes, When end users from non-targeted agencies or regions query for cards, Then the API returns no card for them. Given multiple regions with different time zones are targeted, When evaluating start/end times and blackout windows, Then evaluations are performed in each region’s local time zone. Given scope membership changes in CarePulse, When the directory sync updates, Then subsequent deliveries honor the updated scope within 15 minutes.
Media Transcoding & Asset Management
"As an educator, I want media to be automatically optimized and securely delivered so that cards load quickly and reliably for caregivers in the field."
Description

Introduce an asset pipeline for uploaded screenshots and short clips: virus scan, metadata extraction, automatic transcoding to mobile-optimized formats and bitrates, thumbnail generation, caption/alt-text attachment, and image compression. Store assets in encrypted, versioned storage with signed URLs and CDN delivery for low-latency global access. Provide usage quotas, lifecycle policies (expiry/archival), and duplicate detection. Expose an asset library with search, tags, and reuse across cards. Outcome: fast-loading, secure media that performs well on variable mobile networks and devices.

Acceptance Criteria
Upload Intake: Virus Scan and Metadata Extraction
Given a user uploads an image or short video (<= 500 MB) to Nudge Studio, When the upload completes, Then the asset is scanned for malware before any processing or storage in the asset library. Given the scanner detects malware, When the scan completes, Then the upload is rejected, the asset is not stored, the user is shown an error message with a unique incident ID, and an audit log entry is created. Given the scanner finds no threats, When processing begins, Then technical metadata is extracted and stored (MIME type, dimensions, duration, codec, filesize, EXIF where available, and SHA-256 content hash). Given an unsupported file type is uploaded, When validation runs pre-upload, Then the upload is blocked with a list of allowed types displayed. Given a successful upload, When viewed in the asset details panel, Then extracted metadata and content hash are visible to the user.
Automatic Transcoding for Mobile Delivery
Given a short video clip is uploaded, When processing completes, Then the system generates at least three bitrate renditions (low: ≤360p at ≤400 kbps; medium: ≤480p at ≤800 kbps; high: ≤720p at ≤1500 kbps) in H.264/AAC MP4 and an adaptive HLS manifest. Given the original aspect ratio, When renditions are created, Then the aspect ratio is preserved without unintended stretching or cropping. Given the asset is requested from a mobile device on a constrained network (simulated 1.5 Mbps), When playback starts, Then the HLS manifest selects an appropriate rendition and begins playback within 3 seconds in 95% of tests. Given a transcoding failure occurs, When processing fails, Then the asset status is marked "Processing Failed," the user is notified, logs capture error details, and no partial renditions are published.
Thumbnail Generation
Given a video asset is uploaded, When processing completes, Then at least one poster thumbnail is generated at 320x180 and 640x360 and stored with the asset. Given an image asset is uploaded, When processing completes, Then responsive thumbnails are generated at widths 320, 640, and 1280 with optimized compression. Given a user views the library grid, When thumbnails are loaded, Then the appropriate size is delivered based on the client viewport and device pixel ratio. Given the asset owner uploads a custom poster, When saved, Then the custom thumbnail supersedes the auto-generated one.
Caption and Alt-Text Attachment
Given a video asset, When a user uploads a WebVTT (.vtt) caption file, Then the caption track is validated, attached to the asset, and exposed in playback. Given an image asset, When saved without alt text, Then the system requires alt text or an explicit "decorative" flag before the asset can be published to a card. Given localized captions, When multiple caption files are uploaded with language tags (BCP 47), Then the asset stores and serves the correct track based on the viewer's locale with a manual override. Given alt text is added, When the asset is embedded in a card, Then the alt attribute is present in the rendered markup and is retrievable via API.
Secure Versioned Storage, Signed URLs, and CDN Delivery
Given any asset at rest, When inspected in storage, Then it is encrypted with AES-256 (or provider-managed equivalent) and bucket policies disallow public access. Given a request to view or download an asset, When the link is generated, Then it is a time-limited signed URL with configurable expiry (default 15 minutes) and single-tenant scoping. Given an asset is updated with a new file, When saved, Then a new immutable version is created with previous versions still retrievable by admins and referenced cards continuing to resolve to the pinned version unless explicitly updated. Given global access, When an asset is requested from North America, Europe, and APAC test regions, Then the CDN serves the asset with p95 TTFB ≤ 200 ms for thumbnails and ≤ 400 ms for 720p video segment requests. Given a signed URL is revoked or expired, When accessed, Then the request is denied with 403 and no bytes of the asset are returned.
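Time-limited signed URLs are typically an HMAC over the path, tenant, and expiry. A minimal Python sketch; the key handling and query-parameter names are illustrative, and a production system would use a managed signing key and the CDN's native URL-signing:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"demo-signing-key"  # hypothetical; production uses a managed key

def sign_url(path: str, tenant_id: str, ttl_seconds: int = 900) -> str:
    """Produce a tenant-scoped URL that expires after ttl (default 15 min)."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{path}|{tenant_id}|{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{path}?" + urlencode({"tenant": tenant_id, "expires": expires, "sig": sig})

def verify(path: str, tenant_id: str, expires: int, sig: str) -> bool:
    """False (deny with 403) when expired or when any signed field was altered."""
    if time.time() > expires:
        return False
    payload = f"{path}|{tenant_id}|{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Because the tenant ID is inside the signed payload, a URL minted for one tenant cannot be replayed for another, which gives the single-tenant scoping named above.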
Usage Quotas and Lifecycle Policies
Given tenant-level storage quotas are configured, When total stored bytes exceed the soft threshold (e.g., 90%), Then an alert is sent to tenant admins and warnings appear in Nudge Studio. Given the hard quota is reached, When a new upload is attempted, Then the upload is blocked with an explanatory message and a link to manage storage. Given per-asset lifecycle settings, When an expiry date is reached, Then the asset is either archived to lower-cost storage or deleted per policy, with a 7-day recoverable grace period for deletion. Given lifecycle actions occur, When completed, Then entries are recorded in the audit log with actor, timestamp, and action, and affected cards list missing assets for remediation.
Duplicate Detection, Library Search, and Reuse Across Cards
Given a user uploads an asset whose SHA-256 matches an existing one, When the match is detected, Then the user is prompted to reuse the existing asset with visibility into current usage count, and duplicate binary storage is avoided. Given the asset library is opened, When a user searches by filename, tag, content type, or EXIF/metadata fields, Then relevant results are returned with p95 query latency ≤ 300 ms for libraries up to 100k assets. Given an asset is reused across multiple cards, When the source asset is updated to a new version, Then cards continue to reference their pinned version until the owner explicitly updates them, with a diff view available.
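Content-hash deduplication can key the library by the SHA-256 digest so identical bytes are stored once. A Python sketch with an in-memory dict standing in for the asset store:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 digest used as the dedupe key (same hash as intake metadata)."""
    return hashlib.sha256(data).hexdigest()

def ingest(library: dict, filename: str, data: bytes):
    """Return (asset_id, is_duplicate); matching bytes reuse the stored asset."""
    digest = content_hash(data)
    if digest in library:
        return library[digest]["asset_id"], True
    library[digest] = {"asset_id": f"asset-{len(library) + 1}", "filename": filename}
    return library[digest]["asset_id"], False
```

In the product flow, a duplicate hit would prompt the user to reuse the existing asset (showing its usage count) rather than silently aliasing the upload.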
Impact Analytics & Attribution Dashboard
"As an operations lead, I want to see how nudges affect compliance and efficiency so that I can scale what works and retire what doesn’t."
Description

Provide out-of-the-box analytics to measure nudge reach and outcomes: impressions, dismissals, click-throughs, and downstream task completion (e.g., form completion, reduced errors, on-time visits). Support pre/post and cohort comparisons, payer-specific breakdowns, and customizable primary metrics. Attribute impact using rules linking a card view to subsequent events within a configurable window. Surface dashboards, email summaries, and CSV export. Ensure privacy-aware aggregation and tenant isolation. Outcome: stakeholders can quantify effectiveness, prioritize content, and demonstrate ROI.

Acceptance Criteria
Core Reach & Outcome Metrics Availability
Given a date range is selected and at least one card is active, When the Impact Analytics dashboard loads, Then it shows for the selected scope: total impressions, unique viewers, dismissals, clicks, CTR = clicks/impressions (as %), and downstream outcomes (form completions, on-time visits, error corrections) per card and in aggregate. Given new impression/dismissal/click/outcome events are generated, When 15 minutes have elapsed, Then the dashboard reflects the new events and counts match the event store within ±0.5% for counts ≥1,000 or within ±5 events otherwise. Given I change time range or filters, When the dashboard reloads, Then all widgets, charts, and tables update consistently and complete rendering within 3 seconds for up to 90 days of data. Given metric info icons are present, When I open a tooltip, Then it displays the metric definition and calculation formula exactly matching the product documentation.
Pre/Post and Cohort Comparison Analysis
Given I select non-overlapping, equal-duration pre and post windows, When I run a comparison for a chosen card or card group, Then the dashboard shows for the primary metric and outcomes: pre value, post value, absolute delta, percent change, and sample sizes. Given I define treatment (exposed) and control (not exposed) cohorts via filters (payer, location, role) or experiment assignment, When I run the cohort comparison, Then the dashboard displays cohort-level metrics and, when both periods are set, a difference-in-differences delta. Given any cohort/period cell has fewer than k=10 events, When results are displayed or exported, Then the value is suppressed and labeled "insufficient data". Given filters are applied, When I export or share the comparison, Then the resulting artifact preserves the same filters, cohorts, and time windows.
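The difference-in-differences delta with the k=10 suppression rule can be sketched as follows; the per-cohort cell shape is illustrative:

```python
K_MIN = 10  # suppression threshold from the acceptance criteria

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences on per-cohort metric cells.

    Each argument is {"value": metric, "n": sample_size}; any cell under the
    k=10 threshold yields "insufficient data" instead of a number.
    """
    cells = (treat_pre, treat_post, ctrl_pre, ctrl_post)
    if any(c["n"] < K_MIN for c in cells):
        return "insufficient data"
    treatment_delta = treat_post["value"] - treat_pre["value"]
    control_delta = ctrl_post["value"] - ctrl_pre["value"]
    return treatment_delta - control_delta
```

Subtracting the control cohort's change isolates the lift attributable to exposure from background trends shared by both cohorts.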
Payer-Level Drilldowns
Given multiple payers exist, When I select Group by Payer or apply a payer filter, Then the dashboard presents per-payer metrics (impressions, CTR, outcomes) with counts and percentages, sortable by any visible metric. Given I click a payer row, When the drilldown opens, Then I see the top cards for that payer with their impressions, CTR, attributed conversions, and conversion rate. Given a payer has fewer than k=10 events in the period, When grouped results are generated, Then that payer is suppressed or aggregated into "Other" to preserve privacy. Given I export the payer table, When the CSV downloads, Then it includes payer_id, payer_name, applied filters, and all visible columns for the selected time range.
Custom Primary Metric & KPI Pinning
Given I have Manage Analytics permission, When I open dashboard settings, Then I can select a primary metric from [CTR, form completions, on-time visits, error rate reduction] and set a default attribution model [first-touch, last-touch]. Given I change the primary metric, When I save settings, Then all summary widgets and charts recalculate to highlight the chosen primary metric and the choice persists per user across sessions/devices. Given I pin up to 5 KPIs to the top bar, When I reload the dashboard or receive scheduled emails, Then the same KPIs appear in the same order with any configured targets. Given a tenant admin sets a tenant-wide default primary metric, When a new user accesses the dashboard, Then that default is applied until overridden by the user.
Attribution Window & Rule-Based Impact Linking
Given an attribution window (5 minutes to 7 days) and eligible conversion events are configured, When a user views a card, Then any qualifying event by the same user within the window is attributed per the selected model (first-touch or last-touch) and de-duplicated to one conversion per user per card per window. Given a user sees multiple variants in an A/B test, When a conversion occurs, Then attribution is assigned to the appropriate variant per model and experiment assignment and no cross-variant double counting occurs. Given the viewer is an internal tester or the view occurs in a test environment, When attribution is computed, Then those events are excluded from reach and conversion metrics. Given attribution settings are changed, When I view historical data, Then recalculation applies only from the change timestamp forward and prior data are labeled with the previous attribution version.
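First-touch vs. last-touch selection within the attribution window reduces to picking the earliest or latest qualifying view. A Python sketch with epoch-second timestamps (per-user/per-card/per-window de-duplication and test-traffic exclusion are left out for brevity):

```python
def attribute(views, conversion_time, window_seconds, model="last-touch"):
    """Pick the card credited for one conversion.

    views: list of (card_id, view_time) for the same user; only views within
    the configured window before the conversion qualify. Returns card_id,
    or None when no view falls inside the window.
    """
    eligible = [
        (card_id, t) for card_id, t in views
        if 0 <= conversion_time - t <= window_seconds
    ]
    if not eligible:
        return None
    pick = max if model == "last-touch" else min  # first-touch = earliest view
    return pick(eligible, key=lambda v: v[1])[0]
```

Running the same function per experiment variant keeps A/B attribution separated, so no conversion is double-counted across variants.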
Email Summaries & CSV Export Compliance
Given I schedule a weekly summary for Monday 08:00 in my tenant time zone, When the time occurs, Then recipients receive an email within 15 minutes containing the selected time window, primary metric trend, top/bottom 5 cards by lift, notable deltas, and links back to the filtered dashboard. Given I export any analytics table (cards, payers, cohorts), When the export completes, Then the CSV includes headers, ISO 8601 timestamps, tenant_id, applied filters, and rows matching the on-screen sort; counts below k=10 are masked as "<10". Given my role lacks Export or Subscribe permissions, When I attempt those actions, Then the controls are disabled and an explanatory message is displayed. Given I unsubscribe from summaries, When the next schedule runs, Then I receive no email and the unsubscribe event is recorded in the audit log.
Tenant Isolation & Privacy-Aware Aggregation
Given I am authenticated to tenant A, When I access analytics via UI or API, Then only data with tenant_id = A are returned; cross-tenant access attempts return 403 and are logged with user, time, and endpoint. Given any metric slice has fewer than k=10 contributing users or events, When rendering or exporting, Then the value is suppressed or bucketed into "Other" and no raw identifiers are shown. Given dimensions could include user or patient context, When dashboards or exports are generated, Then PII/PHI fields are excluded or irreversibly hashed; only aggregated, non-identifying metadata are included. Given cross-tenant batch jobs execute, When a failure occurs, Then no artifact (email, CSV, cache) contains another tenant’s data; automated tests validate isolation boundaries in CI.
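Small-slice protection via bucketing into "Other" can be sketched like this; the row shape is illustrative, and note the criteria also allow outright suppression instead of bucketing:

```python
K_MIN = 10  # minimum contributors per visible slice

def privacy_rollup(rows: list) -> list:
    """Bucket any slice with fewer than k contributors into "Other"."""
    out, other = [], 0
    for row in rows:
        if row["users"] >= K_MIN:
            out.append(row)
        else:
            other += row["users"]
    if other:
        out.append({"slice": "Other", "users": other})
    return out
```

A stricter implementation would also suppress the "Other" bucket itself when it still falls below k, since a single tiny slice is otherwise recoverable by subtraction.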
Localization & Variant Management
"As a localization manager, I want to manage translations and locale-specific variants so that caregivers receive guidance in their language and context."
Description

Enable multi-language support with key-based strings, media alternates per locale, and automatic fallbacks. Support right-to-left layouts, locale-aware formatting (dates, numbers), and region-specific policy references. Provide translation workflows: export/import (CSV/XLIFF), translation memory, and in-context preview on target devices. Allow audience targeting by locale and create content variants for cultural or payer differences while maintaining a shared version history. Outcome: caregivers receive accurate, culturally appropriate guidance in their preferred language without duplicating content.

Acceptance Criteria
Key-Based Strings with Automatic Locale Fallback
Given a card uses key-based strings with a base locale en-US and translations for es-MX and fr-FR, When a caregiver’s device locale is es-MX, Then 100% of rendered strings use es-MX values with 0 missing keys, And the payload contains no hardcoded literals (key coverage = 100%). When a caregiver’s device locale is pt-BR (unsupported), Then all strings fall back to en-US values, And a localization log records a pt-BR fallback with the count of keys rendered via fallback. When a specific key is missing in es-MX but present in en-US, Then only that key falls back to en-US while other keys remain in es-MX, And the build linter flags the missing translation as Warning. Rule: Localization lookup adds ≤200 ms overhead versus base-only rendering for 100 keys on a mid-tier device.
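The per-key fallback behavior above can be sketched as a lookup with logging. A minimal Python example with an illustrative string catalog (real keys come from the card's string table):

```python
STRINGS = {
    # Illustrative catalog; real entries come from the card's string table.
    "en-US": {"greeting": "Hello", "save": "Save"},
    "es-MX": {"greeting": "Hola"},  # "save" intentionally missing
}
BASE_LOCALE = "en-US"

def t(key: str, locale: str, log: list) -> str:
    """Per-key lookup: only missing keys fall back to the base locale, and
    every fallback is logged so coverage gaps are visible."""
    catalog = STRINGS.get(locale)
    if catalog is None:
        log.append((locale, key, "unsupported-locale"))
        return STRINGS[BASE_LOCALE][key]
    if key in catalog:
        return catalog[key]
    log.append((locale, key, "missing-key"))
    return STRINGS[BASE_LOCALE][key]
```

Counting the logged entries per locale gives exactly the "count of keys rendered via fallback" the criteria ask the localization log to record.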
Locale-Specific Media Alternates and Fallbacks
- Given a card defines locale-tagged media assets (default, es-MX, ar) with localized alt text/captions, When the device locale is es-MX, Then the es-MX media URL is rendered and the alt/caption text is in es-MX.
- When a locale-specific media asset is missing, Then the default media is rendered without a 404 and a fallback event is logged, And CDN cache keys vary by locale so switching locales updates media on refresh within ≤1 second.
- Rule: Client prevents publish if any locale-tagged media exceeds 5 MB for mobile delivery.
- Rule: Subtitle/caption track selection matches the active locale; if missing, the default track is used.
Right-to-Left (RTL) Layout and Bidirectional Text Support
- Given the card is previewed and published in the ar locale, When rendered in Nudge Studio preview and on an Android test device, Then layout direction is RTL: text aligns right, list markers and chevrons are mirrored, and progress arrows flip except semantic direction icons.
- And mixed LTR tokens (numbers, codes) are wrapped with bidi isolation so their order is preserved, And text truncation ellipses appear on the left for RTL strings.
- Rule: All glyphs use fonts that fully cover Arabic script; no tofu; accessibility contrast meets WCAG AA.
Locale-Aware Date, Time, Number, and Currency Formatting
- Given copy contains placeholders {date}, {time}, and {amountUSD}, When locale=en-US and date=2025-09-05T14:30:00-05:00 and amountUSD=12345.67, Then rendered examples include “Sep 5, 2025”, “2:30 PM”, and “$12,345.67”.
- When locale=fr-FR with the same values, Then rendered examples include “5 sept. 2025”, “14:30” (24h), and “12 345,67 $US”.
- Rule: ICU message/plural formats are used; placeholder order is preserved across locales.
- Rule: No string concatenation of formatted parts; each token is formatted per the active locale and timezone.
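A toy illustration of the number-formatting half of this criterion, assuming a tiny hand-rolled per-locale symbol table (a real implementation would use ICU/CLDR data rather than this sketch):

```python
# Illustrative locale-aware number formatting. PATTERNS is a hypothetical
# stand-in for CLDR data; French grouping uses a narrow no-break space.
PATTERNS = {
    "en-US": {"group": ",", "decimal": "."},
    "fr-FR": {"group": "\u202f", "decimal": ","},
}

def format_number(value: float, locale: str) -> str:
    p = PATTERNS[locale]
    # Format with US-style grouping first, then swap in locale symbols.
    s = f"{value:,.2f}"                               # e.g. "12,345.67"
    s = s.replace(",", "\x00").replace(".", p["decimal"])
    return s.replace("\x00", p["group"])

assert format_number(12345.67, "en-US") == "12,345.67"
assert format_number(12345.67, "fr-FR") == "12\u202f345,67"
```

Each token is formatted per the active locale, never assembled by concatenating pre-formatted fragments, matching the second rule above.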
Region- and Payer-Specific Policy References with Fallback
- Given a card includes a policy tag {policy:wound_care} and policies exist for US (national), US-TX, and payer BlueCare, When the caregiver profile is region=US-TX and payer=BlueCare, Then the link resolves to the most specific active document (US-TX + BlueCare), returning HTTP 200 within ≤1.5s.
- When a region- or payer-specific document is missing, Then the link falls back to the next less-specific level (e.g., US national) and logs the fallback.
- Rule: Link label and surrounding text are localized per the active locale.
- Rule: Publish validation fails if no policy is available at any level for the referenced tag.
Translation Workflow: Export/Import, Translation Memory, In-Context Preview
- Given an author selects 3 cards and locales es-MX, ar, and fr-FR, When exporting to CSV and XLIFF, Then files include deterministic keys, source text, developer notes, character limits, and screenshot references.
- When re-importing completed translations, Then only localized fields are updated; base strings remain unchanged; audit history records user, time, and diffs, And translation memory proposes suggestions for segments with similarity ≥85%; accepted suggestions auto-fill and are tracked as TM-reused.
- When previewing in-context on iOS and Android test devices, Then the target locale renders within ≤2 seconds, preserves placeholders, and flags over-limit text visually.
- Rule: Plural categories required by each locale must be provided or publish is blocked.
Audience Targeting and Content Variants with Shared Version History
- Given a base card with variants for es-MX, en-US PayerA, and en-US PayerB, When a caregiver matches locale=en-US and payer=PayerB, Then the en-US PayerB variant is delivered; if no locale-specific variant exists, the base locale is used.
- Rule: Targeting precedence is device locale → payer → experiment; rules cannot conflict.
- Rule: All variants share a unified version history; editing the base creates Version N and flags all variants as “Needs re-translation”.
- Rule: A variant cannot be published while its base is in Draft; the attempt returns a validation error.
- Rule: Analytics attribute impressions and outcomes to variant ID and locale; rollups by base and by variant are available.
A/B Testing & Experiment Orchestration
"As an educator, I want to run experiments on card variants so that I can identify and standardize the most effective guidance."
Description

Add experiment capabilities to create multiple card variants, allocate traffic (fixed split or ramp), and define primary/secondary metrics sourced from CarePulse events. Implement consistent user bucketing, eligibility filters, holdout groups, and mutually exclusive experiment groups to avoid cross-test interference. Provide guardrails (e.g., error rate, latency) and automatic pause criteria. Display experiment readouts with confidence indicators and sample-size progress. Integrate with analytics for attribution and with rollout scheduler for progressive delivery. Outcome: educators learn which content performs best and systematically improve caregiver outcomes.

Acceptance Criteria
Author Creates Experiment With Fixed Split and Ramp Schedule
Given an educator defines two or more variants and a fixed traffic split that totals 100%, When the experiment enters Running state, Then eligible exposures are allocated to each variant within ±1% of the defined split over any rolling window of 10,000 exposures or 24 hours (whichever occurs first). Given a ramp schedule with steps (e.g., 10% -> 50% -> 100%) and times, When a ramp milestone time is reached and all guardrails are green, Then the platform increases traffic to the next step within 5 minutes and records the change in the schedule history. Given preview mode is used for QA, When a user previews a specific variant, Then no exposure, metric, or traffic allocation counters are incremented. Given a variant is manually paused by the educator, When the pause is confirmed, Then its allocated traffic is redistributed proportionally across remaining active variants within 5 minutes and the redistribution is logged. Given an experiment has a scheduled start and end time, When the start time is reached, Then the experiment auto-starts without manual intervention; When the end time is reached, Then the experiment auto-stops and prevents further assignments.
Deterministic User Bucketing and Stickiness Across Devices
Given a caregiver has a stable user_id and an experiment_id, When bucketing occurs, Then the caregiver is assigned deterministically via a salted hash and remains in the same variant across sessions and devices until the experiment ends. Given two experiments run concurrently, When bucketing is performed, Then each experiment uses a distinct salt so assignments are independent across experiments. Given a user has been assigned to a variant, When they become temporarily ineligible due to filters, Then they receive no treatment while ineligible but retain the same assignment if they later regain eligibility. Given QA override headers or roles are present, When a test admin forces a variant, Then the session is excluded from all experiment metrics and exposure counts.
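The deterministic, salted bucketing described above can be sketched like this; the function name and weight-walking scheme are illustrative, not the platform's actual implementation:

```python
import hashlib

def assign_variant(user_id: str, experiment_id: str, salt: str,
                   variants: list, weights: list) -> str:
    """Deterministically map a user to a variant via a salted hash (sketch).

    Same (salt, experiment_id, user_id) always yields the same variant,
    giving stickiness across sessions and devices; a distinct salt per
    experiment decorrelates assignments between concurrent experiments.
    """
    digest = hashlib.sha256(
        f"{salt}:{experiment_id}:{user_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF  # uniform-ish in [0, 1]
    cumulative = 0.0
    for variant, weight in zip(variants, weights):
        cumulative += weight
        if point <= cumulative:
            return variant
    return variants[-1]  # guard against float rounding at the boundary

# Sticky across calls: identical inputs always land in the same bucket.
a = assign_variant("user-42", "exp-1", "salt-1", ["A", "B"], [0.5, 0.5])
b = assign_variant("user-42", "exp-1", "salt-1", ["A", "B"], [0.5, 0.5])
assert a == b
```

Because the hash is keyed only on stable identifiers, no per-user assignment storage is needed to keep the bucket stable until the experiment ends.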
Eligibility Filters and Holdout Group Enforcement
Given inclusion/exclusion filters by organization, payer, form tags, triggers, and locale are configured, When the experiment is running, Then only users and events that match the filters are eligible for assignment. Given a holdout group percentage is configured (e.g., 10%), When eligible users are bucketed, Then the specified percentage are assigned to holdout, receive no treatment cards, and are tracked as control for readouts. Given a user becomes ineligible mid-experiment, When they next trigger an exposure event, Then they receive no treatment and are excluded from new metric contributions; historical contributions remain attributed. Given changes are made to filters or holdout percentage, When the changes are saved, Then they apply only to new assignments and are recorded in the audit log with timestamp, actor, and diff.
Mutually Exclusive Experiment Groups and Conflict Resolution
Given experiments A and B are assigned to the same exclusivity group, When a user is assigned to A, Then that user is ineligible for B until A ends or the user is explicitly removed from A. Given a user simultaneously qualifies for two experiments in the same exclusivity group, When conflict resolution runs, Then the system assigns the user to the higher-priority experiment and logs the decision with experiment IDs and timestamp. Given experiments belong to different exclusivity groups, When eligibility is evaluated, Then a user may be assigned to one experiment per group concurrently. Given an experiment’s exclusivity group is changed, When the change is saved, Then the new rule applies to future assignments without retroactively reassigning existing users.
Guardrail Monitoring and Automatic Pause/Rollback
Given guardrail thresholds are configured (e.g., added latency p95 ≤ 400 ms, error rate ≤ 1.0%, client crash delta ≤ 0.2%), When any threshold is breached for a rolling 15-minute window with at least 1,000 exposures, Then the affected variant is automatically paused, traffic is redistributed, and alerts are sent to owners. Given all variants except holdout are auto-paused by guardrails, When redistribution cannot proceed, Then the experiment is auto-paused and marked At Risk in the dashboard. Given a paused variant recovers below thresholds for 30 consecutive minutes, When an owner manually resumes it, Then guardrail status returns to green and resumption is logged; no auto-resume occurs without human action. Given guardrails are disabled for an experiment, When a breach would have occurred, Then no auto-pause happens but alerts still fire and the dashboard shows a warning badge.
Experiment Readout: Metrics, Confidence, and Sample Size Progress
Given primary and secondary metrics are defined from CarePulse event schemas, When the experiment is running, Then per-variant exposure counts, metric values, absolute and relative lift, and 95% confidence intervals are displayed and update at least every 15 minutes. Given attribution windows are configured (e.g., 7-day click/1-day view), When outcomes occur within the window after first exposure, Then they are attributed to the assigned variant; outcomes outside the window are excluded. Given a minimum sample size or power target is set, When progress is below target, Then the readout shows “Insufficient sample” with progress percentage; when target is met, Then confidence indicators and decision recommendations are shown. Given an educator exports results, When export is requested, Then CSV and JSON files are generated within 60 seconds including definitions, filters, holdout share, time range, and metric estimates.
Progressive Delivery via Rollout Scheduler Integration
Given an experiment is scheduled to start with an initial ramp (e.g., 10%), When the scheduled time arrives, Then the experiment transitions to Running and begins assigning traffic at the initial ramp within 5 minutes. Given a freeze window is configured during critical periods, When the freeze window is active, Then traffic allocations and variant configurations are read-only and attempted changes are rejected with an explanatory error. Given maintenance blackout periods are defined, When a ramp step would occur during a blackout, Then the ramp is deferred to the next available window and the deferral is logged. Given an end date and a metric lag window are configured, When the end date elapses, Then new assignments stop immediately and readouts continue to accrue outcomes only until the lag window closes, after which the experiment is auto-archived.

Miss Heatmap

Aggregates misses by field, payer, team, shift, and device type to reveal friction hot spots. Recommends new cards and quantifies lift in note completeness and correction time, helping leaders target training where it counts.

Requirements

Miss Event Ingestion & Classification
"As a data analyst, I want all visit and note misses to be normalized and classified in near real time so that the heatmap reflects accurate, up-to-date hotspots across teams and payers."
Description

Implement a real-time, fault-tolerant pipeline that captures, normalizes, and classifies “miss” events from scheduling, visit notes, voice-to-text, IoT sensors, device telemetry, and compliance checks. Define a canonical miss schema (type, severity, timestamp, visit/user/team/payer identifiers, device type, shift, field identifier) and rules to detect incomplete fields, missing signatures, late/early check-in/out, sensor discrepancies, and correction cycles. Provide idempotent processing, deduplication, and backfill jobs, with data quality metrics and monitoring to ensure accuracy before downstream aggregation.

Acceptance Criteria
Real-Time Miss Ingestion Latency SLA across Sources
Given live miss-related events are emitted from scheduling, visit notes, voice-to-text, IoT sensors, device telemetry, and compliance checks When the pipeline ingests and classifies these events under steady-state load of up to 50 EPS per tenant (500 EPS fleet-wide) Then the end-to-end time from receipt to persisted classified record is ≤ 5s P95 and ≤ 10s P99, with ≥ 99.9% successful processing and no data loss
Canonical Miss Schema Validation & Normalization
- Given any source event that maps to a miss, When normalized, Then the stored record contains the required fields: type, severity, timestamp (ISO 8601 UTC), visit_id, user_id, team_id, payer_id, device_type, shift, field_id, and schema_version.
- And optional fields are present as null when unknown, And shift is derived from the timestamp and agency shift config.
- And events missing any required field are rejected to the DLQ with a validation error code and the original payload retained.
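A minimal sketch of this validate-or-reject step; the required field list comes from the criterion, while the optional field names and the DLQ envelope shape are assumptions for illustration:

```python
REQUIRED = ["type", "severity", "timestamp", "visit_id", "user_id", "team_id",
            "payer_id", "device_type", "shift", "field_id", "schema_version"]
OPTIONAL = ["sensor_id", "delta_minutes"]  # hypothetical optional fields

def normalize(event: dict):
    """Validate a raw event against the canonical miss schema (sketch).

    Returns ("ok", record) for valid events, or ("dlq", envelope) with a
    validation error code and the original payload retained.
    """
    missing = [f for f in REQUIRED if event.get(f) in (None, "")]
    if missing:
        return "dlq", {"error_code": "MISSING_REQUIRED_FIELD",
                       "missing": missing, "payload": event}
    record = {f: event[f] for f in REQUIRED}
    for f in OPTIONAL:
        record[f] = event.get(f)  # present as null (None) when unknown
    return "ok", record
```

For example, an event lacking `visit_id` routes to the DLQ with the offending field named, while a complete event yields a record whose unknown optional fields are explicitly null.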
Rule-Based Classification: Incomplete Fields & Missing Signatures
- Given a submitted visit note with one or more required fields empty, When classified, Then a distinct miss is created per missing field with type = "incomplete_field" and severity assigned per ruleset, including field_id and visit_id.
- Given a visit note submitted without a captured e-signature from the required signer, When classified, Then a miss is created with type = "missing_signature" and appropriate severity and payer_id.
- Given the caregiver completes the missing field(s) or adds the signature, When reprocessed, Then the original miss is marked corrected with correction_cycle_time computed (in seconds) and correction metadata stored; no duplicate open misses remain for the same field_id/visit_id.
Temporal Classification: Late/Early Check-in/Check-out
- Given a scheduled visit window and actual check-in/out timestamps, When evaluated with default thresholds (late > 5 min, early > 5 min) and payer/team overrides where configured, Then a miss is created with type = "late_check_in", "early_check_in", "late_check_out", or "early_check_out" as applicable, including delta_minutes and tiered severity (e.g., ≥ 15 min = high).
- And timezone/DST handling ensures correctness relative to the visit locale.
- And no miss is emitted if within grace thresholds or if an approved exception flag is present.
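One side of this rule (late check-in) can be sketched as follows, using the criterion's default 5-minute grace and 15-minute high-severity tier; the function name and medium/high severity labels are illustrative assumptions:

```python
def classify_check_in(scheduled_min: int, actual_min: int,
                      late_threshold: int = 5, high_tier: int = 15):
    """Classify a check-in as on-time or late (sketch; times in minutes).

    Defaults mirror the criterion: >5 min late emits a miss, >=15 min is
    tiered high. Early check-out would be the symmetric computation.
    """
    delta = actual_min - scheduled_min
    if delta <= late_threshold:
        return None  # within grace threshold: no miss emitted
    severity = "high" if delta >= high_tier else "medium"
    return {"type": "late_check_in",
            "delta_minutes": delta,
            "severity": severity}
```

In production the timestamps would be timezone-aware datetimes resolved in the visit's locale; plain minute offsets keep the sketch focused on the thresholding logic.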
IoT Sensor Discrepancy Detection & Correlation
- Given presence/location IoT sensor readings associated with a client address, When the difference between the sensor presence window and the recorded check-in/out exceeds 5 minutes or the location mismatch exceeds 100 meters, Then a miss is emitted with type = "sensor_discrepancy" including sensor_id, discrepancy_reason, and the measured deltas.
- And sensor event bursts are debounced within a 60s window to avoid duplicates.
- And if no visit is yet present, the event is held for correlation up to 24h and linked upon backfill; otherwise it is routed to the DLQ with reason = "orphan_sensor_event".
Idempotent Processing, Deduplication, and Ordering Guarantees
- Given duplicate or retried source events with the same dedup_key (source_system_id + source_event_id + event_time), When processed within a 72h dedup window, Then only one classified miss record exists, and processing is idempotent across at-least-once delivery and retries.
- And for a given visit_id, events are applied in event_time order with tolerance for out-of-order arrival up to 10 minutes.
- And replays do not create additional misses or alter correction_cycle_time except when ruleset_version changes.
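The dedup contract above can be sketched with the key composition named in the criterion; the in-memory store is an illustrative stand-in (a real pipeline would likely use a TTL'd external store):

```python
def dedup_key(event: dict) -> str:
    """Compose the dedup key from the criterion's three components."""
    return (f'{event["source_system_id"]}:'
            f'{event["source_event_id"]}:{event["event_time"]}')

class Deduper:
    """In-memory sketch of a 72h dedup window showing the idempotency
    contract; the class name and storage are assumptions for illustration."""

    def __init__(self, window_seconds: int = 72 * 3600):
        self.window = window_seconds
        self.seen = {}  # dedup_key -> first-seen wall-clock time

    def admit(self, event: dict, now: float) -> bool:
        # Evict keys older than the window, then admit only first occurrence.
        self.seen = {k: t for k, t in self.seen.items()
                     if now - t < self.window}
        key = dedup_key(event)
        if key in self.seen:
            return False  # duplicate within window: drop (idempotent)
        self.seen[key] = now
        return True
```

At-least-once delivery then produces exactly one classified miss per key: retries inside the window return `False` and are dropped before classification.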
Backfill, Reprocessing, and Data Quality Monitoring
- Given a requested historical date range for a tenant, When backfill runs, Then it processes ≥ 10k events/minute per worker, respects dedup rules, and catches up to real time with zero data loss.
- And upon classification ruleset_version updates, When reprocessing is triggered, Then new classifications replace prior ones with lineage tracked, avoiding duplicate open misses.
- And the platform exposes metrics: ingestion_count, classified_count, DLQ_count, dedup_rate, P95/P99 latency, and accuracy_sample_precision ≥ 99.5% on a labeled test set.
- And alerts fire within 2 minutes when DLQ_count/hour > 0.1% of input or P99 latency > 10s.
Multi-Dimensional Aggregation Engine
"As an operations manager, I want to slice miss rates by field, payer, team, shift, and device type over time so that I can pinpoint where friction is concentrated."
Description

Build an incremental aggregation layer that computes counts, rates, severity-weighted scores, and time-to-correct distributions for misses across dimensions (field, payer, team, shift, device type) and time windows (hour/day/week/month). Support filters, comparisons (period-over-period, cohort vs. org), and top-N hotspot queries with sub-2s query latency on mobile. Include caching, pre-aggregation for common queries, data freshness indicators, and export endpoints to feed other CarePulse modules and audit reports.

Acceptance Criteria
Mobile Hotspot Query Latency Under 2s
Given a mobile client on 4G/LTE (≥5 Mbps, RTT ≤200 ms) authenticated to an org with up to 24 months of data and ≤250k miss events, When requesting any supported aggregation across one dimension (field/payer/team/shift/device_type) and a window (hour/day/week/month) with optional filters, Then the API responds HTTP 200 with a complete JSON payload within 2.0s p95 and 3.0s p99 server-side over 10,000 requests, and payload size ≤500 KB, and repeated requests with identical parameters return identical results.
Aggregation Accuracy Across Dimensions and Windows
Given a test dataset with known ground-truth visits, misses, and corrections and a severity weight schema w(minor=1, major=3, critical=5), When computing counts, miss rates, severity-weighted scores, and time-to-correct distributions grouped by each dimension and window, Then counts match ground-truth exactly; miss rates differ by ≤0.1% absolute; weighted scores match reference to two decimals; p50/p90/p95 time-to-correct are within ±1 minute of reference; and child group totals roll up to parent totals within rounding tolerance.
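The count, rate, and severity-weighted-score computation for one dimension slice can be sketched using the weight schema w(minor=1, major=3, critical=5) from the criterion; the function signature is an illustrative assumption:

```python
WEIGHTS = {"minor": 1, "major": 3, "critical": 5}  # w() from the criterion

def aggregate(misses: list, visit_count: int) -> dict:
    """Compute count, miss rate, and severity-weighted score for one slice.

    `misses` is a list of dicts with a "severity" key; `visit_count` is the
    denominator for the rate (shape is assumed for illustration).
    """
    count = len(misses)
    weighted = sum(WEIGHTS[m["severity"]] for m in misses)
    rate = round(count / visit_count, 4) if visit_count else 0.0
    return {"miss_count": count,
            "miss_rate": rate,
            "weighted_score": weighted}
```

Rolling child slices up to a parent is then just summing `miss_count` and `weighted_score` and recomputing the rate over the parent's visit count, which is what the roll-up tolerance check above exercises.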
Incremental Updates with Freshness Indicator
Given new miss and correction events arrive up to 50 events/sec, When events are ingested, Then incremental aggregates update and are queryable within 5 minutes p95 of event time; late-arriving events up to 7 days old backfill within 15 minutes; deletions/updates are reflected exactly-once; and a "Last updated" timestamp equals the max processed event time within ±60 seconds.
Filter, Period-over-Period, and Cohort vs Org Comparisons
Given the aggregates endpoint supports parameters dimension, time_window∈{hour,day,week,month}, date_range, filters {field_ids[], payer_ids[], team_ids[], shifts[], device_types[]}, and compare {period_over_period, cohort_vs_org}, When no filters are provided, Then results are org-wide. When multiple values are provided for a single filter, Then they are ORed within the filter and ANDed across different filters. When period_over_period is requested, Then current, previous, absolute_change, and percent_change are returned using the immediately preceding equal-length period and identical denominators, rounded to two decimals. When cohort_vs_org is requested with team_ids specified, Then cohort and org metrics plus delta and percent_delta are returned using org as baseline. Then all comparison queries meet ≤2.0s p95 latency.
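The period-over-period payload described above reduces to a small calculation; this sketch assumes the two-decimal rounding the criterion specifies and a null percent change when the previous period's value is zero:

```python
def period_over_period(current: float, previous: float) -> dict:
    """Current vs the immediately preceding equal-length period (sketch).

    Returns absolute and percent change rounded to two decimals; percent
    change is None when the baseline is zero (an assumed convention).
    """
    absolute = round(current - previous, 2)
    percent = round(100.0 * absolute / previous, 2) if previous else None
    return {"current": current, "previous": previous,
            "absolute_change": absolute, "percent_change": percent}
```

The cohort-vs-org comparison is the same shape with the org-wide metric as the baseline instead of the prior period.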
Top-N Hotspot Ranking by Weighted Score and Miss Rate
Given a request with top_n n∈[3,50], metric∈{weighted_score, miss_rate}, and dimension∈{field,payer,team,shift,device_type}, When executed for a valid date_range and time_window, Then the response returns exactly n items (or all if <n) sorted by metric desc, tie-broken by miss_count desc then name asc; each item includes miss_count, visit_count, miss_rate, weighted_score, avg_time_to_correct, p90_time_to_correct, and rank; ordering is stable for identical queries; and latency is ≤2.0s p95.
Pre-aggregation and Caching for Common Queries
Given a configured set of common queries (top_n by each dimension for last week and last month, and weekly period_over_period org-wide), When the system is warm under a 24h production-like load of 10,000 queries, Then cache hit rate ≥70% and p95 latency for cache hits ≤800 ms. When a new batch of events is ingested, Then affected cached/pre-aggregated entries are invalidated or refreshed within 2 minutes. After a cold start, Then initial pre-aggregation for the last 30 days completes within 15 minutes for an org with ≤100k visits and ≤10k misses and queries do not serve partial results without a clear partial indicator. Then cache memory usage stays within configured limits and is observable via metrics.
Export Endpoints for Aggregates and Audit Reports
Given an authenticated request to /exports/miss-aggregates with format∈{json,csv}, date_range, time_window, dimension, filters, and include_comparisons, When the request targets up to 100k rows, Then the service streams the file with HTTP 200 within 60 seconds, sets Content-Type and Content-Disposition, and includes a metadata header row/section (generated_at UTC, time_window, date_range, filters, freshness_timestamp, version). Then exported aggregates match API/on-screen aggregates for identical parameters within the accuracy tolerances. Then large exports paginate or chunk with file size ≤50 MB and support resume via continuation token. Then access requires scope "reports:read" and emits an audit log entry.
Interactive Heatmap & Drilldowns
"As a field supervisor, I want an interactive heatmap with drilldowns to specific visits so that I can quickly identify root causes and coach staff."
Description

Deliver a responsive heatmap UI that visualizes hotspot intensity with accessible color scales, tooltips, and trend indicators. Enable filtering by date range, dimension, miss type, and severity; provide drilldowns from cells to affected visits, users, and raw miss events with breadcrumb navigation. Support saved views, shareable links respecting role-based access, and offline-ready snapshots for field supervisors. Ensure mobile-first performance (<1.5s initial render on median devices) and WCAG AA accessibility.

Acceptance Criteria
Responsive & Accessible Heatmap UI
- Given an authenticated user opens the Miss Heatmap on a mobile device (320–768px width), When the heatmap loads, Then the heatmap grid and color legend render without layout overflow beyond the grid.
- And each cell is tappable/clickable and shows a tooltip on tap/click or on keyboard focus, And tooltips remain within the viewport, include miss count and trend indicator, and close on blur or Escape.
- And the color scale is color-blind-safe and each cell exposes a numeric value label visible on focus/hover.
- And all interactive elements are reachable via keyboard with a visible focus indicator.
- And interactions meet WCAG 2.2 AA criteria, including 1.4.1, 1.4.3, 1.4.11, 2.1.1, 2.4.3, 2.4.7, 1.3.1, and 4.1.2.
Filtering by Date Range, Dimension, Miss Type, and Severity
- Given the default date range is the last 30 days and no filters are applied, When the user applies any combination of filters (date range, dimension, miss type, severity), including multi-select where applicable, Then the heatmap updates to reflect only records matching the filters.
- And active filters are displayed as removable chips and persist across navigation within the session.
- And clearing all filters resets the heatmap to the default state.
- And the selected dimension determines the grouping shown on the heatmap axes.
Cell Drilldowns with Breadcrumb Navigation
- Given a populated heatmap cell is visible, When the user selects the cell, Then a drilldown view opens listing affected visits by default with the total count and essential attributes.
- And the user can switch tabs to Users and Raw Miss Events to see related records.
- And a breadcrumb trail displays Heatmap > [Dimension/Value] > [Drilldown].
- And selecting a breadcrumb level navigates back while preserving previously applied filters and the user’s scroll position.
Trend Indicators per Cell
- Given a date range is applied, When the heatmap renders each cell, Then the cell shows a trend arrow (up/down/flat) and percent change versus the immediately preceding equal-length period.
- And the trend includes an accessible text alternative (e.g., "Up 12% versus prior period").
- And if no prior period exists or the prior count is zero, the cell displays "No prior data" without an arrow.
Saved Views & Shareable Links Respect RBAC
- Given a user configures filters, dimension, and date range on the heatmap, When the user saves the configuration as a named view, Then the view is added to Saved Views and restores the exact state when loaded.
- And when the user generates a shareable link for the current view, Then opening the link loads the encoded view state for any viewer.
- And only data permitted by the viewer’s role-based access is displayed; unauthorized viewers see an access-denied state with no data.
Offline-Ready Snapshot for Field Supervisors
- Given a user with the Supervisor role is online and viewing the heatmap, When the user taps "Download Snapshot", Then a snapshot of the current heatmap (legend, cell values, active filters, timestamp) is cached for offline use.
- And when the device is offline, the user can open the snapshot from the Miss Heatmap screen.
- And the snapshot renders the heatmap with static cell labels; tooltips, drilldowns, and sharing are disabled.
- And a banner indicates "Viewing offline snapshot from [timestamp]".
Mobile Initial Render Performance (<1.5s)
Given a representative median device per the team’s performance profile on a 4G network profile (~100 ms RTT, ~10 Mbps) When the user first opens the Miss Heatmap with a typical dataset and a cold cache Then the initial heatmap render with interactive filters completes within 1.5 seconds from navigation start
Hotspot Threshold Alerts & Subscriptions
"As a QA lead, I want to set thresholds and subscribe to hotspot alerts so that I’m notified early when metrics exceed acceptable limits."
Description

Add configurable thresholds and anomaly detection for hotspot metrics with in-app, email, and push notifications. Allow users to subscribe to dimensions (e.g., payer X, team Y, night shift) and set schedules (immediate, daily digest, weekly). Include noise controls (hysteresis, cooldowns, suppression windows), routing rules by role, and one-click navigation from alerts to the corresponding heatmap view. Log alert deliveries and outcomes for auditing.

Acceptance Criteria
Threshold-Based Alerting by Dimension
- Given an admin configures a miss-rate threshold alert for Payer X + Team Y + Night Shift + Device Type Android with T_open = 15%, And selects delivery channels in-app, email, and push, And at least one user is subscribed to that dimension, When the computed miss rate for that exact dimension crosses above 15% during the selected evaluation window, Then an alert event is created and delivered via all selected channels to subscribed users within 5 minutes of detection.
- And the alert body contains the metric name, current value, threshold, dimension labels, detection timestamp, and a deep link to the Miss Heatmap pre-filtered to that dimension and time window.
Anomaly-Detection Alerts
- Given anomaly detection is enabled for the metric Note Correction Time on Team A + Night Shift with severity threshold = High, And at least one user subscribes to anomalies on that dimension with schedule = Immediate, When the anomaly detection service emits an anomaly_detected event for that dimension with severity High or higher, Then an alert is delivered to the subscribed users within 5 minutes via their selected channels.
- And if the event severity is below High, no alert is sent.
- And the alert displays the observed value, baseline value, percent deviation, and a deep link to the Miss Heatmap pre-filtered to the same dimension and lookback window.
Subscriptions and Schedules: Immediate, Daily, Weekly
- Given a user subscribes to Payer X and Team Y with Immediate alerts enabled, a Daily digest at 08:00, and a Weekly digest on Mondays at 09:00 in the user’s time zone, When qualifying threshold or anomaly events occur, Then immediate alerts are delivered within 5 minutes of detection.
- And the daily digest aggregates all events from 00:00–23:59 of the previous day and is delivered at 08:00 local time.
- And the weekly digest aggregates events from Monday–Sunday of the prior week and is delivered at 09:00 local time on Monday.
- And the user can unsubscribe per dimension or pause all alerts; paused dimensions generate no alerts and show status = Paused in subscriptions.
Noise Controls: Hysteresis, Cooldowns, Suppression Windows
- Given thresholds are configured with T_open = 15% and T_close = 12% for the miss rate on Payer X + Team Y + Night Shift, And a cooldown of 60 minutes and a suppression window of 21:00–06:59 are set in the org’s time zone, When the metric crosses above 15% and an alert is sent, Then subsequent samples that remain between 12% and 15% do not trigger additional alerts.
- And no additional alert is sent for that dimension until after the 60-minute cooldown and a new crossing above 15% occurs.
- And any alerts that would occur during 21:00–06:59 are suppressed from email/push and included in the next eligible digest or delivered when the suppression window ends, with the suppression noted in the alert metadata.
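The hysteresis-plus-cooldown behavior can be sketched as a small state machine; the class name and `observe` API are illustrative assumptions, with defaults matching T_open = 15%, T_close = 12%, and a 60-minute cooldown:

```python
class HysteresisAlert:
    """Sketch of threshold alerting with hysteresis and cooldown.

    Fires on crossing above t_open; samples between t_close and t_open do
    not re-fire; re-arming requires dropping below t_close, and a new
    crossing within the cooldown is still suppressed.
    """

    def __init__(self, t_open: float = 0.15, t_close: float = 0.12,
                 cooldown_s: int = 3600):
        self.t_open, self.t_close, self.cooldown = t_open, t_close, cooldown_s
        self.open = False        # True while the alert condition is latched
        self.last_fired = None   # wall-clock time of the last alert

    def observe(self, value: float, now: float) -> bool:
        """Return True iff this sample should emit an alert."""
        if self.open:
            if value < self.t_close:
                self.open = False  # dropped below T_close: eligible to re-arm
            return False           # never re-fire while latched or in-band
        if value > self.t_open:
            if (self.last_fired is not None
                    and now - self.last_fired < self.cooldown):
                return False       # new crossing, but still cooling down
            self.open = True
            self.last_fired = now
            return True
        return False
```

The suppression window would sit one layer above this: a fired alert during 21:00–06:59 is diverted from email/push into the next digest rather than dropped.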
Role-Based Routing and Access Controls
- Given routing rules map Miss Rate alerts to Operations Managers and Correction Time alerts to Team Leads, And only users with permission to view the Miss Heatmap for Team Y are eligible recipients, When an alert is generated for Team Y, Then only subscribed users in eligible roles receive the alert.
- And users lacking permission to the dimension receive no alert, and the event is logged as suppressed by ACL.
- And recipients who belong to multiple eligible roles receive only one alert per channel (deduplicated).
One-Click Deep Link to Heatmap Context
- Given an alert includes a deep link to the Miss Heatmap, When a recipient opens the link from in-app, email, or push, Then the app opens to the Miss Heatmap with filters applied for metric, payer, team, shift, device type, and the relevant time window.
- And the view focuses the corresponding cell/row and displays an Alert ID context banner.
- And if the user is not authenticated, the app prompts for login and then completes navigation to the same filtered view.
- And if a filter no longer exists (e.g., a deleted team), the heatmap loads at the nearest valid scope with a notice of the fallback.
Alert Delivery and Outcome Audit Log
- Given alerting is enabled, When any alert is generated, attempted, delivered, opened, clicked, or suppressed, Then an immutable audit record is stored containing the alert ID, type (threshold/anomaly/digest), metric, dimension values, recipient user IDs and roles, channel, event timestamps, provider response IDs, and outcome (sent, delivered, opened, clicked, failed, suppressed).
- And audit records are viewable and filterable by date range, dimension, recipient, channel, and outcome in the admin UI.
- And audit records are exportable to CSV via the admin UI.
Intervention Recommendations (Smart Cards)
"As a training manager, I want the system to recommend targeted workflow cards per hotspot so that caregivers get just-in-time guidance that reduces misses."
Description

Generate data-driven recommendations that propose targeted workflow cards and micro-coaching content for hotspots (e.g., payer-specific documentation tips, device checklists, shift-based reminders). Rank recommendations by expected impact and effort, show rationale (top contributing fields/teams), and enable one-click publish to CarePulse surfaces (home, pre-shift, in-note). Capture user feedback and outcomes to refine models and avoid recommendation fatigue.

Acceptance Criteria
Hotspot-Triggered Recommendation Generation
- Given Miss Heatmap detects a hotspot with ≥20 misses in the last 7 days for a specific payer/shift/team/field/device type, when an Operations Manager opens the Recommendations tab, then the system generates 3–10 Smart Card recommendations targeted to that context within ≤5 seconds.
- Each recommendation includes: title, target surface(s) [home | pre-shift | in-note], audience filters (team, shift, payer, device type), micro-coaching content ≤300 characters and optional 30–60 second voice clip, and a device checklist if device-related.
- Each recommendation displays Impact (0–100), Effort (0–100), and a computed Priority rank; list is sorted by Priority descending with deterministic tie-breaks.
- Each recommendation shows rationale with the top 3 contributing fields/teams and their miss contribution percentages that together account for ≥80% of hotspot misses.
- Users can drill to ≥5 sample miss instances with timestamps and anonymized caregiver IDs.
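The Impact/Effort ranking with deterministic tie-breaks could look like the sketch below. The weighting formula is an assumption; the criteria only require a computed Priority and a stable, repeatable order:

```python
# Sketch of Priority ranking: higher impact and lower effort rank first,
# with card ID as a deterministic tie-break. The 0.5 effort weight is an
# illustrative assumption, not a specified formula.
def priority(card: dict) -> float:
    return card["impact"] - 0.5 * card["effort"]

def rank(cards: list[dict]) -> list[dict]:
    # Sort by Priority descending; ties break by card id so repeated runs
    # over the same data always produce the same order.
    return sorted(cards, key=lambda c: (-priority(c), c["id"]))

cards = [
    {"id": "c2", "impact": 80, "effort": 40},
    {"id": "c1", "impact": 80, "effort": 40},  # exact tie with c2
    {"id": "c3", "impact": 90, "effort": 10},
]
ranked = rank(cards)
```

Tie-breaking on a stable identifier is what makes the ordering deterministic across refreshes, which matters when managers compare recommendation lists over time.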
One-Click Publish to CarePulse Surfaces
- Given a user with Publish permission selects 1–5 recommendations, when clicking Publish, then publish completes in ≤2 seconds per recommendation and a success confirmation is shown.
- Published cards appear for targeted caregivers: Home within ≤60 seconds, Pre-shift in the next scheduled shift’s queue, and In-note when the relevant field is focused.
- Publish flow supports scheduling (start/end), audience scoping (team(s), shift(s), payer(s), device type), and surface-specific previews before confirm.
- An audit log entry is created per publish with user, timestamp, audience size, surfaces, and card version; retrievable in Admin > Audit within ≤10 seconds.
Feedback Capture and Fatigue Control
- Caregivers can rate each card as Useful or Not Useful and optionally add a comment up to 280 characters; the feedback UI loads in ≤500 ms.
- Users can Dismiss or Snooze (24h/7d); snoozed cards do not reappear before expiration.
- Per user, new recommendation exposures are capped at ≤3 per shift; no duplicate or near-duplicate cards (≥85% text similarity) within 14 days.
- Feedback records include user ID, card ID, timestamp, surface, and rating; 99% durability and available to analytics within ≤15 minutes.
- If a card receives <30% Useful after ≥50 views, its Priority is auto-downgraded and a deprecation suggestion is shown to managers.
- Feedback and outcome signals adjust recommendation rankings/models within ≤7 days; admins can see the new model version and change log.
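The near-duplicate rule (≥85% text similarity within 14 days) can be approximated with a plain character-level similarity ratio. `difflib` here stands in for whatever similarity measure the product actually uses, which the spec does not name:

```python
# Sketch of the near-duplicate check using difflib's similarity ratio
# (0..1). A production system might use embeddings or shingling instead.
from difflib import SequenceMatcher

def is_near_duplicate(a: str, b: str, threshold: float = 0.85) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

dup = is_near_duplicate("Check device battery before each visit",
                        "Check device battery before every visit")
distinct = is_near_duplicate("Verify payer authorization on file",
                             "Check device battery before each visit")
```

A card that trips this check against anything shown to the same user in the last 14 days would be withheld, which is one concrete way to enforce the exposure cap without repeating near-identical guidance.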
Outcome Measurement and Lift Attribution
- For each published card, the system computes changes in note completeness rate and average correction time using a 14-day pre-publish baseline versus a 14-day post-publish evaluation window.
- Metrics display absolute and relative deltas with sample sizes; 95% CIs shown when n≥100, otherwise flagged Low Confidence.
- Lift attribution uses difference-in-differences against matched control cohorts when available; method and cohort sizes are displayed.
- Outcome reports refresh daily by 03:00 local time; on-demand refresh allowed ≤1 time per hour with results in ≤3 minutes.
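The difference-in-differences attribution reduces to subtracting the control cohort's pre/post change from the exposed cohort's change. A minimal sketch of the arithmetic:

```python
# Difference-in-differences: the exposed cohort's pre/post change minus
# the matched control cohort's pre/post change. Rates are fractions.
def did_lift(exposed_pre: float, exposed_post: float,
             control_pre: float, control_post: float) -> float:
    """Return the DiD estimate of lift attributable to the intervention."""
    return (exposed_post - exposed_pre) - (control_post - control_pre)

# Example: completeness rose 8 pp for exposed caregivers but 2 pp for the
# matched control cohort, so 6 pp is attributed to the card.
lift = did_lift(0.80, 0.88, 0.81, 0.83)
```

Subtracting the control change is what strips out background trends (seasonality, unrelated process changes) that would otherwise inflate a naive pre/post comparison.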
Explainability and Rationale Transparency
- Each recommendation includes a Rationale panel listing top 3 contributing fields/teams/shifts/payers with their miss shares and links to representative records.
- A data freshness timestamp is shown and is ≤24 hours old; if older, a stale indicator appears and generation is blocked until refresh.
- Users can expand to view top 10 feature importances and the model version used; a link to explanation docs is provided.
Permissions, Privacy, and Compliance Controls
- Only Operations Manager (or higher) roles with Publish permission can publish; caregivers see only cards targeted to them; unauthorized actions return 403 with no PHI leakage.
- Displays and exports redact PHI to the minimum necessary; anonymized caregiver identifiers are shown by default.
- All generate/view/publish/archive actions are immutably logged and retained for ≥7 years.
- Data in transit uses TLS 1.2+ and at rest AES-256; access logs are accessible to compliance within 1 business day of request.
Reliability, Performance, and Error Handling
- Recommendation generation success rate ≥99% over rolling 24 hours; latency ≤2s P50 and ≤5s P95.
- On failure, users see a clear error with a retry option and correlation ID; automatic retries occur up to 2 times with exponential backoff.
- When no hotspot meets thresholds, a labeled fallback set of best-practice cards is shown instead of an empty state.
- Under 3× normal load, the system remains responsive with no timeouts >10 seconds and maintains data consistency.
Outcome Lift Measurement & Reporting
"As a compliance director, I want to quantify lift in completeness and correction time after interventions so that I can justify training investments and satisfy auditors."
Description

Provide built-in experimentation and cohort analysis to quantify lift from deployed interventions: define baselines, compare pre/post and control vs. exposed groups, and report changes in note completeness, correction time, and miss severity. Include confidence indicators, trend charts, payer/team breakouts, and one-click export to audit-ready PDFs/CSVs and compliance report packs used elsewhere in CarePulse.

Acceptance Criteria
Baseline and Cohort Definition
Given an Ops Manager selects a date range, inclusion rules (payer, team, shift, device type), and metrics (note completeness %, median correction time, miss severity index) for a new experiment baseline When they click Save Baseline Then the system locks and versions the baseline with a unique ID, stores cohort rules, metric definitions, and timestamp And the baseline metrics snapshot is computed within 60 seconds and matches recalculation within ±0.1 percentage points (completeness) and ±1 minute (correction time) And the UI displays the saved Baseline ID and period
Pre/Post and Control vs Exposed Lift Calculation
Given an intervention start date and exposure rules are defined for Exposed and Control groups When I run a Pre vs Post comparison for both groups Then the system computes absolute and relative lift for note completeness %, median correction time (minutes; improvement shown as negative), and miss severity index And displays 95% confidence intervals and p-values, indicating significance when p < 0.05 And shows N for each cohort/period and excludes incomplete records per documented rules And CSV export reproduces on-screen aggregates within ±0.1 pp and ±1 minute
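The criteria require a 95% CI and a p-value but leave the statistical method open; one plausible reading for note completeness is a two-proportion z-test, sketched here under that assumption:

```python
# Two-proportion z-test sketch for a completeness-rate lift (p2 - p1).
# x = completed notes, n = total notes, per period/cohort.
import math

def two_proportion_test(x1: int, n1: int, x2: int, n2: int):
    """Return (lift, (ci_low, ci_high), p_value) for p2 - p1."""
    p1, p2 = x1 / n1, x2 / n2
    lift = p2 - p1
    # Unpooled standard error for the 95% confidence interval.
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    ci = (lift - 1.96 * se, lift + 1.96 * se)
    # Pooled standard error for the hypothesis test.
    p = (x1 + x2) / (n1 + n2)
    se0 = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = lift / se0 if se0 else 0.0
    # Two-sided p-value from the normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return lift, ci, p_value

# Example: 70% completeness pre vs 78% post, n=1000 each.
lift, ci, p_value = two_proportion_test(700, 1000, 780, 1000)
```

Median correction time would need a different test (e.g., a rank-based method), since medians of skewed durations are not well served by a normal approximation.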
Confidence Indicators and Sample Guardrails
Given a lift analysis is configured When Exposed N < 30 or Control N < 30 or either period length < 7 days Then the significance badge is suppressed and an “Insufficient sample” tooltip is shown And confidence intervals are still displayed and labeled Low Confidence And a recommendation banner suggests extending the window or broadening filters
Trend Charts with Drilldowns
Given filters and comparison settings are applied When I view the Trends tab Then the chart renders per selected granularity (day/week/month) using the organization timezone, with Pre and Post segments clearly delineated And tooltips show metric value, N, and CI bounds for each point And toggling a breakout or metric updates the chart within 500 ms (cached) or 3 s (uncached) And clicking a data point opens a drilldown table for that bucket with matching counts
Payer and Team Breakouts
Given payer/team breakouts are enabled When I select multiple payers and teams Then tables and charts display separate series/rows per selection and an All row And weighted aggregates match the All row within ±0.1 pp (completeness) or ±1 minute (correction time) And sorting, pagination, and search operate across the filtered set And exporting includes payer/team columns and preserves sort order
One-Click Export to PDF/CSV and Compliance Packs
Given an analysis view is configured with filters and cohorts When I click Export PDF, Export CSV, or Send to Compliance Pack Then files generate within 10 s for ≤ 100k rows and within 60 s for ≤ 2M rows And the PDF includes headers, applied filters, cohort definitions, statistical method, timestamps, and signature block, formatted for A4/Letter And the CSV row counts and aggregates match the on-screen totals And a Compliance Pack entry is created and visible in Reports with the correct metadata and attachments
Privacy-Preserving Analytics Layer
"As a security officer, I want analytics to use de-identified, access-controlled aggregates so that insights remain compliant and protect PHI."
Description

Enforce HIPAA-aligned analytics practices: aggregate or de-identify data for heatmap views, apply minimum cell-size thresholds, and mask PHI in UI and exports. Implement role-based access controls, per-tenant data isolation, encryption in transit/at rest, audit logs for access and export, and configurable data retention. Provide a privacy impact configuration panel to align with agency policies without degrading analytic utility.

Acceptance Criteria
Minimum Cell-Size Threshold Enforcement
Given tenant T has set a minimum cell size threshold of N in Privacy Settings And a heatmap query returns one or more cells with counts less than N When the heatmap is rendered Then each such cell displays as the masked string "<N" and shows no raw count And drill-down and row-level exports for those masked cells are disabled And grand totals and subtotals do not permit back-calculation of any count less than N When the same query is exported Then the export file contains masked values for those cells and excludes row-level detail for them When a user changes filters or date ranges Then the threshold is enforced consistently for all resulting cells
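The cell-size rule can be sketched as a masking pass over the aggregated counts. Note this shows primary suppression only; a full implementation would also suppress complementary cells so subtotals cannot reveal a masked count by subtraction, as the criteria require:

```python
# Sketch of minimum cell-size masking: counts below the tenant threshold N
# render as "<N" and would be excluded from row-level export.
def mask_cells(cells: dict, n_min: int) -> dict:
    """Map cell key -> displayed value, masking small counts."""
    return {k: (v if v >= n_min else f"<{n_min}")
            for k, v in cells.items()}

masked = mask_cells(
    {("PayerX", "night"): 42, ("PayerY", "day"): 3},
    n_min=11,
)
```

Applying the same function to UI rendering and to export serialization keeps the two surfaces consistent, which is what the export clause of this criterion checks.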
PHI Masking in Analytics UI and Exports
Given the Miss Heatmap and related analytics pages When results are displayed Then no direct identifiers are shown, including patient or caregiver full name, phone, email, date of birth, SSN, medical record number, street address, or voice transcript content And any configured quasi-identifiers are generalized or removed according to policy (for example, age to age band, ZIP to 3-digit where required) When an export is generated Then the export contains only de-identified fields and aggregated metrics required for the view And a PHI dictionary scan of the export returns zero matches for blocked fields When a user attempts to include PHI columns via a column picker Then those columns are not available to select
Role-Based Access Control for Analytics
Given roles and permissions exist: Analytics.View, Analytics.Export, Privacy.Configure When a user without Analytics.View navigates to Miss Heatmap Then the navigation entry is hidden and direct URL access returns 403 When a user with Analytics.View but without Analytics.Export opens Miss Heatmap Then the Export control is not visible and export API calls return 403 When a user with team-scoped access views Miss Heatmap Then only data for their assigned teams and payers is included When a permission is revoked Then access changes take effect within 5 minutes without requiring user re-login
Per-Tenant Data Isolation
Given a multi-tenant environment with tenants A and B And a unique sentinel record exists in tenant B When an authenticated user from tenant A runs any Miss Heatmap query or export Then the sentinel record and all tenant B data are absent from results And cross-tenant API access using tenant A credentials returns 403 or 404 for tenant B resources When cached analytics are served Then caches are tenant-scoped and never mix tenant A and tenant B data When synthetic cross-tenant probes are executed Then no data from other tenants is returned
Encryption In Transit and At Rest
Given the analytics services and data stores When traffic is initiated to or from the analytics API Then TLS 1.2 or higher is enforced and plaintext HTTP requests are rejected or redirected to HTTPS When database volumes and object storage used for analytics and exports are inspected Then encryption at rest is enabled with AES-256 or provider-equivalent When keys are rotated via the key management service Then services continue to function and new data is encrypted with the new keys When a TLS scan is performed against the analytics endpoints Then only strong cipher suites are accepted and certificates are valid and unexpired
Audit Logging for Analytics Access and Export
Given a user views Miss Heatmap, changes privacy settings, or generates an export When each action occurs Then an immutable audit event is recorded within 5 seconds containing tenant_id, user_id, action type, resource, timestamp in UTC, request origin, applied filters, and outcome And audit events are write-once and tamper-evident, and deletions are not permitted via application APIs When an admin queries the audit log for the action Then the event can be retrieved by time range and correlated to the generated export file identifier if applicable When audit logging is unavailable Then the action is denied and the user is shown a non-destructive error
Privacy Impact Configuration Panel
Given an admin with Privacy.Configure permission opens the Privacy Impact panel When they adjust minimum cell size, PHI masking rules, export permissions, and data retention windows Then the UI displays a real-time preview estimating the percentage of heatmap cells that will be masked and the expected change in note completeness reporting And changes require confirmation via Save and are versioned with who, when, and a change summary And the new configuration applies to analytics within 10 minutes and is recorded in the audit trail When retention is set to R days Then a nightly job purges or irreversibly anonymizes analytics records older than R days, excluding audit logs if policy requires When retention is decreased Then a dry-run impact count is shown before confirmation

Audit Trace

Links each nudge to the resulting correction, time/location context, and final EVV record. One-click exports provide a clean evidence trail for QA and payer audits, proving timely corrective action and lowering denial risk.

Requirements

Nudge-to-Correction Linkage
"As a QA reviewer, I want each nudge automatically linked to the exact corrective actions and final EVV record so that I can verify compliance quickly and defend claims during audits."
Description

Automatically bind each system-generated or manual nudge to its subsequent corrective action(s) and the final EVV event to form a complete, canonical evidence chain. Listen to events from scheduling, EVV, documentation, routing, and mobile actions, matching by visit, caregiver, time window, and geofence tolerances. Support multi-step remediation, partial fixes, superseded actions, and retries, maintaining normalized relationships and versioning. Output linkage objects consumable by UI timelines and export services, delivering end-to-end traceability that speeds QA validation and reduces claim denials.

Acceptance Criteria
Happy Path: Single Nudge → Single Correction → Final EVV
Given a system-generated nudge N for Visit V assigned to caregiver C at timestamp t0 with reason "Missed Clock-In" and configured tolerances T_time_window_minutes and T_geofence_meters And a scheduled visit window W_start..W_end and a visit geofence G When a corrective action A of type "Clock-In Correction" by caregiver C occurs at timestamp t1 within W_start - T_time_window_minutes .. W_end + T_time_window_minutes and at location within T_geofence_meters of G And a final EVV record E for Visit V with status "Verified" is produced at timestamp t2 Then the system creates exactly one linkage object L linking N -> [A] -> E with visit_id=V and caregiver_id=C And L includes ordered timestamps [t0, t1, t2], location context for N and A, and outcome="Resolved" And L is persisted and available to UI timeline and export services within 5 seconds of E creation And no duplicate linkage objects exist for N
Multi-Step Remediation: Ordered Actions Resolve Nudge
Given a nudge N for Visit V and caregiver C at timestamp t0 When corrective actions A1 "Route Update" at t1, A2 "Supervisor Approval" at t2, and A3 "Clock-In Correction" at t3 occur in that order within configured tolerances And a final EVV record E with status "Verified" is produced at t4 Then one linkage L is created with N -> [A1, A2, A3] -> E And L records actions in strict order with sequence indexes 1..3 and action statuses "Applied" And L outcome="Resolved" and resolution_steps=3 And L computes latency metrics: time_to_first_action=t1 - t0 and time_to_resolution=t4 - t0
Superseded Actions and Versioning
Given a nudge N for Visit V at timestamp t0 And corrective action A1 "Clock-In Correction" is submitted at t1 with value X When corrective action A2 "Clock-In Correction" is submitted at t2 for the same field with value Y that supersedes A1 Then linkage L contains both actions with A1.status="Superseded" and A1.superseded_by=A2.id And L.version increments by 1 upon A2 application and preserves immutable history of A1 And computed resolution uses A2 values only And if a final EVV E for V is produced after t2, L links to E and outcome="Resolved"; otherwise L.outcome="Pending Correction"
Idempotent Retries and Duplicate Prevention
Given a nudge N for Visit V and caregiver C and an idempotency key K for corrective action payload P When the client submits P with key K multiple times (n >= 2) due to retries Then exactly one corrective action entity A is stored And A is linked exactly once within linkage L for N And subsequent identical submissions return the same action_id=A.id and success response without creating new records And linkage metrics (counts, durations) remain unchanged by retries
Cross-Source Matching and Disambiguation
Given events related to Visit V and caregiver C exist across scheduling, EVV, documentation, routing, and mobile sources within visit window W and geofence G When a nudge N is generated and corrective actions occur across these sources Then the matcher links events into the same chain only when all apply: visit_id=V, caregiver_id=C, timestamp within W extended by T_time_window_minutes, and location (if present) within T_geofence_meters of G And when multiple candidate visits match, the system selects the candidate with the smallest absolute timestamp difference to the nudge; if tied, selects the one with the highest confidence score And the linkage records confidence_score >= configured threshold and the applied tie_breaker And non-selected candidates are not linked and are logged with reason="Ambiguous—Not Selected"
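The candidate-selection rule above (smallest absolute timestamp difference, confidence score as tie-breaker) maps directly onto a lexicographic sort key. Field names are illustrative:

```python
# Sketch of visit disambiguation: choose the candidate with the smallest
# |Δt| to the nudge; on a tie, the highest confidence score wins.
def pick_candidate(nudge_ts: float, candidates: list[dict]) -> dict:
    return min(candidates,
               key=lambda c: (abs(c["ts"] - nudge_ts), -c["confidence"]))

chosen = pick_candidate(100.0, [
    {"visit_id": "V1", "ts": 130.0, "confidence": 0.9},
    {"visit_id": "V2", "ts": 110.0, "confidence": 0.7},   # |Δt| = 10
    {"visit_id": "V3", "ts": 90.0,  "confidence": 0.95},  # |Δt| = 10, higher confidence
])
```

Recording which tie-breaker fired (as the criteria require via `tie_breaker` in the linkage) would mean comparing the first key components for equality before falling through to confidence.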
Tolerance Boundaries: Time Window and Geofence
Given T_time_window_minutes=15 and T_geofence_meters=150 for the organization and a nudge N for Visit V at t0 with geofence G and window W When a corrective action A occurs exactly at W_start-15 or W_end+15 minutes and at distance exactly 150 meters from G centroid Then A is considered in-tolerance and is eligible for linkage When a corrective action A' occurs at W_end+15 minutes+1 second or at distance >150 meters Then A' is not linked automatically and a candidate record is created with status="Out of Tolerance" and manual_review_flag=true
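The inclusive boundary semantics (exactly 15 minutes outside the window, or exactly 150 m from the centroid, still links) can be sketched with non-strict comparisons plus a haversine distance:

```python
# Sketch of the tolerance checks: boundary values are inclusive, matching
# the acceptance criteria. Times are minutes-of-day for simplicity.
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def in_tolerance(action_min: float, w_start_min: float, w_end_min: float,
                 dist_m: float, t_window: float = 15.0,
                 t_geofence: float = 150.0) -> bool:
    # <= / >= everywhere: exactly on the boundary is still in-tolerance.
    in_time = (w_start_min - t_window) <= action_min <= (w_end_min + t_window)
    return in_time and dist_m <= t_geofence
```

Anything failing either check falls into the "Out of Tolerance" candidate path with `manual_review_flag=true` rather than being silently dropped.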
Exportable Evidence Chain for Audits
Given a resolved linkage L for Visit V containing nudge N, corrective actions [A1..An], and final EVV E When a user triggers One-Click Export for the visit or a date range including the visit Then the export produces a single evidence chain containing: visit_id, caregiver_id, nudge (id, type, timestamp, location), each corrective action (id, type, timestamp, user, previous_value, new_value, status), final EVV (id, status, timestamps, locations), and linkage metadata (outcome, version, confidence_score, tie_breaker, latency metrics) And the export is available in JSON and CSV formats And export generation completes in 3 seconds or less for a single visit And exported field values exactly match the persisted linkage (field-by-field equality)
Contextual Evidence Capture
"As a compliance officer, I want time and location context recorded for nudges and corrections so that I have objective evidence to satisfy payer audit criteria."
Description

Capture and persist comprehensive context for both nudges and corrections, including timestamps, GPS coordinates with accuracy metadata, geofence status, caregiver identity, device and app version, network state, and route segment. Validate geolocation integrity signals and handle online/offline modes with secure local queueing and clock-sync safeguards. Store context snapshots alongside linkage records to provide objective proof of timely corrective action and situational conditions, enabling stronger audit packages and root-cause analysis.

Acceptance Criteria
Capture Context on Nudge Emission
Given a signed-in caregiver with location permissions when the system emits a nudge for a visit event then the system persists a context snapshot linked to the nudgeId that includes: serverReceivedUtc, deviceLocalTime, gps.lat, gps.lng, gps.horizontalAccuracyMeters, gps.fixAgeSeconds, geofenceStatus (inside|outside|border), caregiverId, deviceModel, osVersion, appVersion, networkState (wifi|cellular|offline), routeSegmentId And the snapshot write succeeds in online mode within 200 ms p95 And gps.horizontalAccuracyMeters and gps.fixAgeSeconds reflect actual values even if outside targets (no nulls) And the snapshot has a unique immutable snapshotId
Capture Context on Correction Action
Given a caregiver initiates a correction (e.g., confirms arrival, adjusts timestamp, or accepts suggested fix) when the correction is submitted then the system persists a correction context snapshot containing the same fields as nudge snapshots plus correctionId, nudgeId, actionType, inputMode (voice|text|auto) And if the final EVV record is not yet available a pending linkage placeholder is stored and updated within 60 seconds after EVV finalization And the correction snapshot is linked to the originating nudgeId
Offline Queueing and Sync for Evidence Snapshots
Given the device is offline when a nudge or correction occurs then the context snapshot is written to a secure encrypted local queue with an idempotencyKey and monotonic sequence number preserving event order And queued snapshots persist across app restarts and OS reboots and are retained for at least 7 days And upon connectivity restoration snapshots sync to the server in FIFO order within 60 seconds p95 with exponential backoff and server-side idempotency preventing duplicates And if a snapshot cannot be synced due to validation errors it remains stored with syncStatus=failed and errorCode for QA review
Clock Drift Detection and Normalization
Given device and server clocks may drift when any context snapshot is saved then the server computes and stores clockDeltaSeconds=deviceLocalTime−serverReceivedUtc and a clockSyncFlag (ok|drifted) And if |clockDeltaSeconds|>5 the snapshot is marked drifted and normalizedEventTime=serverReceivedUtc is stored and used for audit ordering And both timestamps and delta are included in exports for audit transparency
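The clock-drift safeguard above amounts to computing the device-vs-server delta and falling back to server time for audit ordering when it exceeds the 5-second bound. A minimal sketch:

```python
# Sketch of clock-drift normalization: snapshots with |delta| > 5 s are
# flagged "drifted" and ordered by server time, not device time.
from datetime import datetime, timezone

def normalize(device_local: datetime, server_received: datetime,
              max_drift_s: float = 5.0) -> dict:
    delta = (device_local - server_received).total_seconds()
    drifted = abs(delta) > max_drift_s
    return {
        "clockDeltaSeconds": delta,
        "clockSyncFlag": "drifted" if drifted else "ok",
        # Drifted snapshots use trusted server time for audit ordering.
        "normalizedEventTime": server_received if drifted else device_local,
    }

server = datetime(2024, 5, 1, 12, 0, 0, tzinfo=timezone.utc)
snap = normalize(datetime(2024, 5, 1, 12, 0, 9, tzinfo=timezone.utc), server)
```

Storing both raw timestamps alongside the delta, as the criteria require, lets an auditor reproduce this normalization independently from the export.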
Geolocation Integrity Validation and Flagging
Given a location fix is available when creating a context snapshot then the system validates integrity: mockLocationDetected=false, provider in (gps|network), horizontalAccuracyMeters≤50 or recorded as-is, fixAgeSeconds≤15, and speed≤150 km/h And if any check fails the snapshot is still stored with integrityStatus=failed and reasons[] listing each failed rule otherwise integrityStatus=passed And integrityStatus is included in all linkage views and exports
Snapshot Linkage to Final EVV Record and Export
Given a nudge and subsequent correction exist when the related visit is finalized and the EVV record is generated then both snapshots are linked to the evvRecordId with linkage timestamps And a one-click export (CSV and PDF) for the visit produces a single package within 3 seconds for ≤100 visits containing nudge snapshot, correction snapshot, linkage metadata, integrity flags, and clock deltas And exported records include a tamper-evident hash per snapshot and a package-level hash
Route Segment and Geofence Resolution
Given a caregiver is on a scheduled route when recording a context snapshot then routeSegmentId is resolved based on schedule time alignment (±15 minutes) and proximity to the planned geofence And geofenceId and distanceMetersToGeofenceEdge are recorded with geofenceStatus (inside|outside|border) And if a route segment cannot be resolved routeSegmentId is null and unresolvedReason is stored
One-Click Audit Export
"As an operations manager, I want to export a complete evidence trail with one click so that I can respond to audits quickly and consistently."
Description

Provide a single-action export that compiles linked nudges, corrective actions, contextual snapshots, and final EVV into audit-ready artifacts. Support configurable templates per payer and jurisdiction, outputting paginated PDFs and machine-readable CSV/JSON with index pages, preparer metadata, timestamps, digital signatures, and checksums. Enable presets, filters by date range/visit/payer, bulk export with queued processing, and artifact retention with access-controlled retrieval. Deliver consistent, rapid responses to audits while minimizing manual assembly.

Acceptance Criteria
Single-Click Evidence Bundle Generation
Given a user with 'Audit Export:Create' permission and an active preset scoped to the current view And the current view is Visit Details or Audit Trace with filters applied When the user clicks the One-Click Export button Then the system generates one export bundle tied to a unique Export ID And the bundle contains a paginated PDF, a CSV, and a JSON with the same dataset And the dataset includes, for each included visit: all linked nudges, corresponding corrective actions, contextual time/location snapshots, and the final EVV record And the export appears in Export History within 5 seconds of job creation with status "Processing" or "Complete"
Template Selection and Validation per Payer/Jurisdiction
Given payer and jurisdiction are known for the selected visits When initiating an export Then the system auto-selects the template mapped to each payer/jurisdiction combination And if multiple templates match, the user is prompted to choose before export proceeds And the system validates required fields per template before queuing And if any required field is missing, the export is blocked and the user sees a list of missing fields and affected visits And once resolved, export can be re-attempted without reconfiguring other options
Output Artifacts, Index, Metadata, Signatures, Checksums
Given an export job is completed successfully Then the PDF includes: cover page with agency name, preparer name/role, export ID, time zone, created-at timestamp; index page listing visits and artifact files; page numbers on all pages And the CSV and JSON include identical records and fields, with ISO 8601 timestamps including timezone offsets And a manifest file (manifest.json) lists every file name, file size, and SHA-256 checksum And the manifest and PDF are digitally signed with a server-held certificate and RFC 3161-compatible timestamp And on download, the system validates checksums and signature; if validation fails, the file is not delivered and an error is logged and shown
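The manifest described above can be sketched as a JSON document listing each artifact's name, size, and SHA-256 digest; file contents are passed as bytes for illustration (signing and RFC 3161 timestamping would wrap this output and are omitted):

```python
# Sketch of manifest.json generation: every bundle file gets a SHA-256
# checksum so a verifier can confirm nothing was altered after export.
import hashlib
import json

def build_manifest(files: dict[str, bytes]) -> str:
    entries = [
        {"name": name,
         "size": len(data),
         "sha256": hashlib.sha256(data).hexdigest()}
        for name, data in sorted(files.items())
    ]
    return json.dumps({"files": entries}, indent=2)

manifest = build_manifest({
    "report.pdf": b"%PDF-1.7 ...",
    "data.csv": b"id,ts\n1,2024",
})
```

On download, the server recomputes each digest against the stored file and refuses delivery on mismatch, which is the validation step this criterion specifies.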
Filters and Presets
Given filters for date range, visit ID(s), and payer(s) When the user applies filters and saves them as a named preset Then the preset stores filters, selected template(s), and output options And when the user triggers One-Click Export, the last-used preset for the page context is applied by default And invalid filters (e.g., end date earlier than start date, non-existent visit ID) prevent export and surface inline error messages And the exported dataset includes only records matching active filters (verified by count parity between pre-export results and exported records)
Bulk Export and Queued Processing
Given the user selects a date range up to 30 days and up to 1,000 visits total When starting a bulk export Then the job is queued with a visible Job ID, creation time, and initial "Queued" status And the UI shows progress (percentage or counts), estimated time remaining, and per-visit failure counts And transient failures are retried up to 3 times; persistent failures are listed with reasons and are included in a "failures.csv" And 95% of bulk exports in this size range complete within 10 minutes in staging benchmarks; jobs exceeding 30 minutes are flagged "Delayed" and notify the user And the user receives an in-app notification and email with links when the job completes
Access-Controlled Retrieval and Retention
Given an export has completed Then artifacts are stored encrypted at rest and downloadable only by users with 'Audit Export:Read' permission And each download link is single-tenant, access-controlled, and expires after a configurable link TTL (default 7 days) And every access, validation failure, and deletion is captured in an immutable audit log with user, timestamp, IP, and Export ID And retention duration is configurable per payer/template; upon reaching retention end, artifacts are permanently deleted and the deletion event is logged; attempts to access deleted artifacts return 410 Gone And admins can extend retention before expiry with a recorded reason
Traceability and Data Completeness Across Artifacts
Given a visit is included in an export Then the PDF presents for that visit: chronological timeline of nudges and corrective actions, with timestamps and GPS coordinates where available, and the final EVV record status And CSV/JSON include stable IDs linking nudges to their corrective actions and to the final EVV record And if a nudge has no recorded correction, it is included with status "Unresolved" and reason metadata And cross-file counts reconcile: number of visits equals number of EVV records; number of nudge-correction links equals number of corrections; validation summary page lists totals and zero mismatches And all cross-references in PDF are clickable and resolve to the correct section/page
Immutable Evidence Ledger
"As a payer auditor, I want assurance that evidence records are tamper-evident so that I can trust the integrity of the submitted documentation."
Description

Record all nudge, correction, and linkage events into an append-only, tamper-evident ledger using hash chaining and server-enforced write-once storage. Generate per-record fingerprints surfaced in UI and included in exports to enable independent verification. Incorporate signed actor attribution, clock-drift detection, and retention/legal hold policies. This strengthens audit defensibility by ensuring the integrity and non-repudiation of evidence records across their lifecycle.

Acceptance Criteria
Append-Only Ledger Enforcement
- Given a valid authenticated request to create a nudge, correction, linkage, or EVV evidence event, When the server writes to the evidence ledger, Then the record is appended as a new entry and no existing ledger record is updated or deleted.
- Given any attempt (API, admin UI, or direct DB) to update or delete an existing ledger record, When executed, Then the operation is rejected with a 403/permission error and a denial audit event is appended capturing actor, timestamp, and reason.
- Given concurrent writes from multiple actors, When they occur, Then each write is assigned a strictly increasing sequence ID and persisted exactly once with no lost or duplicate entries.
- Given a newly appended record, When queried by ID or by sequence range, Then it is retrievable and its immutable fields are read-only in all APIs and UIs.
Hash Chain Integrity and Fingerprint Generation
- Given a new ledger record is appended, When stored, Then a content fingerprint (SHA-256) and a prevHash referencing the prior record’s fingerprint are computed and saved.
- Given any alteration of a stored record or its order, When chain verification runs, Then verification fails and the first mismatched link is identified by sequence ID and fingerprint.
- Given an export of a visit’s audit trail, When an external verifier recomputes hashes over the exported payloads, Then all fingerprints and chain links validate with zero discrepancies.
- Given a user views a record in the UI, When the details panel renders, Then the record fingerprint is displayed and copyable in full, with a shortened preview for quick reference.
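The chaining rule above can be sketched in a few lines. This is an illustrative in-memory model: the field names `seq`, `prevHash`, and `fingerprint` follow the criteria, while durable storage, signatures, and concurrency control are assumed away.

```python
import hashlib
import json


def _fingerprint(payload: dict, prev_hash: str) -> str:
    # Canonical JSON (sorted keys) so an external verifier can
    # recompute the exact same hash over the exported payload.
    body = json.dumps({"payload": payload, "prevHash": prev_hash}, sort_keys=True)
    return hashlib.sha256(body.encode("utf-8")).hexdigest()


def append(ledger: list, payload: dict) -> dict:
    """Append-only write: never mutates existing entries."""
    prev_hash = ledger[-1]["fingerprint"] if ledger else "0" * 64
    record = {
        "seq": len(ledger) + 1,  # strictly increasing sequence ID
        "payload": payload,
        "prevHash": prev_hash,
        "fingerprint": _fingerprint(payload, prev_hash),
    }
    ledger.append(record)
    return record


def verify(ledger: list):
    """Return (True, None) if the chain is intact, else (False, seq)
    identifying the first mismatched link, as the criteria require."""
    prev_hash = "0" * 64
    for rec in ledger:
        if rec["prevHash"] != prev_hash or rec["fingerprint"] != _fingerprint(rec["payload"], prev_hash):
            return False, rec["seq"]
        prev_hash = rec["fingerprint"]
    return True, None
```

Because each fingerprint covers the previous one, altering any stored record (or reordering records) breaks every subsequent link, which is what makes the ledger tamper-evident rather than merely tamper-resistant.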
Server-Enforced Write-Once Storage (WORM)
- Given a configured evidence retention period, When records are written, Then they are stored in a write-once, tamper-evident storage class with retention locks set through the full retention period.
- Given any attempt to modify or delete a retained record before retention expiry, When executed, Then the storage layer rejects the operation and the application surfaces a clear error while appending a denial audit event.
- Given read or export operations, When performed during retention, Then data is readable without requiring any mutable operations on the stored objects.
- Given retention calculations, When evaluating expiry, Then trusted server time is used and client device time does not influence retention windows.
Signed Actor Attribution and Non-Repudiation
- Given an authenticated user or service creates an evidence event, When accepted by the server, Then the event stores actor ID, role, organization, and a server-side digital signature over the canonical event payload and actor claims.
- Given a record is submitted with a missing or invalid signature, When append is attempted, Then the write is rejected and an error with remediation guidance is returned.
- Given a ledger export, When a verifier validates signatures using the platform public key(s), Then every record’s signature validates and any failures are enumerated.
- Given automated actions (e.g., rules engine) create events, When persisted, Then actor attribution reflects the automation identity and includes originating user/context when applicable.
Clock-Drift Detection and Flagging
- Given a client-submitted event includes a device timestamp, When the server receives it, Then the server computes offsetMs = deviceTime − serverReceiptTime and if |offsetMs| > 90,000 (configurable), the record is appended with driftFlag=true and includes deviceTime, serverTime, and offsetMs.
- Given records with driftFlag=true, When viewed in UI or included in export, Then the drift flag and offset are visible and filterable.
- Given extreme drift exceeding the configured hard limit (e.g., 10 minutes), When policy is set to block, Then the write is rejected with a specific error instructing the user to correct device time; otherwise it is appended but flagged per policy.
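A minimal sketch of the drift classification, using the formula and thresholds from the criteria above (the `block_on_hard_limit` policy switch and millisecond inputs are illustrative assumptions, not a confirmed API):

```python
def classify_drift(device_ms: int, server_ms: int,
                   flag_threshold_ms: int = 90_000,
                   hard_limit_ms: int = 600_000,
                   block_on_hard_limit: bool = True) -> dict:
    """Apply offsetMs = deviceTime - serverReceiptTime and the two
    configurable thresholds (flag at 90s, hard limit at 10 min)."""
    offset = device_ms - server_ms
    if abs(offset) > hard_limit_ms and block_on_hard_limit:
        # Policy says block: reject with guidance to fix device time.
        return {"action": "reject", "offsetMs": offset}
    return {
        "action": "append",
        "offsetMs": offset,
        "driftFlag": abs(offset) > flag_threshold_ms,
    }
```

Note that the record is appended even when flagged; only the hard limit (and only under a blocking policy) prevents the write, so evidence is not silently dropped for moderately skewed clocks.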
Linkage Completeness Across Nudge→Correction→EVV
- Given a caregiver submits a correction in response to a nudge, When the correction is recorded, Then the ledger record includes references to the originating nudge ID, associated visit ID, time/location context, and resulting EVV record ID if available.
- Given a visit’s audit trail is exported, When export is triggered, Then the output contains the ordered sequence of nudge, correction, linkage, and final EVV records with their fingerprints and timestamps.
- Given a linkage reference is missing or invalid, When append is attempted, Then the write fails validation with a clear message indicating the missing or invalid reference.
- Given multiple corrections are submitted for a single nudge, When recorded, Then a many-to-one linkage is preserved with sequence order and a terminal resolution status on the final record.
Retention and Legal Hold Policy Enforcement
- Given a retention policy of N years is configured, When the purge job runs, Then only records older than N years without active legal holds are purged and a purge-receipt event is appended including a hash of purged record IDs.
- Given a legal hold is placed on a visit, user, or organization scope, When active, Then targeted records are not purgeable regardless of age and purge attempts are rejected and audited.
- Given an export is requested while a legal hold is active, When generated, Then the export includes hold metadata and is not blocked by the hold.
- Given the retention policy configuration is changed, When saved, Then the change is recorded as a ledger event with actor attribution and applies prospectively to new records and to existing records not yet past their original retention if the policy extends retention, without shortening existing retention windows.
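The purge-eligibility test implied above could look roughly like this. It is a simplified sketch: scope keys such as `"visit:V1"` and the 365-day year arithmetic are assumptions for illustration, not the product's actual data model.

```python
from datetime import datetime, timedelta, timezone


def purge_eligible(record_created_at: datetime, retention_years: int,
                   legal_holds: set, record_scope_keys: set,
                   now: datetime) -> bool:
    """A record may be purged only if it is past its retention window
    AND no active legal hold targets its visit/user/org scope.
    `now` must come from trusted server time, never the client."""
    expiry = record_created_at + timedelta(days=365 * retention_years)
    if now < expiry:
        return False  # still inside retention, regardless of holds
    # Any overlap between active holds and this record's scopes blocks purge.
    return legal_holds.isdisjoint(record_scope_keys)
```

A purge job would filter candidates through this predicate, then append the purge-receipt event (with a hash of the purged IDs) to the ledger as the criteria require.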
Audit Timeline & Deep Linking
"As a supervisor, I want a clear timeline of events with deep links so that I can quickly review what happened and navigate to underlying records."
Description

Present a chronological, mobile-friendly timeline per visit that visualizes the nudge, each corrective action, associated context snapshots, and the resulting EVV record. Provide deep links to the original schedule, visit notes (including voice-to-text artifacts), IoT sensor readings, and caregiver profile. Include filters, hover details, and anomaly flags to accelerate investigation. This interface streamlines QA review and operational troubleshooting while anchoring navigation across CarePulse modules.

Acceptance Criteria
Mobile Chronological Timeline Rendering
Given a visit with at least one nudge, one corrective action, one context snapshot, and a final EVV record When the Audit Timeline is opened on a mobile viewport (≤414px width) Then items render in chronological order by event timestamp (ascending by default) with distinct icons for Nudge, Correction, Context, and EVV And each item displays timestamp (with timezone), actor (user/system), and succinct label text And infinite/continuous scroll preserves order with no visual jumps or reordering And the timeline header shows the visit ID, caregiver name, and scheduled start/end times
Deep Links to Schedule, Notes, IoT, and Caregiver Profile
Given a timeline item with related artifacts When the user taps the Schedule link Then the app navigates to the original schedule detail for that visit in the same workspace and maintains a back path to return to the exact scroll position on the timeline Given a timeline item associated to visit notes (including voice-to-text) When the user taps the Notes link Then the app opens the visit notes view scrolled to the linked artifact segment and highlights the referenced voice-to-text snippet Given a timeline item with IoT sensor context When the user taps the IoT link Then the app opens the sensor readings view filtered to the visit window (±15 minutes) and highlights the linked reading(s) Given a timeline item with a caregiver reference When the user taps the Caregiver link Then the app opens the caregiver profile in-app in read-appropriate mode and preserves a back path to the timeline Given any deep link navigation When the user returns to the timeline Then the timeline restores the prior filters, sort, and scroll offset
Filter, Search, and Sort for Investigation
Given the Audit Timeline for a visit When the user opens Filters Then the user can filter by Event Type (Nudge, Correction, Context, EVV), Anomaly State (All, Flagged, Resolved), Actor (Caregiver, Manager, System), and Time Range And multiple filters can be combined and applied simultaneously Given active filters When the user taps Clear All Then all filters reset to defaults and the full timeline is shown Given the default sort (time ascending) When the user toggles sort Then the timeline reorders to time descending and reflects this in the UI Given the search box When a caregiver name, visit ID, or event ID is entered Then matching timeline items are highlighted and non-matching items are de-emphasized
Hover/Tap Details with Context Snapshots
Given a timeline item When the user long-presses on mobile or hovers on desktop Then a details panel appears showing: full timestamp with timezone, GPS coordinates with reverse-geocoded place name, device type, actor identity, original value → corrected value (if applicable), and reason/comment And any attachments (e.g., photo, audio clip) show as tappable chips that open in-app viewers And the details panel closes on tap outside, ESC, or swipe-down on mobile And the details panel is accessible (focus trap, screen-reader labels, keyboard operable)
Anomaly Flagging and Resolution Indicators
Given the system detects an anomaly (e.g., late arrival > X minutes, geofence mismatch > Y meters, missing note, EVV edit outside policy) When the timeline renders Then the affected items display a visible anomaly badge with severity (Info/Warning/Error) and a concise explanation available on hover/tap Given a flagged anomaly that is later corrected When the correction is recorded Then the original flagged item shows a Resolved state with resolution timestamp and a link to the correcting action And the anomaly no longer appears in the Flagged filter unless filtering by Resolved Given the timeline header When anomalies exist for the visit Then a badge shows total flagged and resolved counts (e.g., 2 flagged / 3 resolved)
Final EVV Linkage and Consistency Checks
Given a visit with a finalized EVV record When the Audit Timeline is viewed Then the final EVV record appears as the terminal node and displays check-in/out times, location status, and payer-required identifiers And tapping the EVV node navigates to the EVV detail view for the same visit Given corrections that affect EVV fields When the EVV is finalized Then the EVV node reflects the latest corrected values and provides links back to the originating corrective actions Given any nudge or correction on the timeline When consistency checks run Then every item is linked to either a subsequent correction or a disposition note, and no orphaned nudges remain
QA Review & Attestation Workflow
"As a QA reviewer, I want to annotate and approve evidence trails with an e-signature so that our agency has a formal attestation for audits."
Description

Enable QA users to annotate evidence chains, request rework from caregivers, and approve finalized traces with e-sign attestation. Provide status transitions, SLA timers, notifications, and audit notes that are incorporated into exports. Track who approved what and when, ensuring a formalized corrective-action process that is consistent, measurable, and audit-ready.

Acceptance Criteria
QA Adds Annotations to Evidence Chain
Given a QA user with annotate permission is viewing an Audit Trace When the QA user adds an annotation linked to a specific evidence item Then the annotation is saved with author ID, role, timestamp, and a reference to the linked evidence And the annotation appears in the trace timeline in chronological order And the annotation supports optional file or voice note attachments up to the configured size limit And if an annotation is edited, the previous content is preserved in version history with editor, timestamp, and reason And deleted annotations are soft-deleted and remain visible in history in exports
Rework Request Initiation and Resolution
Given an Audit Trace in Under Review status When a QA user submits a rework request with instructions and a due date/time Then the trace status changes to Rework Requested and the due date/time is recorded And an SLA timer starts counting down to the due date/time And the assigned caregiver receives a rework notification containing the trace ID, due date/time, and instructions When the caregiver submits corrected documentation Then the QA user can mark the rework as Resolved or request additional rework And upon resolution, the trace returns to Under Review and the SLA timer stops And all rework actions are logged with actor, timestamp, and notes
E‑Sign Attestation on Approval
Given all mandatory fields are complete and any open rework items are resolved When a QA user approves the trace and completes e‑sign attestation Then the system captures the approver’s full name, unique user ID, timestamp, and device/browser fingerprint And the trace status changes to Approved and the evidence chain becomes read‑only And the attestation block (approver, timestamp, statement) is displayed in the UI and included in exports And attempting to modify the evidence chain after approval is blocked with an authorization error logged
SLA Timer, Breach, and Escalation
Given organization SLA thresholds are configured for Under Review and Rework Requested states When a trace enters Rework Requested Then an SLA countdown timer is displayed in the UI and tracked server‑side And if the timer reaches zero before the rework is resolved, the trace is marked SLA Breached And an escalation notification is sent to the QA manager and secondary escalation contacts And SLA start, stop, pause (if applicable), and breach events are logged and included in exports
State Change Notifications
Given notification preferences are configured for QA users, caregivers, and managers When a trace changes state (Under Review, Rework Requested, Approved, Rejected) Then in‑app notifications are created and email/push notifications are sent according to user preferences within 1 minute And notifications include trace ID, current state, responsible party, and any SLA due date/time And duplicate notifications for the same event and recipient are suppressed within a 10‑minute window And notification delivery success/failure is logged and visible in the trace history
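The 10-minute duplicate-suppression window above might be implemented roughly like this (an in-memory sketch; the `(event, recipient)` key shape and millisecond timestamps are assumptions):

```python
def should_send(sent_log: dict, event_key: tuple, now_ms: int,
                window_ms: int = 10 * 60 * 1000) -> bool:
    """Return True if a notification for this (event, recipient) pair
    should be delivered; suppress repeats inside the dedup window."""
    last = sent_log.get(event_key)
    if last is not None and now_ms - last < window_ms:
        return False  # duplicate within the window: suppress
    sent_log[event_key] = now_ms  # record delivery time only when sending
    return True
```

A production system would keep this state in shared storage so all delivery workers see the same window, and would still log the suppressed attempts for the trace history.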
Audit Notes and History in One‑Click Exports
Given an Audit Trace with annotations, rework history, status transitions, and approval When a user with export permission clicks Export Then the generated export includes: evidence items, annotations with versions, rework requests/responses, status changes with timestamps and actors, SLA metrics, attestation details, and the final EVV record with time/location context And the export produces both a human‑readable PDF and a machine‑readable CSV/JSON bundle And the export package includes a checksum to verify integrity And the export action is logged with exporter ID, timestamp, and format
Immutable Approval Trail and Access Control
Given role‑based access controls are configured When a trace is Approved Then only authorized roles can access the Approved trace details And the approval trail displays who approved what and when, including any prior reviewers and their decisions And any attempt to alter approval records is prevented and recorded as a security event And view/export access to the approval trail is logged with user ID and timestamp
PII Redaction and Access Controls
"As a compliance officer, I want exports to automatically redact sensitive data based on audience so that we share only what is necessary and remain compliant."
Description

Apply role- and attribute-based access controls and minimum-necessary redaction to evidence views and exports. Mask PHI/PII fields per payer template and jurisdiction, enforce watermarks and viewer identity on exports, and log all access. Support just-in-time access grants and enforce the same policies on mobile, including offline scenarios. This reduces data exposure risk and ensures HIPAA and state-level compliance while preserving evidentiary value.

Acceptance Criteria
Role-Based Evidence View Access Control
Given a signed-in user without Evidence.View permission When they attempt to open an Audit Trace evidence view Then the system returns 403 (web/API) or blocks with "Insufficient permissions" (mobile), transmits no PHI/PII, and writes an audit log entry outcome=Denied. Given a signed-in user with Evidence.View permission within their organization scope When they open an evidence view for a visit within scope Then the system returns 200, renders the view, and prevents querying resources outside branch/agency scope (requests return 404/Forbidden and are logged).
Attribute-Based Evidence Scope (Payer, Jurisdiction, Assignment)
Given a user with Evidence.View permission When the visit’s payer/jurisdiction is not in the user’s authorized attributes Then access is denied (403) and the attempt is logged with attributes mismatched. Given a user authorized for payer/jurisdiction but not assigned to the patient/care team and policy requires assignment When they open the evidence Then the system renders a Limited view showing only non-sensitive EVV metadata (timestamps, geocoordinates, correction type) while suppressing PHI/PII fields and attachments with a "Restricted by policy" placeholder. Given a supervisor with override attribute policy When they open the evidence Then full view is permitted within org scope and is logged with reason=Override.
Minimum-Necessary Redaction per Payer/Jurisdiction Template
Given payer=P and state=S with an active redaction template version T When a user views or exports evidence Then all template-flagged PHI/PII fields are masked according to rule (full/partial), including within text, audio transcripts, image metadata/EXIF, filenames, and structured fields; non-sensitive EVV metadata remains visible; the UI/export shows "Redaction: P/S vT". Given no template is configured for payer/state When a user views or exports evidence Then the default Strict template is applied, the event is logged with severity=Warn, and no PHI/PII patterns (SSN, DOB, phone, email) appear in the output as validated by automated checks. Given a template update When evidence is re-rendered Then caches and search indexes reflect the new masking within 60 seconds.
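For illustration, a naive pattern-based masking pass over free text might look like the following. The regexes cover only common US formats and are far from a complete PHI/PII detector; a real redactor would be driven by the active payer/state template and also handle transcripts, EXIF metadata, and filenames as the criteria describe.

```python
import re

# Illustrative patterns only; field names and formats are assumptions.
PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def redact(text: str) -> str:
    """Replace each detected PII pattern with a labeled placeholder,
    mirroring the 'no PHI/PII patterns appear in output' check."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text
```

The same patterns double as the automated validation check: after the Strict template is applied, running the detectors over the output should find zero matches.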
Watermark and Viewer Identity on Exports
Given an export (PDF/CSV/ZIP) is generated When the file is created Then each page/file contains a visible diagonal watermark "CarePulse Confidential • [Org] • [Viewer Name • UserID] • [UTC Timestamp] • [Payer] • [Jurisdiction]" and a document hash is embedded and displayed; the export request requires a selectable reason which is stamped into the file and audit log. Given an attempt to export without a reason When the user submits the request Then the export is blocked with validation error and the attempt is logged outcome=Denied.
Comprehensive Evidence Access Logging
Given any evidence view, export, or denied attempt When the action occurs Then an immutable audit entry is written within 1 second containing: event type, user id, role(s), org id, resource id (visit/evidence), payer, jurisdiction, redaction template id/version, outcome (Success/Denied), reason (if provided), timestamp (UTC), client (web/mobile/API), device id, IP, geo (approx), export doc hash (if any), and JIT grant id (if any); entries are tamper-evident and retained ≥6 years; admin users can query and export logs by filters.
Just-in-Time (JIT) Access Grants
Given a user lacks access to a requested evidence item When they submit a JIT request with reason Then a designated approver can approve with scope (patients/visits/payer/jurisdiction), redaction tier, and expiry (max 24h); upon approval, access is effective within 10 seconds and only within the approved scope. Given an active JIT grant When the expiry time is reached or the grant is revoked Then subsequent access attempts are denied within 5 minutes (or sooner on next API call), the session is forced to re-validate, and all events are logged linking to the JIT grant. Given a JIT request is denied When the requester attempts access Then access remains denied and the denial reason is shown and logged.
Mobile Parity and Offline Enforcement
Given the mobile app is online When viewing or exporting evidence Then the same access policies, redaction templates, watermarks, and audit logging as web are applied. Given the mobile app is offline with cached policies/templates updated within 24 hours and a valid session When viewing or exporting evidence Then access is allowed with the cached policies, exports include watermark and viewer identity, and all actions are queued in encrypted local storage and synced with server audit logs within 10 minutes of reconnection for re-validation. Given the mobile app is offline without valid cached policies/templates or with an expired JIT grant When viewing or exporting evidence Then access is blocked with an explanatory message and a local denied event is queued and later synced.

Portal Presets

One-click, payer-tailored export templates that stay in sync with each portal’s quirks—codes, modifiers, unit rounding, EVV placement, file naming, and column order. Presets are versioned and auto-updated, so Billing and Compliance can export correctly the first time without memorizing rule changes or reformatting files.

Requirements

Payer Preset Library & Versioning
"As a billing coordinator, I want to select a payer preset that encapsulates all portal quirks so that my exports are compliant on the first attempt."
Description

Centralized repository of payer-specific export templates that encode portal quirks—billing codes, modifiers, unit rounding rules, EVV data placement (header vs. line-level), file naming patterns, and column order. Each preset is versioned with effective dates and changelogs, supports deprecation and migration notes, and can be pinned per payer/client. Integrates with CarePulse’s export pipeline to apply the selected preset at runtime, ensuring consistent, compliant output across CSV/XLSX and other supported formats.

Acceptance Criteria
Preset Selection and Pinning per Payer/Client
Given the preset library contains multiple payers and versions, when a user filters by payer name or ID, then only matching presets are listed with payer, version, and effective dates. Given a user selects a preset version, when details are viewed, then the system displays EVV placement, unit rounding rule, file naming pattern, and column order. Given a Billing Admin pins preset P v2.3 to Client A, when any export is run for Client A, then preset P v2.3 is applied regardless of newer versions. Given no preset is pinned for Client B, when an export is initiated for payer P on date D, then the version whose effective dates cover D is auto-selected. Given a pin/unpin action is saved, when settings are reloaded, then the pin state persists and the change is audit-logged with user, timestamp, and previous value.
Version Resolution by Effective Date and Pin Overrides
Given multiple versions exist with effective_date_start and effective_date_end, when evaluating on datetime T in the agency timezone, then the version where start <= T < end is selected. Given versions have overlapping date ranges, when an export is started, then the system resolves to the version with the latest effective_date_start and records a warning in the audit log. Given no version covers datetime T and no pin exists, when an export is started, then the export is blocked with a clear error listing available version date ranges and a link to pin a version. Given a pin exists, when the effective window would otherwise select a different version, then the pinned version takes precedence. Given a pin references a non-existent version, or one removed after deprecation, when an export is started, then the export is blocked with an error and a prompt to select a valid version.
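The resolution rules above (pin precedence, effective-window coverage with start <= T < end, overlaps broken by latest effective start) can be sketched as follows; the dict shape for versions is an assumption.

```python
from datetime import datetime


def resolve_version(versions, at, pinned_id=None):
    """Pick the preset version applied to an export at time `at`.
    A pin takes precedence; otherwise select the version whose effective
    window covers `at`, breaking overlaps by the latest effective start
    (the overlap case should also log an audit warning)."""
    if pinned_id is not None:
        for v in versions:
            if v["id"] == pinned_id:
                return v
        raise ValueError("pinned version not found; block the export")
    covering = [v for v in versions if v["start"] <= at < v["end"]]
    if not covering:
        raise ValueError("no version effective at this time; block the export")
    return max(covering, key=lambda v: v["start"])
```

Raising rather than silently falling back matches the criteria: an unresolved or invalid selection must block the export with an actionable error, never guess.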
Preset-Driven Export Formatting Across CSV and XLSX
Given a preset defines codes, modifiers, rounding=nearest_15_min_round_half_up, EVV=header, file_naming="{payer}_{yyyyMMdd}_{runId}.csv", and a specific column order, when exporting to CSV, then the file contains the defined columns in the exact order, units are rounded per rule, EVV fields are present only in the header row, and the filename matches the pattern. Given the same preset, when exporting to XLSX, then the worksheet contains the same columns in the same order and values as CSV, and EVV fields are present in the designated header cells only. Given modifiers are absent for a line where allowed, when exporting, then the modifier columns are emitted as empty strings and no placeholder values are inserted. Given the preset specifies EVV=line_level, when exporting, then EVV fields appear on each line and not in the header. Given the preset specifies a delimiter and quoting policy for CSV, when exporting, then fields with commas or quotes are correctly escaped per policy and a schema validator passes with zero critical errors.
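Two of the preset rules above lend themselves to small sketches: nearest-15-minute unit rounding with halves up, and the filename pattern. The function names and the integer-minutes input are assumptions for illustration.

```python
from datetime import date


def round_units_15min_half_up(minutes: int) -> int:
    """Round a visit duration to the nearest 15-minute unit; the midpoint
    rounds up. With integer minutes, remainders of 8-14 round up."""
    units, remainder = divmod(minutes, 15)
    if remainder * 2 >= 15:
        units += 1
    return units


def export_filename(payer: str, run_date: date, run_id: str) -> str:
    # Matches the illustrative pattern "{payer}_{yyyyMMdd}_{runId}.csv"
    return f"{payer}_{run_date:%Y%m%d}_{run_id}.csv"
```

Encoding rounding as a pure function makes it trivially testable per preset version, which matters because the rule can change between versions and exports must be reproducible from the resolved config.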
Changelog Display and Auto-Update Behavior
Given a new version v2.4 for payer P becomes effective at T and Client B is not pinned, when an export occurs at or after T, then v2.4 is applied automatically and recorded in the export audit. Given two consecutive versions v2.3 and v2.4 exist, when a user opens the changelog, then the UI displays version notes and a diff of changed rules (codes, rounding, EVV placement, columns, file naming). Given clients affected by an upcoming version within 7 days and not pinned, when they visit the export screen, then they see an in-app notice indicating the scheduled switch with the effective timestamp. Given Client A is pinned to v2.3, when v2.4 becomes effective, then exports for Client A continue to use v2.3 and a non-blocking banner suggests reviewing and updating the pin. Given an auto-update applies a new version, when the export completes, then the run metadata stores the applied version and the prior auto-selected version for traceability.
Deprecation Rules and Migration Notes
Given a version is marked Deprecated with deprecation_date and grace_period_end, when a user attempts to newly pin it on or after deprecation_date, then the pin is blocked with an error and migration notes are shown. Given exports use a deprecated version during the grace period, when the export completes, then the export succeeds but includes a warning in the run report and audit log referencing the migration notes. Given the grace_period_end has passed, when an export attempts to use the deprecated version (unpinned auto-select or existing pin), then the export is blocked unless the user has an Override role, in which case it proceeds with a mandatory justification captured. Given a deprecated version has migration notes with target version guidance, when viewing the version, then the UI displays the notes and a one-click action to pin the recommended version.
Validation and Error Handling on Export
Given a preset is saved, when required fields (codes, rounding rule, EVV placement, file naming, column order) are missing or invalid, then the save is rejected with field-level errors and no changes are persisted. Given an export runs with a selected preset, when a visit line contains an unknown billing code for that payer, then the export fails with a descriptive error listing line numbers and offending codes. Given EVV placement is header but line-level EVV data is detected in the template mapping, when exporting, then the export is blocked with a conflict error and remediation guidance. Given an export encounters a writer error (disk, permission, serializer), when the job terminates, then no partial files are left in the download location and the job status is Failed with a correlation ID. Given warnings (e.g., optional modifier missing) occur, when the export completes, then the file is produced, and warnings are shown in the run summary without downgrading status from Success.
Export Run Auditability and Traceability
Given an export completes, when viewing its audit record, then it shows payer, client, preset_id, preset_version, effective date resolution, whether pin was used, and a SHA-256 hash of the resolved preset config. Given a user needs to verify historical behavior, when opening an export run, then they can download the resolved preset JSON that was applied for that run. Given auditors filter by payer, client, date range, or version, when searching export history, then matching runs are returned with pagination and exportable as CSV within 2 seconds for up to 5,000 records. Given retention policy is 24 months, when runs exceed that age, then records are archived and still discoverable by filter with a clear Archived state and accessible metadata. Given an export fails, when viewing the audit record, then the error message, stack/correlation ID, and validation findings are present for root-cause analysis.
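Hashing the resolved preset config reproducibly requires a canonical serialization; one common approach (an assumption here, not necessarily CarePulse's) is sorted-key, compact JSON:

```python
import hashlib
import json


def preset_config_hash(config: dict) -> str:
    """SHA-256 over a canonical (sorted-keys, compact) JSON serialization,
    so the same resolved config always produces the same hash regardless
    of key insertion order."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Storing this hash alongside the downloadable resolved-preset JSON lets an auditor confirm that the archived config is exactly the one applied to the run.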
Auto-Update Rules Sync
"As a compliance manager, I want presets to auto-update when payer rules change so that we remain compliant without manual rework."
Description

Background sync service and admin workflow that detect and apply payer rule changes to presets, updating codes, modifiers, rounding thresholds, EVV placement, filename schemas, and column layouts. Supports safe rollout with notifications, preview diffs, sandbox validation, effective-date scheduling, and automatic fallback to pinned versions when needed. Monitors sync health and logs changes for auditability without disrupting in-progress billing cycles.

Acceptance Criteria
Background Sync Applies Payer Rule Changes
Given a payer preset with auto-update enabled and an upstream rules change is detected When the background sync executes Then a new preset version is created with an incremented version identifier And the version reflects the updated codes, modifiers, unit rounding thresholds, EVV placement, filename schema, and column order exactly as specified And the new version is created in a non-live state pending validation or approval And currently live and pinned versions remain unchanged for exports
Admin Diff Preview and Approval
Given a new candidate preset version exists When an admin opens the update in the Portal Presets admin Then a human-readable diff displays additions, removals, and changes across codes, modifiers, rounding rules, EVV placement, filename schema, and column order When the admin selects Approve and sets an effective date and time Then the version status changes to Scheduled and a notification is sent to subscribed roles When the admin selects Reject Then the version is voided and will not be promoted
Sandbox Validation Prior to Go-Live
Given a candidate version is Scheduled When sandbox validation runs Then sample exports are generated for representative visits or claims and checked against payer validation rules And all validations meet or exceed the configured pass criteria If any validation fails Then the version is marked Failed Validation, promotion is blocked, and subscribers are notified with failure details
Effective-Date Scheduling and Version Pinning
Given an effective date and time is set for version V_new When current time is before the effective date Then all exports continue using the currently pinned version V_current When the effective date and time is reached and validation has passed Then new exports automatically use V_new And billing cycles that started before the effective date continue using V_current until those cycles are closed
Automatic Fallback on Post-Go-Live Failures
Given version V_new is live When export error rates or portal rejections attributable to preset rules exceed the configured threshold within the monitoring window Then the system automatically reverts affected exports to the last known good version V_previous and marks V_new as Suspect And admins are alerted with context, and subsequent exports use V_previous until a new version is approved
Non-Disruptive Exports During Sync and Updates
Given background sync or version promotion is in progress When a user initiates an export Then the export locks to a single preset version for its entire run And no mid-job preset changes are applied And the export completes successfully or fails atomically without mixing outputs from different versions
Audit Logging and Sync Health Monitoring
Given any sync, approval, scheduling, promotion, fallback, or rejection event occurs When audit logs are queried Then immutable entries exist with timestamp, actor or system, payer identifier, affected preset IDs, from and to version IDs, and a diff summary Given the sync service operates continuously When the heartbeat is delayed beyond the configured interval or error rate exceeds thresholds Then a health alert is issued to subscribed roles and the status dashboard reflects Degraded along with last success time and error summary
One-Click Export & Preflight Validation
"As a billing specialist, I want a one-click export with preflight checks so that I can fix issues before uploading to the portal."
Description

Single-action export that applies the selected payer preset and runs preflight validation against required fields, code/modifier compatibility, unit rounding outcomes, EVV token presence/placement, filename conformance, and column ordering. Returns actionable errors and warnings with record-level detail and quick-fix suggestions, supports batch exports by date/payer/location, and outputs in portal-acceptable formats to maximize first-pass acceptance.

Acceptance Criteria
One-Click Export Applies Selected Payer Preset
Given a user selects a payer preset and a set of visits on the Export screen When the user clicks "Export" Then the system applies the selected preset's rules for codes, modifiers, unit rounding, EVV placement, filename pattern, and column order And a preflight validation panel is displayed without requiring any additional mapping or configuration steps And no more than one user action (the Export click) is required to initiate the process
Preflight Validation Blocks Errors and Allows Warnings
Given selected visits include records with missing required fields or invalid rule combinations When preflight validation runs Then blocking errors are listed with counts and export is disabled until resolved And non-blocking warnings are listed with counts and export remains enabled behind an explicit user confirmation And validation displays total errors and warnings, grouped by rule, within 5 seconds for up to 5,000 visit records
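The error-blocks / warning-allows gating rule can be sketched as a small pure function. This is a hypothetical shape for illustration only; the field name `severity` and the returned keys are made up, not CarePulse's actual schema:

```python
def gate_export(findings: list) -> dict:
    """Partition preflight findings by severity and decide export state:
    any error disables export; warnings alone leave export enabled but
    require an explicit user confirmation."""
    errors = [f for f in findings if f["severity"] == "error"]
    warnings = [f for f in findings if f["severity"] == "warning"]
    return {
        "error_count": len(errors),
        "warning_count": len(warnings),
        "export_enabled": not errors,            # disabled until errors resolved
        "confirmation_required": not errors and bool(warnings),
    }
```

A clean batch yields `export_enabled` with no confirmation; a single error flips the gate off regardless of how many warnings accompany it.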
Actionable Record-Level Errors with Quick-Fix
Given preflight detects issues When the user views validation results Then each issue includes record identifier (Visit ID), patient name, date of service, field, violated rule, and a human-readable reason And where a deterministic correction exists, a Quick-Fix action is available that opens an inline edit or targeted modal to resolve the issue And after a fix is applied, the system auto re-validates the affected record and updates counts without a full page refresh
Unit Rounding and Code/Modifier Compatibility Enforcement
Given a payer preset defines unit rounding increments and a code–modifier compatibility matrix When preflight validation runs Then rounded units are computed per preset rules and shown alongside original units in the preview And any incompatible code–modifier combinations are flagged as blocking errors with the specific rule reference And the generated export uses rounded units per payer rules, not raw duration
EVV Token Presence, Format, and Placement Verification
Given the payer preset specifies EVV requirements When preflight validation runs Then each visit that requires EVV has a token present in the correct format and the export preview shows it placed in the preset-defined location And visits not requiring EVV per payer/service rules are not flagged for missing tokens And missing or malformed tokens are flagged as blocking errors with a direct link to capture or attach the token
Filename Pattern, Encoding, and Column Order Conformance
Given a payer preset defines filename pattern, encoding, delimiter, headers, and column order When the file is generated Then the filename exactly matches the pattern (including case, zero-padding, and separators) and is unique for the batch And the file encoding, line endings, and delimiter match the preset And columns appear in the exact order with exact header labels defined by the preset, with no extra or missing columns
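Exact-match filename conformance of this kind is naturally expressed as a strict regular expression. The pattern below is an invented example preset (payer code, date stamp, zero-padded batch number), not any real payer's rule:

```python
import re

# Hypothetical preset pattern: PAYERCODE_YYYYMMDD_BATCH###.csv
# Case, zero-padding, and separators must all match exactly.
FILENAME_PATTERN = re.compile(r"^[A-Z]{3,8}_\d{8}_BATCH\d{3}\.csv$")

def filename_conforms(name: str) -> bool:
    """True only when the generated filename matches the preset exactly."""
    return bool(FILENAME_PATTERN.fullmatch(name))
```

Using `fullmatch` (rather than `search`) ensures a stray prefix or suffix fails the check, mirroring the "no extra or missing" requirement applied to columns.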
Batch Export by Date/Payer/Location with Portal-Compatible Outputs
Given the user applies filters for date range, payer(s), and location(s) When the user clicks "Export" Then only records matching all filters are included without duplicates And files are grouped and generated per preset configuration (e.g., one file per payer or per payer-location) And each file is produced in the portal-acceptable format defined by its preset (e.g., CSV, XLSX, JSON) And the batch job shows per-file record counts and validation status, and completes within 60 seconds for up to 10,000 visit records or streams progress with incremental file availability
EVV Placement & Code/Modifier Mapping
"As an operations manager, I want EVV data and codes to map correctly per payer so that submissions are accepted without manual edits."
Description

Configurable mapping layer that translates CarePulse service items into payer-specific billing codes and modifiers, applies payer-defined unit rounding (e.g., 7/8 rounding, minimum thresholds), and injects EVV identifiers at the correct hierarchy per payer. Pulls EVV data from visits and routes, validates completeness, and formats values according to portal schemas to eliminate manual data editing.

Acceptance Criteria
Map Service Items to Payer Codes and Modifiers
Given an active payer preset with mapping rules When a user generates a billing export for that payer Then each CarePulse service item in the export is mapped to the payer’s billing code and up to four modifiers as defined by the rules And conditional mappings based on visit attributes (e.g., discipline, program, location, telehealth flag) are applied as configured And if any service item lacks a mapping, the export is blocked and an error report lists unmapped items by payer, service, and attribute set And the export metadata includes the preset name and version identifier used for mapping
Inject EVV Identifiers at Payer-Specific Hierarchy
Given a payer preset that specifies EVV placement (header-level, line-level, or both) and field names When a user generates a billing export for that payer Then EVV identifiers (e.g., VisitKey, StaffID, ClientID, DeviceID) are inserted exactly in the configured columns/segments and order And header-level EVV batch identifiers are populated when required; otherwise, they are omitted And line-level EVV identifiers are populated per visit line when required And values are formatted per schema (e.g., zero-padded lengths, uppercase, timezone-normalized timestamps) And if a required EVV field is missing, the export fails with a validation error listing visit IDs and missing fields
Apply Payer-Defined Unit Rounding Rules
Given a payer preset with unit calculation settings (base unit length, rounding algorithm, minimum billable threshold, and suppress-below-threshold behavior) When units are computed for each billable line during export Then the rounding algorithm is applied exactly as configured (e.g., 7/8 rounding for 15-minute units) And visits below the minimum threshold are set to zero units or suppressed per configuration, with those visits listed in a suppression report And non-billable time segments (e.g., travel, on-hold) are excluded prior to rounding when configured And example verification for 7/8 rounding with 15-minute units: 1–7 minutes → 0 units; 8–22 minutes → 1 unit; 23–37 minutes → 2 units
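The 7/8 rule in the verification example above can be sketched in a few lines: one unit per complete 15-minute block, plus one more when the remainder is at least 8 minutes (the "8" in 7/8). Function and parameter names here are illustrative:

```python
def round_units_7_8(minutes: int, unit_length: int = 15, threshold: int = 8) -> int:
    """Billable units for a visit duration under '7/8' rounding:
    full units per complete block, plus one unit when the leftover
    minutes meet the threshold."""
    units, remainder = divmod(minutes, unit_length)
    if remainder >= threshold:
        units += 1
    return units
```

This reproduces the spec's table: 1–7 minutes round to 0 units, 8–22 to 1, and 23–37 to 2.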
Validate EVV Data Completeness Before Export
Given visits and routes with captured EVV data When a user initiates export for a payer with defined required EVV fields and tolerances Then the system validates EVV completeness and consistency per payer rules (e.g., clock-in before clock-out, GPS within radius, required attestations present) And any critical validation failure blocks export and produces a downloadable validation report listing visit IDs and failed rules And a successful export includes a validation summary with counts of visits checked and zero critical failures
Conform to Payer Portal File Schema and Naming
Given a payer preset with file schema configuration (column order, headers, delimiter, quoting, encoding, date/time formats, and filename pattern) When an export file is generated Then the file name matches the exact pattern including required tokens (payer code, date stamp, preset version, batch ID when applicable) And the file’s column order, headers, delimiter, quoting, and encoding match the schema exactly And date/time fields are formatted per schema (including timezone and offset rules) and numeric fields respect required precision and padding And field length and character set constraints are enforced; records violating hard constraints are rejected with a clear error, and any configured truncations are logged in the audit trail
Use Latest Active Preset Version and Audit Stamp
Given multiple versions of a payer preset where one is marked Active When a user generates an export without overriding the version Then the latest Active version is applied to mapping, rounding, and EVV placement And the export and audit log include the applied preset version ID, rule effective dates, and a checksum of the ruleset And if the Active version changes between export initiation and generation, the user is prompted to confirm re-running with the new version or proceed with the previously locked version, and their selection is recorded
Agency-Level Overrides & Permissions
"As an account admin, I want to apply controlled overrides to a payer preset so that we can meet local payer quirks while staying compliant."
Description

Controlled override framework allowing agencies to tailor aspects of a preset—such as filename suffixes, column suppression/reordering within allowed bounds, rounding edge-case handling, and default modifier application—without breaking core compliance rules. Includes role-based access control, approval workflow, change diffs, revert-to-default, inheritance for multi-location orgs, and an audit trail of who changed what and when.

Acceptance Criteria
RBAC for Preset Overrides
Given a user with role Agency Admin or Billing Manager opens a payer preset, When they click "Propose Override", Then the override editor opens. Given a user without override permission (e.g., Caregiver) attempts to access the override editor via UI or URL, When the request is made, Then access is denied with HTTP 403 and no draft is created. Given a draft override created by User A, When User A views approval options, Then approval is disabled for that user due to separation-of-duties. Given a draft override pending approval, When a user with Compliance Officer role approves it, Then the override status changes to Active and becomes effective for subsequent exports. Given a draft override pending approval, When it has no approver from an allowed role, Then it cannot transition to Active and the UI indicates the missing approver requirement. Given role assignments are updated, When permissions are refreshed, Then only users with current authorized roles can create, edit, or approve overrides.
Override Bounds Enforcement
Given CarePulse defines allowed override bounds for a payer (e.g., reorder among allowed columns, suppress optional columns only), When a user attempts to add a non-allowed column or remove a required one, Then validation blocks the save and lists each violation. Given EVV column placement is fixed by payer rules, When a user tries to move EVV columns outside allowed positions, Then the change is blocked with a message specifying the required positions. Given rounding policy options are limited to an approved set, When a user selects an unapproved formula, Then the option is rejected on save and the previous policy remains. Given a valid column reorder within allowed bounds, When the change is saved and an export is generated, Then the file reflects the new order and passes built-in compliance checks with zero errors. Given column suppression rules allow only marked optional fields, When a user suppresses an optional field, Then the export excludes that column while preserving required columns and correct data alignment.
Approval Workflow with Diffs and Versioning
Given a user submits an override change, When the draft is created, Then a diff view shows before/after values for each field changed. Given a pending draft, When an approver approves with a comment, Then the draft becomes Active Version N+1, the comment is recorded, and Version N remains available for rollback. Given a pending draft, When an approver rejects with a reason, Then the draft status becomes Rejected and no changes are applied to the Active configuration. Given an Active version, When a user with permission selects "Revert to Default" and the action is approved, Then Version N+1 is created mirroring the system default and becomes Active upon approval. Given a request to revert to a previous version M, When the revert is approved, Then Version M becomes Active and its checksum matches the stored historical record. Given a draft exists, When notifications are configured, Then approvers receive a notification on draft creation and the requester receives a notification on decision.
Multi-Location Inheritance
Given a parent agency with locations L1 and L2, When an override is activated at the parent level, Then L1 and L2 inherit the override and exports from both use the parent settings. Given L1 creates a local override for filename suffix only, When the parent later changes column order, Then L1 keeps its local filename suffix while inheriting the parent's column order. Given a child location has a local override for a field, When the parent changes that same field, Then a conflict indicator appears and the child’s local setting remains effective until reset to inherit. Given a user chooses "Reset to Inherit" at a child location, When confirmed, Then all local fields revert to inherited values and subsequent exports reflect the parent configuration. Given a location scope is selected on export, When the export runs, Then the effective configuration is resolved by location precedence (local override over parent, else parent over default).
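The precedence rule in the last scenario (local override over parent, else parent over default) amounts to layered dictionary merging. A minimal sketch, assuming each level stores only the fields it actually overrides:

```python
def effective_config(defaults: dict, parent: dict, local: dict) -> dict:
    """Resolve a location's export configuration by precedence:
    local override > parent override > system default."""
    resolved = dict(defaults)   # start from system defaults
    resolved.update(parent)     # parent-level overrides win over defaults
    resolved.update(local)      # local overrides win over everything
    return resolved
```

This mirrors the L1 scenario above: a local filename suffix survives while a later parent change to column order is still inherited.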
Default Modifier Application & Exceptions
Given a payer and procedure mapping defines default modifiers, When visits matching the mapping are exported, Then the modifiers auto-populate in the correct column positions. Given a visit meets an approved exception rule, When the exception is flagged, Then default modifiers are suppressed and the export remains compliant. Given a user attempts to suppress a required modifier without an allowed exception, When saving or exporting, Then the system blocks the export and shows the specific compliance rule violated. Given default modifiers are updated via an approved override, When the next export runs, Then the new modifiers appear and the audit trail links the export to the override version. Given conflicting modifier rules exist at parent and child levels, When a child override is active for the same mapping, Then the child rule takes precedence for that location only.
Audit Trail & Reporting
Given any override action (create, edit, approve, reject, revert), When the action occurs, Then an immutable audit record captures user, role, timestamp (UTC), scope (parent/child), fields changed, before/after values, and justification text. Given multiple audit records exist, When a user filters by date range, actor, location, payer, or status, Then only matching records display and counts update accordingly. Given an auditor requests evidence, When a user exports the audit log to CSV, Then the file contains the visible columns, honors current filters, and downloads within 5 seconds for up to 10,000 rows. Given data retention policies, When viewing or exporting audit logs, Then records are read-only and cannot be altered or deleted via the UI. Given an export is generated, When viewing its metadata, Then the export shows the override version ID used and links to the corresponding audit record.
Rounding Edge-Case Configuration
Given the payer allows rounding to 15-minute units with round-half-up, When a visit duration is 7.49 minutes, Then the units round down to 0; When 7.50 minutes, Then units round up to 1. Given the agency selects round-half-even within the allowed set, When exporting boundary durations (e.g., 7.5, 22.5 minutes), Then rounding follows half-even rules and matches predefined test cases. Given a daily unit cap from the payer, When rounding would exceed the cap, Then units are capped at the payer limit and a warning is logged on the export job. Given an invalid rounding option outside the allowed set is selected, When saving the override, Then validation fails with a clear error and the previous policy remains in effect. Given a valid rounding policy is approved, When exports run across time zones, Then rounding behavior is consistent and independent of client time zone settings.
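The half-up versus half-even boundary cases above are exactly what Python's `decimal` module distinguishes; a sketch of a mode-parameterized rounding step (names are illustrative, and `Decimal(str(...))` avoids binary-float surprises at the .5 boundary):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

def round_to_units(minutes, unit_length: int = 15, mode=ROUND_HALF_UP) -> int:
    """Convert a duration in minutes to whole billable units using an
    agency-selected rounding mode from the payer's approved set."""
    units = Decimal(str(minutes)) / Decimal(unit_length)
    return int(units.quantize(Decimal("1"), rounding=mode))
```

This reproduces the spec's boundary cases: 7.49 minutes rounds to 0 units and 7.50 to 1 under half-up, while half-even sends 7.5 (0.5 units) down to 0 and 22.5 (1.5 units) up to 2.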
Audit-Ready Export Logging & Traceability
"As a compliance officer, I want detailed export logs tied to preset versions so that I can prove what was sent and under which rules."
Description

Immutable logging for each export capturing preset name and version, export parameters, user, timestamp, file checksums, generated filenames, validation results, and optional portal submission receipts. Provides search, filters by payer/date/user, and export of logs for audits. Enforces retention policies and ensures every report is traceable back to the exact ruleset used at generation time.

Acceptance Criteria
Immutable Export Log Entry Creation
Given a signed-in user with export permissions uses a Portal Preset to export a payer file When the export job completes successfully Then the system creates exactly one new immutable export log entry containing at minimum:
- export_id (UUID)
- preset_name and preset_version (string)
- export_parameters JSON (including payer, date_range, filters)
- initiated_by user_id and role
- completed_at timestamp in ISO 8601 UTC
- generated_filenames (array)
- file_checksums (SHA-256 per file)
- validation_result (pass/fail) and validation_messages
- environment (prod/sandbox)
- portal_submission_receipt = null
And the log entry stores a cryptographic integrity hash of its payload and cannot be updated after creation.
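One common way to implement the payload integrity hash (a sketch under the assumption that canonical JSON is acceptable; the exact serialization is an implementation choice) is to hash a deterministic rendering of the entry:

```python
import hashlib
import json

def integrity_hash(log_entry: dict) -> str:
    """SHA-256 over a canonical JSON rendering of the log payload.
    Sorted keys and fixed separators make the hash deterministic, so
    any later mutation is detectable by recomputing and comparing."""
    canonical = json.dumps(log_entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

The same entry always hashes to the same value regardless of key order, while changing any logged field (say, `preset_version`) produces a different digest.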
Tamper Prevention and Audit Trail
Given any user (including admins) attempts to modify or delete an export log entry via UI or API When the request is made to change any logged field or remove the entry Then the system rejects the request with HTTP 403 (or the equivalent UI error) and no data is changed And an audit event is recorded with actor, timestamp, action attempted, target export_id, and outcome = blocked And only the automated retention purge process is permitted to delete export logs.
Ruleset Version Traceability
Given a Portal Preset has advanced from version N to version N+1 after an export was logged with version N When a user views the prior export’s log details or retrieves it via API Then the response includes preset_name and preset_version = N and a preset_ruleset_checksum (SHA-256) that uniquely identifies the ruleset used at generation time And these fields are read-only and remain unchanged regardless of later preset updates.
Search and Filter by Payer/Date/User
Given a tenant with at least 10,000 export logs When a user filters by payer, a date range, and initiating user simultaneously and submits the search Then only matching logs are returned, sorted by completed_at descending, with accurate counts And the first page (50 items) loads within 2 seconds under normal load And pagination, empty-state messaging for zero results, and CSV download of the current result set are available.
Audit Log Export (CSV/JSON)
Given a user has a filtered set of export logs (1–5,000 records) When the user exports logs as CSV and then as JSON Then each file contains all required fields: export_id, preset_name, preset_version, payer, date_range, initiated_by, completed_at (UTC), generated_filenames, file_checksums, validation_result, receipt_id (if any), retention_expiry And the filename follows pattern audit-logs_YYYYMMDDTHHMMSSZ_<tenant>.{csv|json} And the system displays a SHA-256 checksum for the exported file and the row count matches the selected records.
Retention Policy and Legal Hold Enforcement
Given a tenant retention policy of 7 years is configured and nightly purge is enabled When a log’s retention_expiry timestamp passes and no legal hold is set Then the purge job removes the log entry and any attached receipts and records a purge audit event (export_id, timestamp, actor=system) And logs under legal hold are not deleted until the hold is removed, after which they are purged on the next cycle And all retention and purge timestamps are stored and displayed in UTC.
Portal Submission Receipt Capture and Linking
Given an export log exists without a portal receipt When a billing user attaches a portal receipt (PDF up to 10 MB) or enters a receipt ID and submission timestamp Then the system appends a receipt record linked to the export_id, captures filename and SHA-256 checksum (if a file), stores submitted_by and submitted_at (UTC), and marks the log’s receipt_id field accordingly And the original export log remains immutable; only the new receipt record is added And receipts are searchable by receipt_id and included in audit log exports.

Denial Shield

A preflight validator that runs payer-specific checks before you export, catching missing auth numbers, unit mismatches, modifier order, visit overlaps, and EVV window gaps. It explains why each issue fails and offers one-tap fixes or safe defaults, boosting clean-claim rates and slashing avoidable rejections.

Requirements

Payer Rules Library & Versioning
"As a billing manager, I want a centralized, versioned library of payer rules so that claims are validated against the latest requirements without code changes."
Description

A centralized, versioned repository of payer-specific validation rules (e.g., required auth numbers, unit caps, modifier order, diagnosis pairings, place-of-service constraints, EVV tolerance windows, overlap policies) that Denial Shield uses to evaluate claims prior to export. Supports rule scoping by payer, plan, state, and effective dates with rollback, draft/publish workflow, and change history. Includes a rule editor with validation, import/export (JSON/CSV), and a test harness with sample claims to verify rule behavior before publishing. Integrates with CarePulse patient profiles, visit notes, and export modules to pull necessary data fields at validation time.

Acceptance Criteria
Rule Scoping by Payer, Plan, State, Effective Dates
Given a new rule with scope {payer A, plan B, state C, effective start S, effective end E} When it is saved as Draft Then it is stored with those attributes and excluded from runtime evaluations. Given two Published rules with different scopes When validating a claim that matches one scope Then only the matching rule(s) are evaluated and non-matching rules are not applied. Given a publish attempt that would create overlapping effective date ranges for the same rule identity and identical scope When publishing Then the publish is blocked with a validation error describing the overlapping window.
Version Selection by Service Date and Published Status
Given multiple versions of a rule with distinct effective date ranges and Published status When validating a claim with service date D Then the version whose effective window includes D is used exclusively. Given no Published version covers service date D When validating Then the validator records a machine-readable warning and does not apply that rule. Given a claim requires fields from CarePulse patient profiles, visit notes, and export modules When validating Then the validator reads those fields in a read-only manner; if a required field is missing, it emits a specific error naming the field and does not mark the rule as passed.
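The version-selection logic above (use exclusively the Published version whose effective window covers the service date, else skip the rule with a warning) can be sketched as follows; the dictionary keys and the open-ended-window convention (`effective_end = None`) are assumptions for illustration:

```python
from datetime import date
from typing import Optional

def select_rule_version(versions: list, service_date: date) -> Optional[dict]:
    """Return the Published version whose effective window includes the
    claim's service date, or None when no Published window matches
    (the caller then records a warning and does not apply the rule)."""
    for v in versions:
        if v["status"] != "Published":
            continue  # Drafts are never used at runtime
        if v["effective_start"] <= service_date and (
            v["effective_end"] is None or service_date <= v["effective_end"]
        ):
            return v
    return None
```

Because publishing blocks overlapping windows for the same rule identity and scope, at most one version can match a given service date.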
Draft/Publish Workflow and Approvals
Given a rule in Draft When Publish is requested Then the system requires a change summary, passes schema/reference checks, and all attached test harness cases pass before allowing publish. When a rule is Published Then it becomes available to Denial Shield evaluations within 60 seconds and Draft versions remain excluded from runtime. Given a user without publish permission When they attempt to Publish Then the action is denied with an authorization error.
Change History and One-Click Rollback
When any rule version is created, edited, published, or rolled back Then an immutable audit record is stored with actor, timestamp, action, previous/new values, and change summary. Given a previously Published version V When a user selects Rollback to V Then a new Published version V' identical in content to V is created with a new version number, V' becomes active immediately, and the previously active version is marked as superseded; all events appear in history.
Rule Editor Validation and Data Dictionary Checks
Given a rule definition When saving or publishing Then schema validation runs and fails on missing required fields, unknown operators, or malformed expressions with line/column pointers. Given field references in a rule When validating Then each field is checked against the CarePulse data dictionary; unknown or deprecated fields block publish with a clear message and suggested alternatives. Given non-blocking warnings (e.g., unused variable) When saving as Draft Then the save succeeds with warnings; when publishing, warnings must be acknowledged before Publish can proceed.
Import/Export (JSON/CSV) with Round-Trip Fidelity
When exporting selected rules to JSON and CSV Then the files include scope, logic, metadata, test cases, and version info sufficient for re-import, and PHI is excluded. Given an exported rule file from version X When re-imported Then the resulting Draft(s) are semantically identical to X (excluding system-generated IDs/timestamps), and a diff view shows no logical differences. Given an import that would create overlapping effective dates for the same rule identity and scope When importing Then the import is allowed only as Draft and Publish is blocked until overlaps are resolved.
Test Harness with Sample Claims and Publish Gating
Given one or more sample claims with expected outcomes attached to a rule When the test harness is run Then each outcome is reported as Pass/Fail with evaluated rule messages, and results are stored with timestamps. Given a rule with attached tests When Publish is requested Then Publish is blocked unless 100% of attached tests pass on the target version. Given a failing harness result When the user opens the failure detail Then the UI shows evaluated inputs, matched rule paths, and failure reason to facilitate debugging.
Preflight Validation on Export
"As an operations manager, I want preflight validation to run automatically on export so that avoidable denials are caught and resolved before submission."
Description

When a user initiates a claim export (e.g., 837P/837I, UB-04, CSV), Denial Shield runs a preflight pass that executes the relevant payer rules across all selected visits/claims. The validator categorizes findings by severity (Errors block export; Warnings allow export with notice), supports batch processing, and returns results within defined SLAs (e.g., under 5 seconds for 500 claims) with progress feedback. Users can drill into claim-level details, apply fixes, and re-run validation without leaving the export flow. Clean-claim rate and common failure trends are summarized to guide operational improvements.

Acceptance Criteria
Auto-Triggered Preflight on Export Start
Given a user selects one or more claims and an export format (837P, 837I, UB-04, or CSV) When the user clicks Export Then Denial Shield runs preflight validation on all selected claims before any export file is generated And a progress UI appears within 500 ms showing 0/N processed And no export artifact is created until validation completes with zero Errors across the selected claims (Warnings permitted)
Payer-Specific Rule Application and Versioning
Given a batch contains claims mapped to multiple payers and lines of business When validation runs Then each claim is evaluated only against its mapped payer rule pack and line-of-business variant And the applied rule pack name and version (e.g., "PayerX TX HH v3.4") are stored with the validation result and visible in claim details And if a claim’s payer has no configured rules, the Default rule pack is applied and a Warning "No payer-specific rules configured" is returned
Severity Categorization and Export Gating
Given validation findings exist for the current selection When any selected claim has an Error severity finding Then the Export action is disabled and displays the total count of blocking Errors And claims with only Warnings are eligible for export after the user acknowledges a single confirmation control And if the user deselects all claims with Errors, Export becomes enabled for the remaining selection And selecting "Go to first error" navigates to the first failing claim detail within the export flow
Performance SLA and Progress Feedback
Given a batch of 500 claims with average rule complexity When validation runs under standard load Then 95% of runs complete in under 5 seconds and 99% under 8 seconds (measured server-side per run) And the progress UI updates at least every 250 ms with processed/total counts and ETA after the first second And if a single-claim validation exceeds 2 seconds, the UI indicates "Taking longer than expected" while continuing And if validation does not complete within 30 seconds, the run fails gracefully with a retry option and no export artifact is created
Claim-Level Explanation and One-Tap Fixes
Given a claim has a failing rule (e.g., missing auth number, unit mismatch, modifier order, visit overlap, EVV gap) When the user opens the claim’s validation details Then the UI shows a plain-language explanation, rule identifier, impacted fields, and computed values that triggered the failure And an "Apply Fix" control is available where a safe default or deterministic fix exists; selecting it updates the draft claim and logs the change And an "Undo" control reverts the last applied fix on that claim And "Re-run Validation" revalidates only the edited claim and updates its status without leaving the export flow And successful revalidation removes the finding and updates batch counts in real time
Accuracy of EVV, Overlap, Units, and Modifiers Checks
Given two visits for the same patient and rendering provider overlap by more than 0 minutes on the same date When validation runs Then an Error "Visit overlap" is returned referencing both visit IDs and time ranges. Given EVV records start at 09:05 and end at 09:35 and the visit is 09:00–10:00 with a payer tolerance of ±5 minutes When validation runs Then a Warning "EVV gap 25 minutes" is returned referencing the gap duration and tolerance. Given billed units differ from payer-calculated units derived from documented time per payer rounding rule When validation runs Then an Error "Unit mismatch" is returned showing billed vs calculated units and the rounding rule applied. Given HCPCS modifiers are present but not in payer-required order When validation runs Then an Error "Modifier order" is returned with the required order and a one-tap "Reorder" fix available
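The overlap and EVV-window checks above reduce to simple interval arithmetic. A minimal sketch (function names and the per-edge tolerance interpretation are assumptions; the spec's 09:00–10:00 visit with 09:05–09:35 EVV yields the 25-minute end-of-visit gap it describes):

```python
from datetime import datetime

def visits_overlap(start_a, end_a, start_b, end_b) -> bool:
    """Two visits overlap when each starts before the other ends;
    back-to-back visits sharing a boundary instant do not overlap."""
    return start_a < end_b and start_b < end_a

def evv_gaps(visit_start, visit_end, evv_start, evv_end, tolerance_min: int = 5) -> list:
    """Minutes of mismatch at each visit edge that exceed the payer
    tolerance; an empty list means EVV falls within the window."""
    start_gap = (evv_start - visit_start).total_seconds() / 60
    end_gap = (visit_end - evv_end).total_seconds() / 60
    return [round(g) for g in (start_gap, end_gap) if g > tolerance_min]
```

In the spec's example the 5-minute late start sits inside the ±5-minute tolerance, so only the 25-minute early clock-out is reported.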
Clean-Claim Rate and Trends Summary
Given a validation run completes When the summary view loads Then it displays the clean-claim rate = (claims with zero Errors ÷ total claims) as a percentage rounded to one decimal place And the top 5 failure categories across the last 30 days are listed with counts and percent of total failures, filterable by payer and date range And selecting a category filter updates the list and related claims within 300 ms And a "Download CSV" action exports the current failures list with claim IDs, payer, category, and rule identifiers
One-Tap Fixes & Safe Defaults
"As a biller, I want one-tap fixes for common validation failures so that I can resolve issues quickly without manually editing multiple records."
Description

For each failed check, provide context-aware resolutions that can be applied with a single tap or in bulk, with preview and undo. Examples include inserting modifiers in the correct payer-specific order, allocating units within authorized caps, selecting safe default values when allowed, splitting overlapping visits, and adjusting EVV timestamps within permitted tolerances. Enforce role-based permissions, confirm irreversible changes, and write all modifications back to the appropriate CarePulse records while preserving data integrity and triggering a revalidation.

Acceptance Criteria
One-Tap Modifier Reorder (Payer-Specific Rules)
Given a claim line fails a payer-specific modifier order check and displays a Fix option When the user taps Apply on the Fix preview Then the system updates the modifier sequence to match the payer rule, writes the change to the claim line, creates an audit entry (user, timestamp, before/after, check ID), and triggers automatic revalidation And the original check status changes to Pass after revalidation And an Undo control is available for 10 minutes to revert the change and re-run revalidation And if multiple claim lines have the same failure and the user selects Apply to all similar, the system applies the same correction to all selected lines and reports the count of successful updates and any failures
Bulk Unit Auto-Allocation Within Authorization Caps
Given multiple visits/claim lines are flagged for unit overages against an authorization cap and are selected in bulk When the user taps Auto-allocate in the Fix preview Then the system reallocates units so that no line exceeds the remaining authorized units, preserves integer/decimal precision per payer config, and shows a before/after summary of units and remaining balance And the updates are written to the visit/claim records and authorization utilization ledger in a single atomic transaction; on any error, no records are updated and an error summary is displayed And an audit entry per record is created and automatic revalidation runs for all impacted items And any items that cannot be auto-allocated (e.g., insufficient remaining units) are returned with a clear reason, and are not modified And the user can Undo the bulk operation to restore all impacted records and revalidate again
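The cap-respecting allocation can be sketched as a greedy pass over the selected lines. Names (`auto_allocate`, `billed_units`) are hypothetical, and the shipping feature wraps this logic in an atomic transaction with audit entries and undo, which the sketch omits:

```python
def auto_allocate(lines, remaining_units):
    # Greedily cap each line at the remaining authorized units.
    # Lines that cannot receive any units are skipped with a reason
    # and left unmodified, mirroring the criterion above.
    allocated, skipped = [], []
    for line in lines:
        if remaining_units <= 0:
            skipped.append({**line, "reason": "insufficient remaining units"})
            continue
        granted = min(line["billed_units"], remaining_units)
        remaining_units -= granted
        allocated.append({**line, "billed_units": granted})
    return allocated, skipped
```

For example, two lines billing 4 and 5 units against 6 remaining authorized units come back as 4 and 2; with 0 units remaining, both are skipped with a reason.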
Safe Default Value Application With Preview and Undo
Given a failed check allows a payer-approved safe default (e.g., place of service, billing note, or indicator) and the Fix preview shows the proposed default and source policy When the user taps Apply Then the system sets the field to the safe default, marks the record with a default-applied flag and reason code, writes the change, and triggers automatic revalidation And the check passes if the payer rule is satisfied And an Undo option is available for 10 minutes to restore the prior value and revalidate And if multiple similar failures are selected, Apply to all similar applies the default to all eligible records and reports successes and skips (with reasons)
One-Tap Split for Overlapping Visits
Given two or more visits for the same caregiver and member overlap and a Split overlap Fix is available When the user taps Apply on the proposed split preview (showing new start/end times and units) Then the system creates non-overlapping visit segments that meet configured minimum duration constraints; if constraints cannot be met, the Fix cannot be applied and a reason is shown And the original visit(s) are updated/superseded, new segments are persisted with linkage to the source visit, and units are recalculated accordingly And an audit trail records the split operation (before/after segments, user, timestamp, check ID) And automatic revalidation runs so that the overlap check passes and any downstream duration/EVV checks are re-evaluated And the user can Undo the split to restore the original visit(s) and revalidate
EVV Timestamp Adjustment Within Allowed Tolerance
Given an EVV-related failure shows clock-in/out outside the payer’s allowed tolerance window and the preview proposes adjusted timestamps within configured limits When the user provides a required edit reason (if configured) and taps Apply Then the EVV events are shifted only within the allowed tolerance, changes are saved to EVV records with the edit reason, and automatic revalidation runs And the check passes if the adjusted times meet the tolerance rule And if the proposed adjustment would exceed the allowed window, Apply is disabled and a clear message explains why And an Undo is available for 10 minutes to restore original timestamps and revalidate
Role-Based Permissions and Irreversible Change Confirmation
Given role-based permissions are configured for each Fix type (e.g., EVV edits, visit splits, unit allocation) When a user attempts a Fix that requires a permission they lack Then the Fix action is blocked, Apply is disabled, and an explanatory message indicates the required role/permission And when a permitted user executes a Fix that has irreversible effects (e.g., creates new visit segments) Then a confirmation dialog summarizes impacted records and irreversible aspects and requires explicit confirmation before proceeding And all permission checks and confirmations are captured in the audit log
Data Persistence, Integrity, and Auto-Revalidation
Given any Fix is applied (single or bulk) When the system writes modifications to CarePulse records (claims, visits, EVV, authorizations) Then operations are executed as an atomic transaction; on failure, all changes are rolled back and a unified error summary is shown And all modified records pass schema and business integrity checks (e.g., non-negative units, no new overlaps, valid timestamps) And an audit entry is created per record with user, timestamp, fields changed, before/after values, and source check ID And Denial Shield automatically revalidates impacted checks and updates their statuses in the UI And the UI reflects the new pass/fail counts without requiring a page refresh
Explainable Errors with Policy Links
"As a compliance lead, I want clear explanations with policy references so that staff understand why an issue fails and how to correct it confidently."
Description

Each validation issue includes a plain-language explanation, the fields involved, the rule that fired, and guidance on how to fix it. Provide links or citations to payer policies, effective dates, and internal rule IDs for traceability. Highlight impacted data in context (e.g., visit notes, patient insurance, authorization records) and show data provenance. Group similar issues, support inline tooltips, and allow downloading a summary of failed checks with references for internal reviews or payer audits.

Acceptance Criteria
Plain-Language Error With Rule, Fields, and Fix
Given a failed preflight check exists for a claim When the user opens the issue detail panel Then the error explanation is presented in plain language at Flesch–Kincaid grade ≤ 10 and ≤ 280 characters And the impacted fields are listed by display label and internal field keys And the fired rule is shown with human-readable title, internal rule ID, and rule version And the UI provides a one-tap safe fix or step-by-step guidance with a preview and cancel option And no placeholder or generic messages (e.g., "Unknown error") are displayed
Payer Policy Links and Effective Dates
Given an issue is governed by a payer policy When the issue detail is displayed Then a clickable URL or citation to the payer policy is shown with policy name, section, and effective date range And the link opens successfully (HTTP 200) or shows an offline cached citation with cache date if connectivity fails And the policy version used matches the service date and payer; if out of effective range, an "out-of-date policy" warning is shown And the internal rule ID lists its mapped policy citation(s) and last verification timestamp
Impacted Data Highlighting and Provenance
Given an issue references specific data When the user views context for the issue Then all impacted data are highlighted in-line within their native contexts (visit note, patient insurance, authorization, EVV log) And each highlighted field shows provenance: source system/type, creator user ID, creation timestamp, and last modified timestamp And non-editable fields are clearly marked read-only with a reason; editable fields support direct correction from this view with audit trail And the audit trail records old value, new value, editor, timestamp, and rule ID for every change initiated via the issue
Similar Issue Grouping and Batch Actions
Given multiple issues of the same type exist across records When the user opens the issues list Then similar issues are grouped under a collapsible header showing type, scope, and total count And duplicate issues on the same field/record are deduplicated to a single entry And the user can apply a batch safe default/fix to the entire group after reviewing a preview of affected records and confirming And batch execution reports per-item success/failure with reasons and does not mask partial failures And the expand/collapse state of groups persists for the user session
Inline Tooltips With Contextual Help
Given a field is flagged by a validation rule When the user hovers, focuses, or taps the info icon Then a tooltip appears containing: rule summary, rule ID, policy citation(s), and ≤ 160-character fix guidance And the tooltip positions to avoid obscuring the target input and remains fully within the viewport on mobile and desktop And the tooltip is keyboard- and screen-reader-accessible (ARIA role=tooltip, reachable by Tab, dismissible via Esc) And tooltip content matches the corresponding issue detail content
Downloadable Audit Summary With References
Given one or more failed checks are present When the user selects Download Summary Then a report is generated within 5 seconds containing: validation run ID, issue IDs, rule IDs/titles, policy links/citations with effective dates, impacted entities, field names/values (redacted where sensitive), guidance, and resolution status And the user can choose CSV or PDF; the file includes branding, generation timestamp, and environment label And all URLs in the report are clickable and resolve (or include cached excerpts with cache date when offline) And the report includes a checksum and is reproducible by re-running the same validation snapshot
EVV & Overlap Detection
"As a scheduler, I want automatic EVV and overlap checks so that visits comply with payer timing rules and do not trigger denials."
Description

Detect and flag EVV window gaps, clock-in/out anomalies, and caregiver/patient visit overlaps according to payer and state rules. Normalize timestamps across time zones and daylight saving changes, de-duplicate late EVV uploads, and reconcile mobile and IoT sensor data. Provide suggested remediations such as splitting visits, adjusting to nearest valid window within tolerance, or marking exceptions with required notes. Integrates with CarePulse scheduling and route data to ensure consistency and prevent double-billing.

Acceptance Criteria
EVV Window Gap Detection and Auto-Remediation
Given a scheduled visit with payer EVV window rules and a configured tolerance_minutes, When preflight validation runs, Then any EVV clock-in/out falling outside the allowed window by <= tolerance_minutes is flagged EVV_WINDOW_GAP with severity Warning and a one-tap "Adjust to nearest valid boundary" fix is offered. Given EVV timestamps fall outside the allowed window by > tolerance_minutes or either clock-in or clock-out is missing, When preflight validation runs, Then the visit is flagged EVV_WINDOW_GAP with severity Blocker, export is blocked, and a "Mark Exception" fix requiring payer-specific note fields is available. Given a user applies the "Adjust to nearest valid boundary" fix, When the fix is confirmed, Then visit start/end are updated, duration is recomputed, route conflicts are revalidated, and an audit entry with before/after timestamps, actor, and reason code is recorded.
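The severity tiers above (inside the window, within tolerance, beyond tolerance, missing clock event) reduce to a small decision function; a sketch with assumed parameter names:

```python
def classify_evv_gap(gap_minutes, tolerance_minutes,
                     clock_in_present=True, clock_out_present=True):
    # Map an EVV window gap to the severities described above.
    if not (clock_in_present and clock_out_present):
        return "Blocker"     # missing clock-in/out always blocks export
    if gap_minutes == 0:
        return None          # inside the allowed window: no flag
    if gap_minutes <= tolerance_minutes:
        return "Warning"     # eligible for one-tap boundary adjust
    return "Blocker"         # export blocked; exception note required
```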
Clock-In/Out Anomaly Detection
Given clock-in equals clock-out or computed duration < payer_min_billable_minutes, When preflight validation runs, Then flag EVV_ANOMALY_DURATION with severity Blocker and offer fixes: "Snap to scheduled start/end" or "Mark Exception" (with required notes). Given clock-out precedes clock-in after timezone normalization, When preflight validation runs, Then flag EVV_ANOMALY_ORDER with severity Blocker and offer "Swap timestamps" if within tolerance_minutes; otherwise require Exception. Given EVV location indicates caregiver was outside the required geofence at clock-in or clock-out and the payer mandates geofenced EVV, When preflight validation runs, Then flag EVV_GEOFENCE_MISMATCH with severity as configured per payer and require a note if overridden.
Caregiver and Patient Visit Overlap Detection
Given the same caregiver has two visits overlapping by > allowed_overlap_minutes, When preflight validation runs, Then flag VISIT_OVERLAP_CG with severity Blocker, block export, and offer a "Split visit" fix that creates non-overlapping segments aligned to scheduled visits. Given the same patient has overlapping visits with multiple caregivers and the payer disallows concurrent billing, When preflight validation runs, Then flag VISIT_OVERLAP_PT with severity Blocker and block export until resolved via split, reassignment, or exception per payer rules. Given a user applies "Split visit", When confirmed, Then resulting visits inherit payer, modifiers, and tasks; EVV events are partitioned to the new segments; durations are recalculated; and both visits revalidate with no overlaps.
Time Zone and Daylight Saving Normalization
Given EVV events are recorded across different time zones or span a daylight saving time shift, When preflight validation runs, Then all timestamps are normalized to UTC for computation and displayed/exported in the payer-required local time, preventing false negatives/positives due to conversion. Given a visit spans a DST fallback hour, When preflight validation runs, Then duration is computed using absolute elapsed time and flagged DST_SPAN only where payer requires annotation, with a quick action to add the note. Given the agency time zone setting is changed, When preflight validation reruns, Then validation results remain stable because immutable UTC values are used, and audits show original device offsets.
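Computing duration from absolute elapsed time rather than wall-clock arithmetic is the key to DST safety; a sketch using Python's `zoneinfo` (the function name is illustrative):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def elapsed_minutes(start_local, end_local):
    # Convert both timestamps to UTC first, so a DST fallback hour
    # does not shrink or inflate the computed visit duration.
    start_utc = start_local.astimezone(timezone.utc)
    end_utc = end_local.astimezone(timezone.utc)
    return int((end_utc - start_utc).total_seconds() // 60)
```

During the 2025 US fall-back (Nov 2), a visit from 01:30 EDT to 01:30 EST is a full 60 elapsed minutes even though the wall clocks read the same time; naive local subtraction would report zero.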
Late EVV Upload De-duplication
Given multiple EVV events for the same visit share caregiver_id, patient_id, event_type, and timestamp within duplicate_window_seconds, When preflight validation runs, Then duplicates are flagged EVV_DUPLICATE with severity Info, a single canonical event is retained, and duplicates are excluded from validation and export. Given two EVV events differ by <= jitter_seconds and share device_id or sensor_id, When preflight validation runs, Then they are treated as duplicates per heuristic, merged, and an audit record stores source event IDs and merge reason. Given a user chooses to unmerge a duplicate from the audit view, When confirmed, Then the event is restored and all related validations are recomputed.
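The duplicate heuristic (same caregiver, patient, and event type within `duplicate_window_seconds`) can be sketched as below; retaining the earliest event as canonical is an assumption for illustration, not a stated rule, and timestamps are simplified to epoch seconds:

```python
def dedupe_evv(events, duplicate_window_seconds=30):
    # Collapse repeated EVV uploads that share caregiver, patient,
    # and event type and land within the duplicate window; the
    # earliest event is kept as canonical in this sketch.
    events = sorted(events, key=lambda e: e["ts"])
    canonical, dupes = [], []
    for ev in events:
        match = next(
            (c for c in canonical
             if (c["caregiver_id"], c["patient_id"], c["event_type"]) ==
                (ev["caregiver_id"], ev["patient_id"], ev["event_type"])
             and abs(ev["ts"] - c["ts"]) <= duplicate_window_seconds),
            None,
        )
        (dupes if match else canonical).append(ev)
    return canonical, dupes
```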
Mobile/IoT Reconciliation and Source of Truth
Given mobile EVV and IoT sensor readings exist for the same visit with start/end differences <= reconciliation_tolerance_minutes, When preflight validation runs, Then the configured source_of_truth (mobile|iot|latest) is applied, the secondary source is retained for audit, and a RECON_APPLIED reason is recorded. Given differences exceed reconciliation_tolerance_minutes, When preflight validation runs, Then flag EVV_SOURCE_CONFLICT with severity as configured per payer and offer fixes: "Use Mobile", "Use IoT", or "Mark Exception" with required notes. Given a reconciliation fix is applied, When confirmed, Then schedule, route, and export reflect the selected timestamps; EVV and visit durations are recalculated; and an immutable audit entry stores both source values, user, timestamp, and reason.
Audit Log & Compliance Reporting
"As a compliance officer, I want detailed audit logs and exportable reports so that we can prove due diligence and defend claims during audits."
Description

Maintain an immutable audit trail for each validation run and applied fix, capturing user, timestamp, original and updated values, associated rule version, and reason. Generate one-click, audit-ready reports (PDF/CSV) that summarize checks performed, failures found, actions taken, and residual warnings, suitable for internal QA and external payer audits. Expose filters by payer, date range, rule, user, and outcome, and enforce retention policies aligned with regulatory requirements.

Acceptance Criteria
Immutable Audit Log for Validation Runs and Fixes
- On completion of each validation run, the system creates exactly one audit header record with fields: validation_run_id, triggered_by (user_id or system), timestamp_utc (ISO 8601), payer_id, ruleset_version, input_parameters, outcome_summary.
- For each manual one-tap fix or auto-fix, the system creates an audit detail record with: entity_type, entity_id, field_name, original_value, updated_value, reason_code, reason_text, performed_by (user_id or automation actor), timestamp_utc, ruleset_version, validation_run_id (correlation), outcome.
- Audit storage is append-only: no update or delete is possible via UI or API; attempts return 403 and are themselves logged with user_id and timestamp_utc.
- Each audit record includes content_hash and previous_hash to enable tamper detection; a daily integrity check job verifies the chain and records Pass/Fail status.
- All timestamps are stored in UTC; UI may display localized times but raw stored values remain UTC.
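The `content_hash`/`previous_hash` chain can be sketched with SHA-256. The payload layout (sorted JSON plus the previous hash) is an assumption; the spec only requires that tampering be detectable:

```python
import hashlib
import json

def append_audit(chain, record):
    # Append-only entry: hash covers the record content plus the
    # previous entry's hash, forming a tamper-evident chain.
    previous_hash = chain[-1]["content_hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True) + previous_hash
    entry = {**record,
             "previous_hash": previous_hash,
             "content_hash": hashlib.sha256(payload.encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify_chain(chain):
    # Daily integrity check: recompute each hash and confirm linkage.
    prev = "0" * 64
    for entry in chain:
        record = {k: v for k, v in entry.items()
                  if k not in ("content_hash", "previous_hash")}
        payload = json.dumps(record, sort_keys=True) + prev
        if (entry["previous_hash"] != prev or
                entry["content_hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev = entry["content_hash"]
    return True
```

Mutating any earlier record breaks its recomputed hash, so the daily check fails without needing to trust the storage layer.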
One-Click Audit-Ready Report Generation (PDF/CSV)
- Given any active filters (or none), when a user clicks "Generate Report," then a PDF and a CSV are produced within 15 seconds for result sets ≤ 10,000 rows; larger sets trigger an async job with completion notification.
- Reports include a header with: organization name, payer(s), date range, ruleset_version(s), generated_by, generated_at_utc, and a unique report_id.
- Summary section contains counts: validations_run, checks_performed, failures_found, fixes_applied (manual vs auto), residual_warnings.
- Detail section lists each failure/action with columns: validation_run_id, entity_id, rule_id, ruleset_version, reason_code, reason_text, original_value, updated_value, performed_by, timestamp_utc, outcome.
- File naming convention: CarePulse_DenialShield_Audit_{payerOrAll}_{YYYYMMDD-YYYYMMDD}_{report_id}.pdf/csv; PDF is paginated with page X of Y and includes a footer confidentiality notice.
Filterable Audit Views by Payer, Date Range, Rule, User, and Outcome
- UI exposes filters: payer (multi-select), date range (absolute + quick presets), rule (by id/name and version), user (searchable), and outcome (Pass, Fail, Fixed, Warning).
- Filters are combinable; results reflect the intersection of selected values; an empty-state message appears for zero matches.
- Default date range is last 30 days; the system persists the user's last-used filters across sessions.
- Applying or changing filters returns first-page results within 3 seconds for datasets up to 50,000 records using server-side pagination.
- Exports (PDF/CSV) honor active filters exactly; exported row count matches the UI-reported count for the same filter set.
Retention Policy Enforcement and Legal Hold Controls
- A global default retention period (e.g., 7 years) is enforced; payer-specific overrides can be set but not below the configured global minimum; invalid configurations are blocked with a clear error message.
- A daily purge job permanently deletes audit records older than the effective retention period; the job creates its own audit entry summarizing records purged (counts, date range, payer scope).
- Compliance Admins can place or remove legal holds scoped by payer, rule, user, or tag; records under legal hold are excluded from purge until the hold is removed; hold changes are audited.
- Optional "export-before-purge" generates and stores a final CSV/PDF bundle prior to deletion and links the bundle in the purge audit entry.
- The retention settings UI displays effective retention values, next scheduled purge time, and last purge outcome (Pass/Fail) with timestamp_utc.
Tamper Evidence and Access Control for Audit Data
- Only users with roles Compliance Admin or Operations Manager can view or export audit logs; other roles receive 403 Forbidden; all access attempts (allowed or denied) are logged with user_id and timestamp_utc.
- No edit/delete controls or endpoints exist for audit records; any correction requires a new fix operation that creates a new audit entry linked by correlation_id.
- A daily integrity verification alerts Compliance Admins on any hash-chain mismatch and records an incident entry with impacted record ids.
- Report downloads use signed URLs over TLS 1.2+ that expire within 15 minutes; expired links return 410 Gone.
- Search/view/export actions capture and store the filters used, ensuring reproducibility of accessed views.
Export Traceability to Underlying Audit Entries
- Every generated report has a unique report_id; each line item includes validation_run_id and fix_id (if applicable) that resolve to specific audit records.
- In the UI, clicking validation_run_id or fix_id in a report navigates to the corresponding audit detail view without loss of active filter context.
- CSV exports include stable identifiers: report_id, validation_run_id, fix_id, entity_id, rule_id, ruleset_version; identifiers are consistent across UI, API, and export files.
- When a fix is reverted or superseded, a new audit entry is created with relation_type referencing prior fix_id; the fix history view displays the complete chain in chronological order.
- An authenticated API endpoint /audit/{id} returns the full JSON payload for an audit entry, including content_hash and previous_hash, subject to role-based access control.

Smart Attach

Automatic attachment bundling that assembles the exact supporting documents each payer demands—visit notes, orders, signatures, EVV logs—sequenced and named per portal rules. It can merge into a single PDF or split files as required, apply redactions, and compress for size limits, ensuring submissions are complete and compliant.

Requirements

Payer Rules Engine
"As a billing specialist, I want to select a payer and have the system automatically know which documents, names, order, and format are required so that every submission matches portal rules without manual rework."
Description

Configurable engine to define and manage per-payer attachment requirements, including required document types (visit notes, orders, signatures, EVV logs, IoT readings), sequencing, file naming templates, merge/split logic, accepted formats, maximum file sizes, redaction policies, and delivery channels. Provides a self-serve admin UI with versioning, effective dates, cloning, validation tests on sample claims, and change logs. Integrates with CarePulse data models to map document sources and with Smart Attach assembly to evaluate rules at runtime, ensuring every package conforms to payer-specific portal rules.

Acceptance Criteria
Define Required Docs & Sequencing for Payer
Given I am an admin creating a new rule for payer "Acme Health" When I select required document types including Visit Note, Physician Order, Patient Signature, EVV Log, and IoT Readings and set an explicit sequence Then the rule saves successfully with the exact required set and sequence persisted And saving a rule with a missing sequence or duplicate sequence index is rejected with a validation error And saving with zero required document types is rejected with error "At least one document type is required"
Attachment Assembly Compliance (Merge/Split, Naming, Formats, Size)
Given a payer rule that specifies a single merged PDF for Visit Note and EVV Log and a separate TIFF for Physician Order, with file naming template "{memberId}_{dosStart}_{docType}_{seq}" and max file size 10 MB per file When the engine assembles attachments for a claim with memberId "M123" and DOS 2025-09-01 Then it outputs exactly two files matching the rule And the merged PDF contains the Visit Note and EVV Log in the defined sequence And the Physician Order is output as TIFF And filenames match the template, contain no illegal characters, and do not exceed the payer's max filename length And each file is compressed as needed to be <= 10 MB without altering visual content And if compression cannot meet size, the process fails with error code "SIZE_LIMIT_EXCEEDED" and lists the offending file
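Applying the naming template might look like the sketch below; the illegal-character set and length cap are placeholder values for illustration, not Acme Health's actual portal limits:

```python
def render_filename(template, ctx, max_len=64, illegal='<>:"/\\|?*'):
    # Fill the payer naming template, strip characters the portal
    # rejects, and enforce the filename length cap.
    name = template.format(**ctx)
    name = "".join(ch for ch in name if ch not in illegal)
    if len(name) > max_len:
        raise ValueError(f"filename exceeds payer limit: {name}")
    return name
```

With the template and claim from the criterion, `render_filename("{memberId}_{dosStart}_{docType}_{seq}", ...)` yields a name like `M123_2025-09-01_VisitNote_1`.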
Redaction Policy Application & Audit
Given a payer rule that requires redaction of SSN and GPS coordinates When attachments are generated Then all SSN patterns and GPS coordinates are redacted per policy masks And the exported package contains only redacted documents And an internal unredacted copy remains accessible only to authorized roles And a redaction audit log is recorded with payer, claim ID, rule version, redaction fields, timestamp, and actor
Versioning & Effective Date Resolution
Given two versions of a payer rule exist: v1 effective through 2025-08-31 and v2 effective from 2025-09-01 When evaluating a claim with service date 2025-08-30 Then v1 is applied When evaluating a claim with service date 2025-09-05 Then v2 is applied And editing an active rule creates a new draft version without mutating prior versions And only one version per payer is active for any given effective date range, with automatic overlap validation and rejection on save if overlaps exist
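Effective-date resolution is a range lookup over non-overlapping versions; a sketch assuming each version record carries `effective_from`/`effective_to` dates, with `None` meaning open-ended:

```python
from datetime import date

def resolve_version(versions, service_date):
    # Pick the single rule version whose effective range covers the
    # service date; overlap validation at save time guarantees at
    # most one version matches.
    for v in versions:
        if v["effective_from"] <= service_date and (
                v["effective_to"] is None or service_date <= v["effective_to"]):
            return v["version"]
    return None
```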
Data Source Mapping to CarePulse Models
Given the admin maps document types to CarePulse data sources (e.g., Visit Note -> visit.note.pdf, EVV Log -> evv.log.pdf, Patient Signature -> signature.capture.png) When a rule is saved Then the mappings are validated to ensure each required document type has a resolvable source And at runtime the engine successfully locates and retrieves the mapped sources for assembly And if a mapped source is missing at runtime, the assembly fails with error code "SOURCE_MISSING" identifying the document type and data path
Validation Test Runner on Sample Claims
Given an admin selects a payer rule and a set of sample claims When they run "Validate on Samples" Then the system executes the rules and returns Pass/Fail per claim with a list of violations including code, message, and affected file/document And the admin can download a CSV/JSON report of results And a validation run record is stored with inputs, rule version, outcomes, and timestamp
Delivery Channel Packaging & Transmission
Given a payer rule configured for SFTP delivery with path "/inbound/claims", PGP encryption, and 3 retry attempts When a package is generated for that payer Then the final files are encrypted with the configured key, uploaded to the specified SFTP path, and a checksum is recorded And on transient network failure the system retries up to 3 times with exponential backoff before marking the transmission as failed And transmission success or failure is logged with correlation ID and is visible in the change/log history
Auto Assembly Pipeline
"As an operations manager, I want visit attachments to assemble automatically after each visit or batch by date so that my team spends less time compiling files and can submit sooner."
Description

Event-driven and batch-capable pipeline that aggregates required artifacts for a visit or claim, renders them to payer-specified formats, applies sequencing and naming, and outputs either a single merged PDF or multiple files per rules. Supports manual and automatic triggers, idempotent re-runs, progress states, retries, and background queuing. Fetches inputs from visit notes (including voice-to-text), physician orders, caregiver and patient signatures, EVV logs, and optional IoT data. Exposes preview and approve actions, and posts results to downstream delivery or download.

Acceptance Criteria
Manual Single-Visit Assembly to Single PDF (Payer-Specific)
Given a completed visit with all required artifacts available and a payer profile that requires a merged PDF When a user manually triggers the Auto Assembly Pipeline for that visit and payer Then the pipeline aggregates visit notes (including the generated voice-to-text transcript), physician orders, caregiver and patient signatures, EVV logs, and optional IoT summary And renders a single PDF in the exact sequence defined in the payer profile with section bookmarks matching section names And applies configured redactions as true redactions (content removed, not just overlay) And compresses the PDF so the file size is less than or equal to the configured payer limit And names the file exactly per the payer naming template And marks the run state as Awaiting Approval upon success
Event-Driven Auto Assembly on Visit Completion (Multi-File Output)
Given a payer profile that requires split files and a visit transitions to Completed When the visit completion event is emitted Then the pipeline enqueues a job for that visit within 5 seconds And the run progresses through states Queued -> Running -> Awaiting Approval without manual intervention And it outputs one file per configured document group with filenames matching the payer templates and each file size within configured limits And if any required artifact is missing, the run is set to Missing Inputs with an explicit missing list, no files are posted, and the owning operations manager is notified
Nightly Batch Assembly for Pending Claims (Queue and Concurrency)
Given a batch window of 02:00–03:00 and batch settings batchSize=B and concurrency=C are configured When the nightly batch trigger starts at 02:00 Then up to B eligible visits/claims in Ready for Assembly are enqueued And no more than C jobs run concurrently And each job applies its claim’s payer profile rules for formatting, sequencing, naming, redaction, and compression And a batch summary is recorded with counts of queued, succeeded, failed, missingInputs, and average duration And duplicate enqueues for the same visit/claim are prevented during the window
Idempotent Re-run Produces Stable Outputs and No Duplicates
Given a visit/claim X has been assembled with input set I under payer profile P When the pipeline is re-run for X with unchanged inputs I and profile P Then the produced files are byte-identical to the prior run with the same filenames and checksums And no duplicate postings occur to downstream destinations And the run history records a re-run with outcome No Changes and references the original artifact set When any input changes (e.g., updated order) Then only the affected outputs are regenerated, the version number increments, and superseded artifacts are archived without breaking existing download links
Robust Retry and Failure States for External Fetches
Given an external source (e.g., Orders API) returns a transient 5xx error during artifact fetch When the pipeline attempts to retrieve required artifacts Then it retries up to R attempts with exponential backoff (e.g., 1s, 2s, 4s, …) without blocking other jobs And on eventual success continues the assembly automatically And on exhausting retries marks the run state as Failed - Fetch with a machine-readable error code and human-readable message And logs exclude PHI/PII while retaining correlation IDs and timestamps And the UI exposes a Retry action that re-queues the job once, respecting backoff and idempotency
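The retry policy (exponential backoff, bounded attempts, machine-readable failure state) can be sketched as below; `TransientError` stands in for a 5xx response, and the injectable `sleep` parameter is for testability rather than anything the spec requires:

```python
import time

class TransientError(Exception):
    """Stand-in for a transient 5xx response from an external source."""

def fetch_with_retry(fetch, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    # Retry a transient-failing fetch with exponential backoff
    # (1s, 2s, 4s, ...); on exhaustion, surface a machine-readable
    # failure state instead of raising.
    for attempt in range(max_attempts):
        try:
            return {"state": "ok", "data": fetch()}
        except TransientError:
            if attempt + 1 == max_attempts:
                return {"state": "Failed - Fetch",
                        "code": "FETCH_RETRIES_EXHAUSTED"}
            sleep(base_delay * (2 ** attempt))
```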
Preview, Redaction, and Approval Gate Before Delivery
Given a run is in Awaiting Approval When a user opens the preview Then each output file is displayed in the configured order with a checklist of required documents And redaction regions are visibly indicated and verified as non-selectable/non-searchable in the preview And the user can download watermarked previews but cannot download final files until approval When the user clicks Approve Then the pipeline posts the outputs to the configured downstream delivery connector or exposes a signed download link And records an immutable audit entry with approver identity, timestamp, destination, checksums, and filenames And the run state transitions to Posted; further changes require Create New Version
EVV, Signatures, Voice Notes, and Optional IoT Data Inclusion
Given a visit has EVV logs, caregiver and patient e-signatures, a voice note, and optional IoT vitals; and the payer profile requires EVV, signatures, and visit notes When the assembly runs for that payer profile Then the output includes an EVV time and location summary derived from raw logs And embeds caregiver and patient signatures with capture timestamps and signer identity metadata And includes the voice-to-text transcript under Visit Notes; if the transcript is pending, the run waits up to T minutes or marks Missing Inputs And includes an IoT vitals summary only if the data is available; its absence does not block runs unless the payer profile requires it And if any required item per the payer profile is missing, the run enters Missing Inputs with an explicit missing list and no outputs are posted
Redaction & PHI Masking
"As a compliance officer, I want sensitive fields automatically redacted per payer policy so that we avoid PHI disclosures while still meeting documentation requirements."
Description

Rule-based redaction service that masks sensitive fields and sections according to payer policies (e.g., SSN, partial DOB, addresses on EVV maps), supporting both pattern-based and field-aware redactions. Provides a preview overlay, audit annotations, and irreversible flattening upon finalize to prevent removal. Integrates with the rules engine to vary redactions by payer and with the assembly pipeline to apply redactions before compression and delivery.

Acceptance Criteria
Pattern-Based Redaction by Payer Policy
Given a document containing SSNs (NNN-NN-NNNN) and DOBs (MM/DD/YYYY) and payer = Payer A with policy "mask SSN to last 4; show DOB month/year only" When redaction is applied Then all SSNs are replaced with "XXX-XX-####" showing only the last 4 digits and are non-selectable in the output And all DOBs display as "MM/YYYY" with the original day removed and the original full date absent from the PDF text layer And each redaction is logged with ruleId, patternId, page, and coordinates
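The masking rules above can be sketched with two regexes; this is a minimal illustration (function names are ours), and a real redaction step must also scrub the PDF text layer so the original values are unrecoverable:

```python
import re

SSN_RE = re.compile(r"\b(\d{3})-(\d{2})-(\d{4})\b")
DOB_RE = re.compile(r"\b(\d{2})/(\d{2})/(\d{4})\b")

def mask_ssn(text: str) -> str:
    """Mask SSNs to last 4 digits: 123-45-6789 -> XXX-XX-6789."""
    return SSN_RE.sub(lambda m: f"XXX-XX-{m.group(3)}", text)

def mask_dob_month_year(text: str) -> str:
    """Reduce MM/DD/YYYY to MM/YYYY, dropping the day per the policy."""
    return DOB_RE.sub(lambda m: f"{m.group(1)}/{m.group(3)}", text)
```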
Field-Aware Redaction on EVV Map Addresses
Given an EVV visit document with a map section and structured field patient_address present When payer policy requires hiding patient_address on map exports Then the address label and any nearby textual address elements are fully obscured while route lines, pins, and timestamps remain visible And copying or searching the output for the patient_address yields no results And audit logs include fieldKey=patient_address, detectionType=field, page, and coordinates
Redaction Preview Overlay and Audit Annotations
Given a user views a draft package and toggles Preview Redactions on When the preview is displayed Then all redaction regions are overlaid with semi-transparent masks labeled with ruleId and reason And clicking a mask reveals ruleId, author, timestamp, detectionType, and maskedValue sample And toggling Show/Hide Annotations affects labels only; masks remain unchanged And exporting a preview PDF excludes annotations by default

Irreversible Flattening on Finalize
Given a draft with pending redactions When the user clicks Finalize Then the output PDF has redactions burned-in (no editable annotation or optional content layers remain) And text selection within redacted areas returns nothing and full-text search for original values returns zero hits And attempting to remove masks in a standard PDF editor does not reveal original content And only the flattened artifact proceeds to delivery; any pre-flattened copies are deleted
Payer-Specific Rule Selection via Rules Engine
Given payer = Payer B and documents include visit notes, orders, and EVV logs When the redaction step runs Then the rule set mapped to Payer B (ruleSetVersion recorded) is applied and only applicable rules execute And switching payer to Payer C and re-running yields a different redaction set per mapping And the audit record includes payerId, ruleSetVersion, and a cryptographic hash of the rules used
Assembly Pipeline Order: Redact Before Compress and Deliver
Given a package configured to merge and compress When the assembly pipeline executes Then steps occur in order: Detect→Redact→Flatten→Merge/Split→Compress→Deliver (as recorded in pipeline logs) And the delivered artifacts remain redacted after compression (verified by post-compression OCR/pattern scan = zero matches) And no unredacted intermediate files are accessible; temp files are destroyed within 10 minutes of job completion
Post-Redaction PHI Leakage Verification
Given a finalized, flattened package When automated OCR and pattern scans run for SSN, full DOB, and payer-specific disallowed fields Then zero matches are returned across all pages, embedded objects, and metadata And the scan report is attached to the audit record with timestamp, tool version, and pass/fail status And any non-zero match blocks delivery and marks package status "Redaction Failed" with an actionable error
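A minimal sketch of the leakage scan, assuming page text has already been extracted via OCR; the pattern names and the finding shape are illustrative:

```python
import re

# Illustrative disallowed-value patterns; payer-specific fields would be added here.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "full_dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def scan_for_phi(pages: list[str]) -> list[dict]:
    """Return one finding per pattern hit; an empty list means the scan passed."""
    findings = []
    for page_no, text in enumerate(pages, start=1):
        for name, pattern in PHI_PATTERNS.items():
            for m in pattern.finditer(text):
                findings.append({"pattern": name, "page": page_no, "span": m.span()})
    return findings
```

Any non-empty result would block delivery and mark the package "Redaction Failed".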
Size Optimization & OCR
"As a billing specialist, I want large bundles to be compressed and text-searchable within portal size limits so that uploads succeed and reviewers can quickly find information."
Description

Automated optimization that compresses and linearizes PDFs/images to meet payer-specific size limits while preserving legibility. Applies DPI tuning, grayscale/mono conversion, font subsetting, and duplicate page/image deduplication. Adds full-text OCR layers for scanned content to keep bundles searchable post-compression. Supports PDF/A output when required and strips non-essential metadata to reduce file size and PHI exposure.

Acceptance Criteria
Payer Size Limit Compliance with Legibility Preservation
Given a Smart Attach bundle targeted to a configured payer profile with a maximum file size per upload and sequencing rules When size optimization executes during bundle assembly Then no individual output file exceeds the payer’s configured size limit And if the bundle would exceed the limit as a single file, it is automatically split according to the payer’s configured sequence and naming rules And the output PDFs are linearized (Fast Web View) unless prohibited by the selected conformance profile And for pages containing text, effective resolution is ≥ 200 DPI for grayscale text and ≥ 300 DPI for bilevel text, or text-region SSIM ≥ 0.95 versus source And standard Code-128 and QR barcodes present in the source remain decodable by a reference library And handwritten and typed signatures remain legible without broken strokes
Post-Compression OCR Searchability and Integrity
Given a bundle containing scanned pages in English or Spanish with no text layer When OCR is applied during optimization Then each scanned page in the output contains an invisible, selectable text layer aligned to the page And a random sample of 20 unique words per document is searchable with ≥ 90% match rate against ground truth And normalized per-line Levenshtein distance between extracted text and ground truth is ≤ 0.10 on the sample And text within redacted regions is not present in the text layer and is not returned by search And the PDF’s language setting reflects the OCR language used
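The per-line distance check above can be implemented with a standard edit-distance routine; a sketch (helper names are ours):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalized_line_distance(ocr_line: str, truth_line: str) -> float:
    """Edit distance divided by ground-truth length (0.0 = exact match)."""
    if not truth_line:
        return 0.0 if not ocr_line else 1.0
    return levenshtein(ocr_line, truth_line) / len(truth_line)
```

A line passes the criterion when `normalized_line_distance(...) <= 0.10`.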
Conditional PDF/A Output and Validation
Given the payer profile requires PDF/A-2b (or other specified PDF/A level) When the output is generated Then the PDF conforms to the specified PDF/A level and passes veraPDF validation with 0 errors and 0 warnings And all fonts are embedded and subset And all color spaces are device-independent with embedded ICC profiles And prohibited features (JavaScript, multimedia, external references) are absent And linearization is applied when compatible with the chosen PDF/A level
Metadata Minimization and PHI Exposure Reduction
Given input files may contain embedded metadata When optimization completes Then non-essential metadata fields (e.g., Author, Creator, Producer, GPS, application versions, edit history) are removed And only the approved whitelist fields remain [Title, Subject, Keywords (if configured), PDF/A-required XMP schema, CreationDate] And there are no embedded files, hidden layers, or annotations unless explicitly required by payer rules And exiftool inspection shows no keys outside the whitelist And the document ID is regenerated as a non-PII UUIDv4
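A rough sketch of the whitelist filter over a document-info dictionary; a real implementation would operate through a PDF library and also rewrite the XMP packet, and the DocumentID handling here is an assumption:

```python
import uuid

# Approved whitelist per the criterion (PDF/A-required XMP fields handled separately).
METADATA_WHITELIST = {"Title", "Subject", "Keywords", "CreationDate"}

def sanitize_metadata(info: dict) -> dict:
    """Keep only whitelisted keys and regenerate the document ID as a UUIDv4."""
    cleaned = {k: v for k, v in info.items() if k in METADATA_WHITELIST}
    cleaned["DocumentID"] = str(uuid.uuid4())  # non-PII, regenerated per criterion
    return cleaned
```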
Duplicate Asset Deduplication and Font Subsetting
Given the bundle includes repeated images/logos and fonts with unused glyphs When optimization runs Then identical image resources are stored once and referenced multiple times (no duplicate XObject streams with identical hashes) And total output size is reduced by ≥ 10% on a test fixture with known duplicates while maintaining visual similarity SSIM ≥ 0.98 page-wise And fonts are subset to only used glyphs; preflight reports no unused glyph data
Adaptive DPI and Color Mode Conversion Rules
Given the payer profile does not require color for standard documents When optimization evaluates each page Then text-only pages are rendered as bilevel at 300–400 DPI with MRC or equivalent segmentation to prevent halo artifacts And mixed-content pages are rendered in grayscale at 150–200 DPI; continuous-tone photo regions retain sufficient DPI to avoid visible pixelation at 200% zoom And pages with ≥ 5% color coverage or flagged as color-required remain in color at 150–200 DPI And effective DPI and color space are recorded in the output; 10 randomly sampled pages meet the configured ranges And barcode decodability and signature legibility are maintained after conversion
Completeness Validator
"As a QA lead, I want a preflight check that flags missing signatures, orders, or EVV gaps so that we can fix issues before submitting and prevent denials."
Description

Preflight validation that checks package completeness against payer rules before assembly and delivery: required documents present, valid signatures, current orders, EVV timestamps within required windows, correct naming/sequence, and file size/format compliance. Produces actionable errors and warnings with suggested fixes and blocks submission until critical issues are resolved. Integrates with notifications to alert responsible users and with tasking to create follow-ups.

Acceptance Criteria
Required Documents Presence by Payer and Context
Given a package prepared for payer P with line of business L, visit type V, and service date D When the user runs the Completeness Validator Then the system cross-references the configured rule set for P/L/V/D and enumerates each required document category (e.g., visit note, physician order, patient signature, caregiver signature, EVV log) as Present or Missing And Missing items are marked Error with a suggested action (Attach, Request Signature, Generate EVV Export, or Link Order) And Optional-but-recommended items are marked Warning with rationale per rule And The validator identifies duplicate or conflicting documents and flags them as Warning with a suggested resolution And Validation completes within 3 seconds for packages containing up to 50 documents
Signature Authenticity and Date Validity
Given one or more documents that require patient and/or caregiver signatures under payer P's rules When the validator inspects signatures Then each signature is verified as: (a) platform e-signature with intact certificate and tamper-evident seal, or (b) image-based signature with captured date/time and signer identity metadata And The signature date meets payer-specific timing rules relative to service date (e.g., within allowed pre/post windows as configured) And Caregiver signature identity matches the assigned caregiver for the visit; mismatch is an Error And Missing, tampered, or out-of-window signatures are Errors; illegible or low-resolution signatures are Warnings with a suggested recapture action
Order Currency and Coverage Window Check
Given payer P requires a current physician order/plan of care for visit type V When validating the package for service date D Then an order linked to the patient covers D (effective_start ≤ D ≤ effective_end) and is associated with an authorized clinician with NPI present And The order is signed and dated according to payer P's timing rules And If no covering order exists, an Error is raised with actions to Link Existing or Request New Order And If the order expires within the payer-defined threshold, a Warning is raised with an action to Schedule Renewal Task
EVV Timestamp and Geofence Compliance
Given payer P enforces EVV rules for the service When validating EVV data for the visit Then clock-in and clock-out timestamps exist and are within the payer-configured tolerance relative to the scheduled visit window And GPS coordinates at clock-in/out fall within the configured geofence radius of the patient service location And Manual edits to EVV are detected and evaluated against payer allowances; disallowed edits are Errors, allowed-but-noted edits are Warnings And Missing EVV, out-of-window times, or out-of-geofence points are Errors with suggested remediation (add attestation, correct times, or re-capture)
File Naming, Sequencing, and Splitting/Merging Compliance
Given payer P defines specific file naming templates, sequence ordering, and split/merge rules When the validator simulates package assembly Then each file name matches the payer template tokens (e.g., MemberID_Claim#_Seq.pdf) with zero illegal characters and correct zero-padding And The sequence is contiguous, correctly ordered by rule (e.g., orders → notes → signatures → EVV), and free of duplicates And Split vs. merged output conforms to payer policy; single-PDF or multi-file sets are previewed with exact counts and names And Any deviation (bad token, out-of-order, duplicate name, wrong containerization) is flagged as Error with a one-click Fix Naming/Order option
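The token, zero-padding, and contiguity checks might look like the following sketch, assuming the MemberID_Claim#_Seq.pdf template from the example (the regex and error strings are illustrative):

```python
import re

# Hypothetical template: alphanumeric member ID, numeric claim, 3-digit sequence.
NAME_RE = re.compile(r"^(?P<member>[A-Z0-9]+)_(?P<claim>\d+)_(?P<seq>\d{3})\.pdf$")

def validate_package_names(filenames: list[str]) -> list[str]:
    """Return a list of error strings; an empty list means the check passed."""
    errors, seqs = [], []
    for name in filenames:
        m = NAME_RE.match(name)
        if not m:
            errors.append(f"bad token or illegal characters: {name}")
            continue
        seqs.append(int(m.group("seq")))
    if len(seqs) != len(set(seqs)):
        errors.append("duplicate sequence numbers")
    elif seqs and sorted(seqs) != list(range(1, len(seqs) + 1)):
        errors.append("sequence not contiguous from 001")
    return errors
```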
Size, Format, Compression/OCR, and Redaction Readiness
Given payer P imposes file size and format limits and redaction requirements When the validator evaluates final assembled artifacts Then each output file meets size limits per payer (e.g., <= configured MB) and format requirements (PDF/PDF-A, TIFF, or portal-specific) And Images meet minimum quality thresholds (≥200 DPI or payer-configured) and PDFs are text-searchable via OCR when required And Required sensitive fields (e.g., SSN beyond last4, non-pertinent diagnoses) are redacted according to payer rules, and redactions are non-removable (content not selectable, annotations flattened) And If Auto-fix is enabled, the validator attempts compliant compression/OCR/redaction and re-validates; persistent violations remain Errors with guidance And Metadata sanitation (author, GPS, hidden layers) is verified where mandated; failures are Errors
Actionable Results, Blocking Submission, Notifications, and Tasking
Given validation results are generated for a package When the package contains any Errors Then the Submit action is disabled until all Errors are resolved and the error count is visible to the user And Warnings do not block submission but are listed with justifications and optional acknowledge checkboxes And Each Error/Warning includes a plain-language description, the rule reference, and a suggested next action with deep links (e.g., Request Signature, Rename Files, Fix EVV) And On validation completion, responsible users are notified via in-app and email per notification settings with a summary of blocking issues And A follow-up task is auto-created for the owning operations manager with due date based on payer SLA and includes the error list; completion of tasks triggers auto re-validation
Portal Export & Delivery
"As a billing specialist, I want to export the assembled package directly to the payer’s channel or download it with correct filenames so that submission is fast and error-free."
Description

Delivery module that outputs packages per payer channel: direct API/SFTP where available, or secure download for manual portal uploads. Enforces payer naming conventions, handles rate limits and timeouts, and supports automatic retries with exponential backoff. Captures delivery receipts or upload confirmations when provided and encrypts files at rest and in transit. Provides a submission log and integrates with CarePulse claims to update submission status.

Acceptance Criteria
Direct API Delivery with Naming Enforcement and Receipt Capture
Given a payer configured for direct API delivery with valid credentials and a prepared Smart Attach package When a user initiates submission to that payer Then the system transmits the package over TLS 1.2+ to the configured endpoint And each file name conforms exactly to the payer’s naming convention pattern for that submission And a 2xx response with a receipt or submission identifier is received within 30 seconds And a submission log entry is written capturing payer, endpoint, HTTP status, receipt ID, file list with SHA-256 checksums, actor, and timestamps And the related CarePulse claim is updated to status "Submitted" within 5 seconds of receipt capture
SFTP Delivery with Resilient Transfers and Logging
Given a payer configured for SFTP delivery with host, port, username, key-based authentication, target directory, and naming rules When a user initiates submission to that payer Then the system establishes an SFTP session using strong ciphers and key-based authentication And uploads each file using an atomic pattern (upload temp -> verify size -> rename to final filename) And verifies remote file size matches local size for each file after rename And enforces payer file naming conventions for each uploaded file And writes a submission log including per-file size, checksum, transfer duration, and overall status And updates the related claim to status "Submitted" only after all files upload and verify successfully
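The atomic pattern (upload temp → verify size → rename) can be sketched against an injected client; with paramiko this would be an `SFTPClient`, but a minimal in-memory stand-in is used here so the flow is self-contained:

```python
import os
import types

def atomic_upload(sftp, local_path: str, remote_dir: str, final_name: str) -> None:
    """Upload to a temp name, verify the remote size, then rename into place."""
    tmp = f"{remote_dir}/.{final_name}.part"
    final = f"{remote_dir}/{final_name}"
    sftp.put(local_path, tmp)
    local_size = os.path.getsize(local_path)
    remote_size = sftp.stat(tmp).st_size
    if remote_size != local_size:
        sftp.remove(tmp)  # never leave a partial file under the final name
        raise IOError(f"size mismatch: local={local_size} remote={remote_size}")
    sftp.rename(tmp, final)

class FakeSFTP:
    """In-memory stand-in mimicking the put/stat/remove/rename calls used above."""
    def __init__(self):
        self.files = {}
    def put(self, local, remote):
        with open(local, "rb") as f:
            self.files[remote] = f.read()
    def stat(self, remote):
        return types.SimpleNamespace(st_size=len(self.files[remote]))
    def remove(self, remote):
        del self.files[remote]
    def rename(self, src, dst):
        self.files[dst] = self.files.pop(src)
```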
Secure Download Package for Manual Portal Uploads
Given a payer without a direct API or SFTP channel and a prepared Smart Attach package When a user selects "Generate Secure Download" Then the system produces a downloadable package within 10 seconds And the download link is HTTPS, role-restricted, tokenized, and expires after 72 hours or 5 downloads (whichever comes first) And files are named per payer naming conventions and compressed to meet payer file-size limits when applicable And the package is stored encrypted at rest (AES-256) until link expiry And a submission log entry is created with link creation time, expiry, file list and checksums And the related claim is set to status "Ready for Portal Upload"
Automatic Retries with Exponential Backoff on Rate Limits and Timeouts
Given a direct delivery attempt (API or SFTP) returns a retryable condition (HTTP 408/429/5xx, connection reset, or timeout) When automatic submission is enabled Then the system retries up to 5 attempts with exponential backoff starting at 2s, doubling each attempt, capped at 60s, with ±20% jitter And honors Retry-After headers when present And ceases retrying immediately on non-retryable 4xx responses (except 408/429) And ensures idempotency so duplicate submissions are not created (via idempotency keys or receipt de-duplication) And logs each attempt with timestamp, error code, and scheduled next attempt And marks the submission as "Submitted" on success or "Failed" with surfaced error details after final attempt and notifies the submitting user
Delivery Confirmation Ingestion and Claim Status Update
Given a payer that provides delivery confirmation via webhook callback or polling API When a confirmation event or poll response is received for a prior submission Then the system records the confirmation metadata (confirmation/reference ID, received timestamp, mapped status, and payer message) and associates it with the submission log And stores the raw payload for audit with integrity hash And updates the related claim status to "Accepted" or "Rejected" within 60 seconds based on mapped payer codes And, on rejection, captures the reason code/text and exposes it in the submission details view
End-to-End Encryption and Integrity for Exported Packages
Rule: All exported files are encrypted at rest using AES-256 with KMS-managed keys; keys are rotated per security policy and access controlled via RBAC
Rule: All deliveries use secure transport (TLS 1.2+ for API/HTTPS, modern ciphers for SFTP) with certificate/host key validation
Rule: For each submission, SHA-256 checksums are recorded pre-transfer and, where supported, validated post-transfer; mismatches fail the submission and trigger retry
Rule: Temporary staging files are securely deleted within 15 minutes after successful submission or link expiry, with deletion events logged
Rule: Access to secure download links requires authentication and is audited (who, when, IP, user agent)
Audit Trail & Reproducibility
"As an auditor or manager, I want a complete, time-stamped record of how each submission was assembled so that we can prove compliance and reproduce packages on demand."
Description

Comprehensive traceability for each package: immutable record of inputs used, rule version applied, transformation steps (redactions, merges, compressions), timestamps, user actions, and cryptographic hashes of outputs. Enables one-click regeneration using the original rule version and data snapshot. Provides an exportable audit report suitable for external reviews and internal QA, integrated with CarePulse’s reporting module.

Acceptance Criteria
Immutable Audit Record for Package Creation
Given a Smart Attach package is finalized When the audit record is written Then the system stores a write-once, immutable record containing: package_id, payer_id, patient_id, visit_ids, rule_set_id, rule_version, created_at (UTC ISO 8601 with ms), created_by (user_id, role) And the record lists all inputs with: source_type, source_id, filename, byte_size, SHA-256 hash And the record includes the ordered transformation steps with: index, step_type (redact|merge|compress|rename), tool_name, tool_version, parameters, started_at, completed_at And any attempt to modify or delete the audit record is blocked with HTTP 403 and a security event "AUDIT_IMMUTABILITY_BLOCKED" is logged
Output Hash Manifest and Integrity Verification
Given a package has been finalized When output files are produced Then the system generates and stores a manifest enumerating each output filename and its SHA-256 hash And invoking "Verify Integrity" recomputes hashes and they match the stored manifest for 100% of outputs And if any mismatch occurs, the package integrity status is set to "Corrupted", exports are disabled, and an alert is issued to Compliance
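A minimal sketch of manifest generation and the Verify Integrity recomputation, hashing in-memory contents (a real version would stream files from disk):

```python
import hashlib

def build_manifest(files: dict[str, bytes]) -> dict[str, str]:
    """Map each output filename to the SHA-256 hex digest of its contents."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def verify_integrity(files: dict[str, bytes], manifest: dict[str, str]) -> bool:
    """True only if every manifest entry matches a recomputed hash."""
    return files.keys() == manifest.keys() and build_manifest(files) == manifest
```

A `False` result would set the package to "Corrupted", disable exports, and alert Compliance.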
One-Click Regeneration Produces Identical Outputs
Given a completed package with a stored data snapshot and recorded rule_version When an authorized user clicks "Regenerate Original" Then the system replays the recorded steps using the stored snapshot and the exact recorded rule_version And each regenerated output's SHA-256 matches the original manifest for all files And if the recorded rule_version is archived, it is loaded from the rules registry without modification And if any snapshot artifact is missing, regeneration halts with error code REGEN_SNAPSHOT_MISSING, makes no changes to the original package, and logs the failure
Transformation Step Traceability and Parameters
Given a package includes redactions, merges, and/or compression When viewing the audit trail for the package Then each step displays: index order, step_type, input_artifact_ids, output_artifact_id, tool_name, tool_version, parameters summary, started_at, completed_at And redaction steps record page numbers and bounding boxes for each redaction region And compression steps record pre_size_bytes, post_size_bytes, and compression_ratio And merge steps record the exact input file sequence and page ranges merged
User Action Logging with UTC Timestamps
Given any user performs a package-related action (create, regenerate, export, verify) When the action is committed Then an audit entry is recorded with: user_id, role, auth_method, session_id, ip_address, user_agent, action_type, action_details, timestamp in UTC ISO 8601 with millisecond precision And all entries for a package are stored in strict chronological order without time gaps or duplicates
Exportable Audit Report via Reporting Module
Given a completed package exists When a permitted user selects "Export Audit Report" Then the system generates within 10 seconds: (1) a human-readable PDF and (2) a machine-readable JSON, each covering inputs, rule_version, step trace, user actions, timestamps, output manifest with hashes, integrity verification results, and access log summary And both artifacts are attached to the package record, available in the reporting module, and downloadable with stable filenames And each report artifact includes its own report_id and SHA-256 hash And exported reports exclude sensitive redacted content, including only redaction metadata
Access Control and Audit Access Logging
Given role-based access control is configured When an unauthorized user attempts to view or export an audit report Then the request is denied with HTTP 403 and a generic error message that reveals no sensitive details And an access_denied event is written to the audit log with user context And when an authorized Operations Manager or Compliance Officer performs the same action, the export/view succeeds and the access is logged with timestamp and user_id

Multi-Channel Router

Intelligently chooses the right lane for each payer—EDI 837, CSV, SFTP, API, or guided portal upload—then manages batch sizes, throttling, and cut-off windows. Built-in retries and confirmations reduce manual juggling, so teams submit faster with fewer late-file risks.

Requirements

Channel Auto-Selection Engine
"As a billing operations manager, I want the system to automatically choose the correct submission channel per payer so that claims are routed correctly without manual effort or delays."
Description

Implements an intelligent routing layer that selects the optimal submission lane per payer (EDI 837, CSV, SFTP, API, or guided portal upload) using configurable rules, payer preferences, claim type, volume, and service date constraints. The engine evaluates eligibility, fallbacks, and manual overrides, logs the decision path, and exposes the chosen route to downstream orchestration. It integrates with CarePulse’s scheduling and compliance modules to prioritize on-time submissions and supports idempotent routing to prevent duplicates when resubmitting.

Acceptance Criteria
Auto-select EDI 837 for high-volume payer before cut-off
Given payer P1 is configured to support EDI_837 with preferRoute=EDI_837 when claimType in ["Professional"] and dailyVolumeThreshold >= 50 and cutoffTime = 17:00 ET And the current time is 16:30 ET on 2025-09-05 And batch B123 contains 60 Professional claims for payer P1 with service dates within the allowed window When the engine evaluates routing for batch B123 Then it selects routeType = EDI_837 for batch B123 And computes batchSize <= payerConfig.maxBatchSize And records decisionPath including evaluated rules, inputs, and final reasonCode = "preferred-route-and-volume-threshold-met" And exposes the route to downstream orchestration with routeId, batchId, payerId, cutoffTimestamp, and reasonCodes
Fallback routing when primary channel ineligible or cut-off missed
Given payer P2 is configured with primary = API and secondary = SFTP and cutoffTime = 17:00 ET And API eligibility requires tokenValid = true And the current time is 17:05 ET OR tokenValid = false When the engine evaluates routing for batch B456 for payer P2 Then it selects routeType = SFTP for batch B456 And sets fallback = true and primaryIneligibleReason in ["cutoff-missed", "api-token-invalid"] And logs decisionPath with failed-checks (names and timestamps) And exposes the selected route with an SFTP-appropriate retryPolicy
Manual override of selected channel with audit trail
Given user U1 with role = Operations Manager creates a manual override to routeType = GuidedPortal for payer P3 effective now through 2025-09-12 with justification = "Portal maintenance window" And the override does not violate compliance guardrails (payer P3 supports GuidedPortal) When the engine evaluates routing for batch B789 for payer P3 Then it selects routeType = GuidedPortal regardless of default rules And logs override details: userId, role, timestamp, justification, effectiveWindow And marks decisionPath overrideApplied = true And if the override is deactivated later, the next evaluation reverts to rules-based selection
Idempotent routing prevents duplicate submissions on retry
Given batch B321 for payer P4 was routed on 2025-09-05T10:00:00Z with decisionHash = H1 and idempotencyKey = K1 and no rule configuration changes since When the same batch payload is re-evaluated within 24 hours Then the engine returns the same routeType and idempotencyKey = K1 without creating a new submission record And logs idempotentHit = true with a reference to prior decisionId And emits no duplicate downstream orchestration event And if rules or batch content have changed since the prior decision, the engine generates a new decisionHash and idempotencyKey and emits a new event referencing priorDecisionId with relation = "resubmission"
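One way to derive a stable decisionHash/idempotencyKey is to hash a canonical JSON serialization of the batch payload plus the rule version, so identical inputs always yield the same key; a sketch:

```python
import hashlib
import json

def decision_hash(batch_payload: dict, rule_version: str) -> str:
    """Deterministic hash over the batch content plus the rules in effect."""
    canonical = json.dumps(
        {"batch": batch_payload, "rules": rule_version},
        sort_keys=True, separators=(",", ":"),  # canonical form: stable key order
    )
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Re-evaluating an unchanged batch reproduces the key (an idempotent hit); changing the rules or content yields a new key, matching the resubmission behavior above.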
Decision path logging with rule evaluations and data inputs
Given any routing evaluation for a batch When a route decision is made Then the engine stores an immutable log entry containing: decisionId, batchId, payerId, claimTypeSet, volume, serviceDateSpan, eligibleChannelsEvaluated, ruleOutcomes (name, input, result), cutoffEvaluation, overrideStatus, fallbackStatus, chosenRoute, reasonCodes, timestamps And the log entry is retrievable via GET /routing-decisions/{decisionId} within 2 seconds and redacts PHI per policy And the log entry is retained for at least 365 days
Prioritize on-time submissions using scheduling and compliance integration
Given claims in batch B654 are linked to visits with complianceDueAt timestamps from CarePulse scheduling/compliance And per-channel estimated time-to-ack (ETA) metrics are available: EDI_837 = 90m, SFTP = 180m, API = 45m And at least one claim in B654 has complianceDueAt within 2 hours When eligible channels include EDI_837, SFTP, and API Then the engine selects the channel with the shortest ETA that keeps expected ack before complianceDueAt (API in this example) And logs priorityScore, selectedChannelETA, and due-at-risk assessment used in the decision And if no eligible channel can meet complianceDueAt, the engine selects the fastest eligible channel and flags risk = "deadline-at-risk" in decisionPath
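The deadline-aware selection can be sketched as: pick the shortest-ETA eligible channel whose expected ack lands before complianceDueAt, else fall back to the fastest channel and flag the risk (names are illustrative):

```python
from datetime import datetime, timedelta, timezone

def select_channel(etas_minutes: dict[str, int],
                   now: datetime,
                   compliance_due_at: datetime) -> tuple[str, bool]:
    """Return (channel, at_risk): the fastest channel that beats the deadline,
    or the fastest overall with at_risk=True when none can meet it."""
    ranked = sorted(etas_minutes.items(), key=lambda kv: kv[1])
    for channel, eta in ranked:
        if now + timedelta(minutes=eta) <= compliance_due_at:
            return channel, False
    return ranked[0][0], True
```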
Expose chosen route and metadata to downstream orchestration
Given a finalized route decision for batch B777 When the decision is finalized Then the engine publishes a RouteSelected event to the event bus within 1 second including {routeType, batchId, payerId, idempotencyKey, cutoffTimestamp, priorityScore, fallback, overrideApplied, reasonCodes} And GET /routing/decisions?batchId=B777 returns the same values consistently And downstream orchestration acknowledgement is captured and appended to the decision log
Payer Profile & Routing Rules
"As a compliance lead, I want centralized payer profiles with routing rules so that we can reliably meet each payer’s protocol, cut-offs, and security requirements."
Description

Provides a centralized, versioned configuration for each payer, including allowed channels, endpoints, credentials, schema/companion guide references, batching constraints, throttling rates, time zone–aware cut-off windows, required acknowledgments, file naming conventions, and error taxonomies. Includes sandbox/production separation, change history, and guardrails for safe edits. Secrets are stored securely and tied to CarePulse RBAC, enabling operations to update profiles without code changes while ensuring compliance and consistency across all submissions.

Acceptance Criteria
Create and Update Payer Profile with Allowed Channels and Endpoints
Given an Operations Admin with edit permission opens the Payer Profile editor When they create a new payer profile specifying at least one allowed channel and its required endpoint fields Then the system validates channel-specific required fields and blocks Save with field-level errors if any are missing And when all required fields are valid and Save is clicked Then the profile is saved as a new version with a unique version ID and timestamp And the version is marked Active and is available to the routing engine within 60 seconds without a code deploy
Secrets Storage and RBAC-Controlled Access to Credentials
Given a user with Operations Admin role enters channel credentials for a payer profile When the credentials are saved Then secrets are stored in the platform secrets manager and are never retrievable in plaintext via UI or API And the UI displays masked values only and exposes rotate and test-connection actions Given a user with Viewer role opens the same profile When they view the credentials section Then credentials are fully redacted and rotate/test actions are disabled And all secret create/update/rotate/test actions are audit logged with user, timestamp, and profile version
Sandbox/Production Separation and Promotion Workflow
Given a payer profile has distinct Sandbox and Production configurations When a submission is executed in Sandbox mode Then only Sandbox endpoints and credentials are used and the submission is labeled Sandbox in logs and UI
When a user with Approver role initiates Promote to Production for a specific profile version Then the system requires a change ticket ID and a second approver before activation And runs endpoint connectivity checks before marking the version Active for Production And all actions are recorded in the change history with diffs
Time Zone–Aware Cut-Off Windows Enforcement
Given a payer profile time zone is America/New_York and the EDI 837 cut-off window is 17:00–23:59 local time When a user attempts to submit at 16:59 local payer time Then the system blocks submission for that channel and displays the next available window start time
When a submission is triggered at 17:01 local payer time Then the system allows the submission and timestamps all related logs in payer local time
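The window check above is straightforward once the submission time is converted into the payer's zone. A minimal sketch using Python's `zoneinfo` (the function name and the 17:00–23:59 defaults mirror the example; a real implementation would read them from the payer profile):

```python
from datetime import datetime, time, timezone
from zoneinfo import ZoneInfo

def in_cutoff_window(now_utc: datetime, payer_tz: str = "America/New_York",
                     start: time = time(17, 0), end: time = time(23, 59)) -> bool:
    """True when `now_utc` falls inside the payer-local submission window.
    Converting via zoneinfo keeps the comparison correct across DST changes."""
    local = now_utc.astimezone(ZoneInfo(payer_tz))
    return start <= local.time() <= end
```

For example, 20:59 UTC on a June day is 16:59 Eastern (blocked), while 21:01 UTC is 17:01 Eastern (allowed).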
Batching Constraints and Throttling Rate Applied at Runtime
Given batching is configured as max 500 claims per file and throttling is 10 files per minute for payer X When 1,200 claims are queued for payer X via the EDI channel Then the system creates exactly 3 files of sizes 500, 500, and 200 in that order And dispatches them at a rate not exceeding 10 files per minute And records the generated file names and counts in the submission log
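The 500/500/200 split above is a fixed-size partition of a FIFO queue. A sketch (the function name is illustrative):

```python
def make_files(claims: list, max_per_file: int = 500) -> list[list]:
    """Split a FIFO claim queue into payer-sized files, preserving order."""
    return [claims[i:i + max_per_file] for i in range(0, len(claims), max_per_file)]

# Pacing: at a throttle of 10 files/minute, dispatch one file every 60 / 10 = 6 seconds.
```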
Required Acknowledgments and File Naming Convention Compliance
Given a payer profile requires TA1 and 999 acknowledgments within 2 hours and a file naming pattern {payerId}_{yyyyMMdd}_{seq}.837 When a batch is submitted Then the outbound file name matches the configured pattern with a zero-padded daily sequence And the system tracks expected acknowledgments and marks the submission Pending until both are received or the SLA is exceeded And upon receipt, ack control numbers are linked to the submission record and surfaced in UI and API And if the SLA is exceeded, the system raises an alert and flags the submission Late-Ack
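Rendering the configured pattern with a zero-padded daily sequence might look like the following sketch. Three-digit padding is an assumption for illustration; the criterion only requires that the sequence be zero-padded:

```python
from datetime import date

def build_file_name(payer_id: str, seq: int, on: date,
                    pattern: str = "{payerId}_{yyyyMMdd}_{seq}.837") -> str:
    """Render the payer-configured naming pattern for an outbound 837 file."""
    return (pattern
            .replace("{payerId}", payer_id)
            .replace("{yyyyMMdd}", on.strftime("%Y%m%d"))
            .replace("{seq}", f"{seq:03d}"))   # zero-pad width is an assumption
```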
Error Taxonomy Mapping to Payer Rejection Codes
Given a payer profile defines mappings from raw payer rejection codes to standardized categories and subcategories When a rejection with raw code and description is ingested from an acknowledgment or portal response Then the system maps it to the configured category and subcategory And surfaces the normalized category in reports and APIs while preserving the raw code and description for audit And unmapped codes are flagged for configuration with a report listing frequency and first/last seen dates
EDI 837 Generation & Trading Partner Validation
"As a revenue cycle specialist, I want 837 files generated and validated against trading-partner rules so that submissions pass on the first attempt and reduce rework."
Description

Generates HIPAA-compliant 837P/837I payloads from CarePulse visit and documentation data, applying payer-specific companion guide rules and required loops/segments. Performs pre-submission validation (e.g., SNIP-level checks) to catch structural and content errors, normalizes identifiers (NPI, taxonomy, payer IDs), and supports test vs. production modes. Produces human-readable previews and hashes for integrity, and feeds validation results back to claim records to reduce rework and increase first-pass acceptance.

Acceptance Criteria
837P Claim Generation from Completed Visit
Given a completed visit with required patient, subscriber, rendering, billing, and service line data When claim generation is requested for a payer configured for 837P Then an X12 005010X222A1-compliant 837P file is produced with valid ISA/GS/ST and SE/GE/IEA envelopes and unique control numbers And each required loop/segment for professional claims (e.g., 1000A, 1000B, 2000A/B/C, 2300, 2400) is present and correctly populated from CarePulse data And the file passes structural validation with zero errors by an X12 parser (e.g., segment counts, element delimiters, and control numbers align) And the claim record is updated to reference the generated file ID and version
837I Claim Generation with Institutional Elements
Given a home-health institutional claim with appropriate facility, patient status, and revenue code data When claim generation is requested for a payer configured for 837I Then an X12 005010X223A2-compliant 837I file is produced with valid envelopes and unique control numbers And institutional-specific elements are populated (e.g., Type of Bill, patient status, statement dates, revenue codes, HCPCS where required) And the total billed amount equals the sum of all service line charges And the file passes structural validation with zero errors by an X12 parser And the claim record stores the claim type as 837I and references the file ID and version
Payer Companion Guide Rule Enforcement
Given a trading partner configured with companion guide rules (required/excluded loops, segments, qualifiers, element lengths, filename pattern) When an 837P or 837I file is generated for that payer Then payer-specific segments and qualifiers are applied (e.g., REF with payer-required qualifier, NM1/HI usage, claim frequency code) And elements violating payer-defined formats or lengths are flagged with deterministic error codes referencing the specific rule And prohibited segments per the guide are omitted And the output filename conforms to the payer’s required pattern And the claim is not queued for submission if any companion guide rule yields an error
SNIP Level 1–4 Pre-Submission Validation Gate
Given a batch of generated 837 files awaiting submission When SNIP Levels 1–4 validations are executed Then all structural (Type 1), inter-segment (Type 2), external code set (Type 3), and inter-segment situational (Type 4) errors are detected And each error includes loop:segment:element position, offending value, and SNIP category in the results And files with any SNIP error are blocked from submission and tagged as "Validation Failed" And files with zero SNIP errors are marked "Validation Passed" and eligible for routing And validation results are written back to associated claim records with timestamp and validator version
Identifier Normalization and Crosswalk Resolution
Given provider, organization, and payer identifiers present in CarePulse (NPI, taxonomy, submitter/receiver IDs, payer IDs) When an 837 is generated Then NPIs are normalized to 10-digit numeric without formatting and validated via checksum logic And taxonomy codes are normalized to 10-character format and validated against the latest code set And payer IDs and receiver IDs are resolved via the trading partner crosswalk; unknown mappings produce a blocking error And all normalized identifiers populate the correct loops/segments (e.g., NM1, REF, PER) per claim type And any identifier-related error is recorded on the claim with a clear fix-forward message
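The NPI checksum mentioned above is the Luhn algorithm applied with the standard 80840 card-issuer prefix (per the NPI check-digit specification). A sketch:

```python
def npi_is_valid(npi: str) -> bool:
    """Luhn check for a 10-digit NPI, prefixed with the standard 80840
    card-issuer identifier before summing."""
    if len(npi) != 10 or not npi.isdigit():
        return False
    digits = [int(d) for d in "80840" + npi]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

The commonly cited test NPI 1234567893 passes; flipping the check digit fails.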
Test vs. Production Mode Separation
Given a trading partner with both test and production endpoints configured When the user selects Test mode for generation and submission Then ISA15 is set to "T" and test submitter/receiver IDs and endpoints are used And control numbers (ISA13/GS06/ST02) follow a test-specific sequence separate from production And files generated in Test mode are stored in a segregated namespace and are never routed to production channels And switching to Production sets ISA15 to "P" and uses production IDs/endpoints with production control number sequences
Auditability: Preview, Hash, and Claim Feedback
Given a generated 837 file When the user opens the preview Then a human-readable view displays key headers and each claim/service line with references to loop and segment identifiers And a SHA-256 hash of the exact X12 payload is computed, stored on the claim record, and shown in the UI And downloading the file yields a payload whose hash matches the stored value And any validation errors/warnings associated with this file version are listed on the claim record with severity, codes, and impacted fields
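Computing the stored hash and re-verifying a download against it is a short `hashlib` routine. A sketch — the key point is hashing the exact transmitted bytes, not a re-serialized copy:

```python
import hashlib

def payload_hash(x12_payload: bytes) -> str:
    """SHA-256 over the exact X12 bytes; stored on the claim record so a
    later download can be compared against the stored value."""
    return hashlib.sha256(x12_payload).hexdigest()

def download_matches(stored_hash: str, downloaded: bytes) -> bool:
    """Integrity check: the downloaded payload must hash to the stored value."""
    return payload_hash(downloaded) == stored_hash
```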
Universal Connectors (CSV, SFTP, API)
"As an integrations engineer, I want reusable CSV/SFTP/API connectors with field mapping so that we can onboard new payers quickly without custom code."
Description

Delivers reusable connectors to transform internal claim data into payer-specific CSV schemas or API payloads, manage schema versions, and handle secure transport (SFTP with key management, TLS, optional PGP encryption, OAuth2/API keys). Includes a mapping UI with templates, field-level validations, and test fixtures for quick onboarding of new payers without custom code. Connection health checks and retries are built in to maintain reliable, compliant transmissions.

Acceptance Criteria
CSV Mapping Template Creation and Validation
- Given the user selects a payer CSV template and uploads a 100-row internal sample, When required fields are mapped, Then 100% of missing required fields are flagged before save with row and column references.
- Given field-level rules exist for type, length, and enumerations, When the sample is validated, Then all violations are listed with codes and counts, and the mapping cannot be saved until critical errors = 0.
- Given the mapping has zero critical errors, When the user previews output, Then the CSV preview shows exact column order, delimiter, quoting, header setting, and filename pattern {payer}_{YYYYMMDD}_{batch}.csv per template.
- Given the user saves the mapping, Then a new mapping version with semantic version, author, timestamp, and checksum is created and can be set as default.
- Given the provided payer fixture is executed, Then validation completes under 30 seconds for up to 10,000 rows with 0 critical errors.
API Payload Construction and Schema Validation
- Given the user selects a payer API schema version and maps fields, When validating a 50-claim sample, Then the generated JSON payload conforms to the provided JSON Schema 100% (no schema errors) or returns precise error paths for non-conformant objects.
- Given the payer API requires batching with a max batch size N, When sending, Then claims are segmented into batches size <= N while preserving claim order.
- Given an idempotency key strategy is configured, When retries occur, Then no duplicate submissions are created at the payer (confirmed via 2xx response with duplicate indicator or no change in payer ticket ID).
- Given the schema minor version changes, When the user attempts to reuse an existing mapping, Then a compatibility check runs and blocks activation if breaking changes are detected.
SFTP Connection, Key Management, and Atomic Uploads
- Given the user generates an SSH keypair in-app or uploads a public key, Then the private key is stored encrypted (AES-256) and is never displayed after creation.
- Given hostname, port, username, and server host key fingerprint are provided, When Test Connection runs, Then the client verifies the host key match and establishes an SFTP session using SHA-2 MACs and modern ciphers; password auth is disabled.
- Given a file upload starts, Then the file is uploaded to a temporary name and atomically renamed on completion; partial files are removed on failure.
- Given a transfer finishes, Then the system records remote path, file size, checksum, and server confirmation, and marks the job Successful only if all checks pass.
OAuth2 and API Key Authentication with Robust Retries
- Given OAuth2 client credentials are configured, When requesting access, Then tokens are obtained and stored securely with expiry and refreshed with a 5-minute buffer; 401 responses trigger a single refresh-and-retry.
- Given API key auth is configured, Then the API key is stored encrypted and sent only over TLS 1.2+ using the configured header/query location.
- Given a 429 or 5xx response is received, When retry policy applies, Then exponential backoff with jitter is used (initial delay 2s, multiplier 2x, max delay 120s, max attempts 5) and Retry-After headers are honored.
- Given retries are exhausted, Then the submission is marked Failed with last error code and response body snippet stored for diagnostics.
Schema Version Management and Safe Rollback
- Given a new payer schema version is imported, When compared to the active version, Then a diff lists added/removed/changed fields with compatibility classification (non-breaking/breaking).
- Given a breaking change is detected, When upgrading, Then the migration wizard enforces remapping of affected fields and blocks publish until critical mappings are resolved.
- Given a new mapping version is published, Then one-click rollback to the previous version is available and every publish/rollback is audit-logged with actor, time, and reason.
- Given dual-run is enabled, When the next batch is processed, Then both versions run in parallel and a parity report shows >= 99.5% field-level match or flags discrepancies by field.
Validation Errors, User Feedback, and Fixtures
- Given the user runs validation on a mapped payer, Then error messages include field name, row index, error code, suggested fix, and sample value for each violation.
- Given a payer-provided fixture is available, When executed, Then the system generates expected output files/payloads that match fixture checksums and payer sample acceptance rules.
- Given a 10,000-row fixture is validated, Then validation completes in under 60 seconds with peak memory under 512 MB on standard deployment.
- Given all critical errors are resolved, Then the UI enables Publish Mapping and the test status is recorded as Passed.
Optional PGP Encryption and Key Rotation
- Given PGP encryption is enabled for a payer, When preparing a file for transport, Then the file is encrypted with the payer’s public key (RSA 2048+ or ECC) and optionally signed with the sender’s private key; plaintext is not persisted to disk.
- Given a payer public key will expire within 30 days, Then a warning alert is sent daily until rotation or deactivation.
- Given a new PGP key is uploaded, When a canary test runs, Then encrypt/decrypt succeeds against the provided test keypair before activation; on failure, the old key remains active.
- Given encryption is enabled, When a file is uploaded, Then the system verifies the remote file’s checksum matches the local encrypted checksum and records the key fingerprint used.
Guided Portal Upload & Proof Capture
"As a billing coordinator, I want a guided portal upload with proof capture so that non-automated submissions are timely and auditable."
Description

Provides a step-by-step workflow for payers that require manual portal submissions, pre-packaging files with correct names and metadata, showing tailored instructions, and capturing timestamped proof (confirmation numbers, screenshots, and upload receipts). Tracks completion status, associates evidence with the batch, and triggers reminders before cut-offs. This creates an auditable trail and reduces the risk of missed or late submissions when automation is not possible.

Acceptance Criteria
Pre-Packaged File Naming and Metadata
Given a payer configured for manual portal submission with a defined file-naming convention and required metadata fields When the user initiates "Prepare Portal Package" for a selected batch Then the system generates an export whose file names match the configured pattern exactly, including payer ID, batch ID, YYYYMMDD date, and sequence if split And the package includes a manifest containing all required metadata populated from the batch And the system validates total size and per-file size against payer limits and blocks package creation with a clear error if limits would be exceeded And a checksum is produced for each file and stored for audit
Payer-Specific Guided Steps and Portal Link
Given a payer profile contains a portal URL, step-by-step instructions, and a cut-off time When the user starts the Guided Portal Upload wizard for a batch Then the wizard displays the payer’s portal URL, the live cut-off time in the payer’s timezone, and the number of files to upload And instructions are segmented into numbered steps with checkboxes for completion And the Next/Complete actions remain disabled until all required steps on the current screen are checked And the instructions version is displayed and logged upon wizard start
Proof Capture: Confirmation Numbers and Screenshots
Given the user has completed the upload in the external payer portal When the user records proof in the wizard Then the system requires at least one of: a confirmation number entry, a receipt file upload (PDF/PNG/JPG), or a screenshot upload And any entered confirmation number must match the payer’s configured regex format And uploaded files are validated for MIME type and limited to 20 MB each And the system timestamps proof in UTC and payer local time And OCR, when enabled, extracts a confirmation number from uploaded artifacts and flags mismatches for review
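Validating an entered confirmation number against the payer-configured regex is a full-match check. A sketch (the sample pattern is hypothetical, for illustration only):

```python
import re

def confirmation_matches(value: str, pattern: str) -> bool:
    """True when the entered confirmation number matches the payer's
    configured regex in full (not just a prefix or substring)."""
    return re.fullmatch(pattern, value) is not None
```

Using `fullmatch` rather than `search` prevents partially valid entries such as extra leading or trailing characters from passing.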
Evidence Association and Immutability
Given proof has been saved for a batch When the batch record is viewed or exported for audit Then all proof artifacts are associated to the batch ID, payer, submitting user, and timestamps And artifacts are read-only; any edits create a new version with previous versions retained And deletions are restricted to Compliance Admins and are soft-deleted with an audit log entry including who, when, and reason
Submission Status Update and Activity Log
Given all required proof has been captured in the wizard When the user completes the submission step Then the batch status updates to "Submitted (Portal)" with a submission timestamp And an activity log entry is created containing user ID, device, IP, and a proof summary (types captured, confirmation number if present) And the batch appears in dashboards and exports with the new status within 60 seconds
Cut-Off Reminders and Escalation
Given a batch is pending submission for a payer with a defined cut-off When it is 2 hours before the cut-off and the batch status is not "Submitted (Portal)" Then the system sends a reminder to the assigned owner via in-app notification and email And if still not submitted at cut-off, a second reminder is sent, the batch is flagged "At Risk", and a supervisor is notified And reminders are suppressed automatically once the batch is marked submitted And all timing respects the payer’s timezone and daylight saving rules
Portal Limits: Batch Splitting and Tracking
Given a payer portal enforces a maximum N files per upload or M MB per upload When a batch exceeds these limits during package preparation Then the system proposes a split into sequenced sub-batches that each comply with limits And each sub-batch is assigned a sequence suffix (e.g., -01, -02) and requires its own proof capture And the parent batch shows aggregate completion only when all sub-batches are submitted with proof
Batch Scheduler with Throttling & Cut-off Awareness
"As a scheduling lead, I want batch orchestration with throttling and cut-off awareness so that large claim volumes are sent on time without hitting rate limits."
Description

Implements a queue-based orchestrator that groups claims by payer and channel, computes optimal batch sizes, and enforces rate limits and concurrency to avoid rejections. Schedules submissions around payer cut-off windows with time zone awareness and predictive backlog warnings. Supports pause/resume, deduplication, and idempotent replays, and surfaces schedule commitments to CarePulse dashboards so teams can proactively manage high-volume days.

Acceptance Criteria
Queue Orchestration by Payer and Channel
Given claims for multiple payers and channels are enqueued, when the orchestrator groups them, then each produced batch contains claims from exactly one payer and one channel
Given claims are enqueued with createdAt timestamps, when batches are formed, then claim order within a payer-channel queue is FIFO by createdAt
Given a claim lacks a resolvable payer or channel configuration, when grouping runs, then the claim is not batched and is flagged with error code CP-RTE-001 and a remediation hint
Given a claim is re-queued after a transient failure, when regrouping occurs, then it returns to the same payer-channel queue without duplicating it in any other queue
Dynamic Batch Size Computation
Given a payer profile sets maxItems=500 and maxPayloadBytes=4MB, when 1,200 claims are ready for the same payer-channel, then the system produces three batches (500, 500, 200) and no batch exceeds 4MB
Given claim payload sizes vary such that payload size becomes the limiting factor, when batches are computed, then no batch exceeds maxPayloadBytes and each batch contains the maximum number of claims possible without violating limits
Given a channel-specific override (e.g., API channel maxItems=200), when batches are computed for that channel, then the override is honored over the payer default
Given configuration changes are applied, when the next computation runs, then subsequent batches reflect the new limits without altering already-dispatched batches
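A greedy FIFO pass that respects both the item cap and the payload-size cap reproduces the 500/500/200 example above. A sketch, with claims modeled as `(claim_id, byte_size)` pairs for illustration:

```python
def compute_batches(claims, max_items: int = 500,
                    max_payload_bytes: int = 4 * 1024 * 1024):
    """Greedy FIFO batching: close the current batch when adding the next
    claim would exceed either the item cap or the payload-size cap."""
    batches, current, size = [], [], 0
    for claim in claims:                  # claim: (claim_id, byte_size)
        nbytes = claim[1]
        if current and (len(current) == max_items
                        or size + nbytes > max_payload_bytes):
            batches.append(current)
            current, size = [], 0
        current.append(claim)
        size += nbytes
    if current:
        batches.append(current)
    return batches
```

Because the pass is greedy and order-preserving, each batch holds the maximum claims possible without violating either limit, matching the second criterion.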
Rate Limits and Concurrency Enforcement
Given rateLimitRps=20 and maxConcurrency=3 for a payer-channel, when dispatching 1,000 claims, then observed send rate per 1-second window never exceeds 20 and active concurrent sends never exceed 3
Given the remote endpoint responds with 429 or 503, when retries are attempted, then exponential backoff with jitter is applied up to maxRetries and no retry is initiated that would breach rateLimitRps or maxConcurrency
Given rate limiting is active, when workload spikes, then the system queues overflow without dropping claims and logs a CP-RTE-THROTTLED event with current queue depth and estimated time to drain
Cut-off Window and Time Zone Awareness
Given a payer cut-off of 5:00 PM America/New_York and the batch becomes ready at 4:45 PM ET on a DST-observed day, when scheduling, then the batch is prioritized and submitted before 5:00 PM ET
Given a batch becomes ready at 5:01 PM ET, when scheduling, then it is deferred to the next configured submission window and a committed send time is recorded
Given the user’s agency time zone is America/Chicago and the payer time zone is America/New_York, when times are shown in the dashboard, then user-facing times are displayed in the agency time zone with TZ labels, while scheduling uses the payer time zone
Given a payer-configured blackout window overlaps with the current time, when scheduling, then no submissions are scheduled within the blackout window
Predictive Backlog Warnings Before Cut-off
Given current queue depth, effective throughput (respecting rate limits), and remaining time to cut-off, when the estimated completion time exceeds the cut-off by 15 minutes or more, then an At-Risk warning is raised at least 30 minutes before cut-off
Given an At-Risk warning is raised, when conditions improve such that estimated completion is back before cut-off, then the warning auto-clears within 2 minutes and the status returns to On Track
Given an At-Risk or Missed warning is active, when viewing the dashboard, then payer, channel, estimated completion time, and contributing factors (queue depth, effective rps) are displayed
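The at-risk projection compares estimated drain time against the time remaining to cut-off. A sketch under the 15-minute margin stated above (function and parameter names are illustrative):

```python
def backlog_status(queue_depth: int, effective_rps: float,
                   seconds_to_cutoff: float,
                   risk_margin_s: float = 15 * 60) -> str:
    """'At-Risk' when the projected drain time overruns the cut-off by
    >= risk_margin_s seconds; zero throughput is treated as at risk."""
    if effective_rps <= 0:
        return "At-Risk"
    estimated_drain_s = queue_depth / effective_rps
    if estimated_drain_s - seconds_to_cutoff >= risk_margin_s:
        return "At-Risk"
    return "On Track"
```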
Pause, Resume, and Idempotent Replay with Deduplication
Given a user pauses a payer-channel queue, when in-flight sends complete, then no new dispatches start and the queue state is Paused
Given a paused queue is resumed, when processing continues, then ordering within the payer-channel queue is preserved and no claims are skipped
Given a transient transport failure occurs, when a batch is retried, then the same idempotency key is reused and the payer does not receive duplicate claims
Given a claim with the same claimId is encountered during replay, when deduplication runs, then the duplicate is not sent and an audit entry with reason Duplicate-Suppressed is recorded
Given a replay from checkpoint T is initiated, when processing, then only claims lacking confirmed delivery after T are resent
Dashboard Schedule Commitments and Status Visibility
Given batches are scheduled, when viewing the dashboard, then for each payer-channel the next committed send time, queued batch count, estimated time to empty, cut-off time, and risk status are displayed
Given the scheduler updates a commitment due to rate change, pause, or cut-off, when viewing the dashboard, then the display updates within 10 seconds of the change
Given a batch transitions states (Queued → Sending → Confirmed/Failed), when viewing details, then batch ID, claim count, last update time, and latest confirmation code are shown
Acknowledgment Parsing, Retries & Exception Handling
"As a billing analyst, I want automated acknowledgment parsing and retries so that we catch errors early, resubmit when appropriate, and avoid late-file penalties."
Description

Continuously ingests and parses acknowledgments (e.g., 999/277CA, CSV receipts, API responses), correlates them to batches and individual claims, and updates statuses through a clear submission lifecycle. Implements automatic retries with exponential backoff for transient faults and routes hard rejections to a work queue with root-cause details and suggested fixes. Provides alerts for cut-off risk, aggregates KPIs (first-pass acceptance, rejection rate, mean time to resubmit), and writes an auditable timeline for compliance reporting.

Acceptance Criteria
EDI 999/277CA Parsing and Status Lifecycle Update
Given a valid 999 references a known batch control number, When the file is ingested, Then the system parses it, correlates it to the batch, updates the batch status to Acknowledged or Rejected as indicated, timestamps the update, and stores the raw interchange with checksum.
Given a valid 277CA contains claim-level statuses for a previously submitted batch, When the file is ingested, Then the system maps each claim’s STC/AAA codes to canonical statuses (Accepted, Rejected, Pending) using the configured mapping, updates each claim, captures payer TRN/ICN where present, and records the mapping version used.
Given an acknowledgment references an unknown claim or is a duplicate (same control numbers), When ingested, Then the system flags the item as Unmatched or Duplicate without regressing any existing statuses and creates an exception entry for review.
CSV and API Receipt Normalization to Canonical Statuses
Given a CSV receipt meets the configured schema, When uploaded via SFTP or portal, Then the system validates the schema, rejects files with schema errors with a descriptive exception, and for valid files maps each record to canonical batch/claim statuses with field-level provenance.
Given a successful API response returns batch and claim statuses, When received, Then the system normalizes the payload to canonical statuses identical to EDI mappings, persists the raw payload, and correlates to batch and claim IDs.
Given a partial receipt contains invalid and valid records, When processed, Then valid records are applied, invalid records are logged as exceptions with line numbers, and the overall batch status reflects the aggregate of applied records.
Automatic Retries with Exponential Backoff for Transient Faults
Given a transient fault (HTTP 429/5xx, network timeout, temporary SFTP error) occurs during submission or acknowledgment retrieval, When detected, Then the system schedules retries with exponential backoff and jitter starting at 30 seconds and doubling up to a maximum of 5 attempts or 30 minutes total, whichever is reached first.
Given a retry subsequently succeeds, When the acknowledgment is retrieved or submission confirmed, Then the system updates the lifecycle status accordingly, marks prior attempts in the timeline, and clears the retry schedule.
Given retries are exhausted without success, When the cap is reached, Then the system creates a work-queue item with category Retry Exhausted, includes last error details, and sends a notification to the owning team.
Hard Rejection Routing with Root Cause and Suggested Fix
Given a claim or batch is rejected with actionable reason codes (e.g., AAA03=15, STC*R), When parsed, Then the system classifies the root cause (Eligibility, Patient Demographics, Coding, Payer Enrollment, Format) using maintained rules, attaches the raw codes, and creates a work item with priority based on payer cut-off proximity.
Given a work item is created, When viewed, Then it presents a suggested fix derived from the rule, links to the source encounter, and supports one-click resubmit after correction with validations.
Given a corrected claim is resubmitted, When accepted on the next acknowledgment, Then the work item auto-resolves and the timeline records the resolution with user and timestamp.
Cut-off Risk Prediction and Alerts
Given payer cut-off windows and current queue throughput are configured, When the system projects that unsubmitted or retrying batches are unlikely to meet cut-off within the next 2 hours, Then it raises a Cut-off Risk alert with severity based on predicted lateness and sends notifications via in-app and email.
Given conditions improve and risk drops below threshold for 15 minutes, When evaluated, Then the system auto-downgrades or clears the alert and records the change.
Given repeated risk conditions within a 60-minute window, When detected, Then the system de-duplicates alerts and updates the existing alert’s timestamp and details instead of creating new alerts.
KPI Aggregation and Exposure
Given ongoing submissions and acknowledgments, When daily processing completes, Then the system computes KPIs: First-Pass Acceptance Rate, Rejection Rate, Mean Time to Resubmit, and Acceptance After Resubmit per payer and overall for rolling 7/30 days and month-to-date.
Given a user with reporting access requests KPIs, When viewing the dashboard or exporting CSV, Then the metrics reflect the latest data within 5 minutes, include definitions and filters (payer, date range), and totals reconcile to counts of batches/claims.
Given data backfill or replay occurs, When completed, Then KPI time series are recomputed to maintain consistency, and a data-refresh event is logged in the audit timeline.
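The KPI arithmetic can be sketched over a simple claim-outcome list. Field names here are illustrative; real definitions would be computed per payer and per rolling window as described above:

```python
def kpis(claims: list[dict]) -> dict:
    """Compute illustrative KPI values from per-claim outcome records.
    Each record carries 'first_pass_accepted' (bool), 'rejected' (bool),
    and, for resubmitted claims, 'resubmit_seconds'."""
    total = len(claims)
    first_pass = sum(c["first_pass_accepted"] for c in claims) / total
    rejection = sum(c["rejected"] for c in claims) / total
    resubmits = [c["resubmit_seconds"] for c in claims if c.get("resubmit_seconds")]
    mean_resubmit = sum(resubmits) / len(resubmits) if resubmits else None
    return {
        "first_pass_acceptance": first_pass,
        "rejection_rate": rejection,
        "mean_time_to_resubmit_s": mean_resubmit,
    }
```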
Auditable Timeline and Provenance
Given any submission, acknowledgment, retry, status change, user correction, or alert event, When it occurs, Then an immutable timeline entry is written with timestamp (UTC), actor/system, event type, source channel, related IDs (batch, claim), and payload checksum, and cannot be edited post-write.
Given an auditor exports a claim or batch history, When requested, Then the system produces a chronological timeline with all related events, raw files/JSON attachments, and status transitions suitable for compliance reporting and includes a verification hash.
Given duplicate or out-of-order acknowledgments arrive, When processed, Then the timeline records them as duplicates or late-arriving without altering the final status.

Receipt Loop

End-to-end status tracking that ingests payer acknowledgments and portal receipts (e.g., 999/277CA for EDI) and translates them into clear states: Accepted, Pending, or Rejected with reason codes. Links directly back to the export and the exact records to fix, enabling quick corrections and re-submission.

Requirements

EDI Acknowledgment Ingestion
"As an operations manager, I want payer acknowledgments automatically ingested and parsed so that I can see current claim and visit status without manually checking multiple systems."
Description

Implement a secure, scalable ingestion pipeline for payer acknowledgments (e.g., X12 999 and 277CA) via AS2 and SFTP, with support for PGP encryption, certificate rotation, and idempotent processing. Parse and validate ISA/GS/ST envelopes, correlate acknowledgments to outgoing submissions using control numbers, deduplicate files, and extract per-claim/visit statuses and reason codes. Persist both raw files and normalized artifacts, handle TA1/interchange errors, and expose processing health metrics, retries, and alertable failures. Deliver near real-time updates to downstream Receipt Loop components while maintaining HIPAA-compliant handling of PHI.

Acceptance Criteria
Secure AS2/SFTP Ingestion with PGP and MDN
Given payer partners are configured with AS2 identifiers, endpoints, certificates, SFTP credentials, and PGP keys When a 999 or 277CA acknowledgment is received via AS2 over TLS 1.2+ with a PGP-encrypted payload, or deposited to the SFTP inbox as a PGP-encrypted file Then the system verifies the AS2 signature, validates the certificate chain and partner identity, decrypts the payload using the configured private key, and enqueues the file for processing And on AS2, a signed MDN is returned for success or failure within 10 seconds And if signature verification, certificate validation, or decryption fails, the file is quarantined, no PHI is logged, and an alert is emitted within 2 minutes And the time from transport receipt to enqueue for processing is ≤ 5 seconds at p95 under nominal load
Idempotent Processing and Deduplication
Given files may be retried or resent by payers or transport layers When a file with the same content hash or the same ISA13/GS06/ST02 tuple is received within a 30-day idempotency window Then the system marks it as a duplicate and does not re-process or emit duplicate downstream updates And an idempotency record is persisted with first-seen timestamp and original processing outcome And if a later file reuses control numbers but differs in content, it is quarantined as a conflict and an alert is raised And downstream Receipt Loop side effects are exactly-once with respect to the original file
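The dedup rule above — same content hash or same ISA13/GS06/ST02 tuple within a 30-day window — can be sketched as follows. This is a minimal in-memory illustration (a real pipeline would use a durable keyed store); the class and method names are hypothetical.

```python
import hashlib
from datetime import datetime, timedelta

IDEMPOTENCY_WINDOW = timedelta(days=30)

class IdempotencyStore:
    """In-memory sketch; production would persist idempotency records durably."""
    def __init__(self):
        self._seen = {}  # key -> first-seen timestamp

    def check(self, raw_bytes, isa13, gs06, st02, now=None):
        now = now or datetime.utcnow()
        content_key = ("hash", hashlib.sha256(raw_bytes).hexdigest())
        control_key = ("ctrl", isa13, gs06, st02)
        hit = self._seen.get(content_key) or self._seen.get(control_key)
        if hit and now - hit <= IDEMPOTENCY_WINDOW:
            if content_key in self._seen:
                return "duplicate"   # same content: no re-processing
            return "conflict"        # reused control numbers, different content: quarantine + alert
        self._seen[content_key] = now
        self._seen[control_key] = now
        return "process"
```

The two keys are checked independently so that a resent file is silently deduplicated, while a control-number collision with different content is surfaced as a conflict rather than overwritten.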
Envelope Parsing and TA1/999 Validation
Given a decrypted X12 acknowledgment file (TA1 or 999 005010X231A1) When parsing ISA/GS/ST envelopes and control segments Then ISA/GS/ST control numbers, counts, and required fields are validated; structural errors generate a TA1-based "Rejected: Interchange" outcome with codes persisted And 999 AK2/AK5/AK9 segments are parsed to determine transaction and functional group acceptance (Accepted, Accepted with Errors, Rejected) with codes normalized And parse/transient failures are retried up to 3 times with exponential backoff; after final failure the file is sent to a dead-letter queue with redacted diagnostics
Correlation of 999/277CA to Outgoing Submissions
Given outgoing submissions are stored with ISA13, GS06, ST02, and per-claim/visit identifiers When a 999 or 277CA acknowledgment arrives Then the system correlates acknowledgments to the originating batch at interchange, group, and transaction levels via control numbers And for 277CA, per-claim/visit statuses and reason codes are extracted (e.g., STC) and mapped back to the exact records to update And acknowledgments with no matching submission are placed in an "Unmatched" queue with an alert emitted within 1 minute And correlation accuracy is ≥ 99.9% on seeded test datasets with known ground truth
HIPAA-Compliant Persistence of Raw and Normalized Artifacts
Given secure storage is configured for acknowledgment artifacts When an acknowledgment is processed Then the original received payload (encrypted if applicable) and the decrypted X12 content are persisted with immutable checksums And a normalized artifact is stored per batch and per claim/visit, including mapped statuses and reason codes And all artifacts are encrypted at rest (AES-256) and access is restricted to authorized service roles with audited access logs; PHI is redacted from application logs And retention is configurable with a default ≥ 6 years; deletion follows secure wipe procedures
Processing Metrics, Retries, Alerts, and Near Real-Time Delivery
Given system monitoring and alerting are enabled When acknowledgments are ingested and processed under nominal load (up to 100 files/min) Then p95 end-to-end latency from receipt to downstream Receipt Loop update is ≤ 60 seconds, and p99 ≤ 5 minutes And metrics are emitted for ingest_latency, queue_depth, processed_count, decrypt_failures, parse_failures, unmatched_count, retry_count, dlq_count, and per-ack-type counts And transient failures are retried up to 3 times with exponential backoff (max 15 minutes); after final failure the item is moved to a dead-letter queue and an alert is sent within 2 minutes And a health endpoint exposes current backlog, last-processed timestamp, and error rates with HTTP 200 when within SLOs
Certificate and Key Rotation with Zero Downtime
Given partners rotate AS2 certificates and PGP keys on a scheduled cadence When new certificates/keys are added with an activation window overlapping the old ones Then messages signed/encrypted with either old or new material are accepted during the overlap without downtime or message loss And after the cutover time, messages using expired/retired material are rejected with a specific error and alert And expiration alerts are emitted 30, 7, and 1 days prior to certificate/key expiry; all rotation events are auditable with actor, timestamp, and previous/new thumbprints
Payer Portal Receipt Connectors
"As an operations manager, I want receipts automatically pulled from payer portals so that our status view is complete even when EDI isn’t available."
Description

Provide a connector framework to retrieve receipts and acknowledgments from payer portals that lack EDI, using native APIs when available or secure headless browser automation where permitted. Store credentials in an encrypted vault, support MFA flows (e.g., OTP/app prompts) through delegated tokens, schedule polling with exponential backoff, and ingest email or PDF receipts via a secure parsing pipeline (including OCR when necessary). Normalize retrieved artifacts to the same schema as EDI, respect portal terms of use, and provide robust error handling, fallbacks, and observability for connector health.

Acceptance Criteria
API Connector Auth & Receipt Retrieval
Given a payer portal offers a native API and valid credentials are stored in the encrypted vault When a sync (scheduled or manual) is initiated Then the connector authenticates via the portal API using the stored credentials without exposing secrets in logs And if MFA is required, the connector obtains and uses a delegated token to complete the flow within 60 seconds And the connector retrieves all new receipts/acknowledgments since the last successful sync (delta sync) for the configured time window And the connector stores the raw API responses and artifacts with timestamps, source IDs, and correlation IDs And the sync is idempotent: re-running for the same window produces no duplicate artifacts
Headless Browser Automation with MFA
Given a payer portal lacks a usable API and its Terms of Use permit automated access When the connector runs in headless mode Then it logs in using credentials from the encrypted vault without rendering secrets to stdout/stderr And it completes MFA using a delegated OTP/app prompt token flow with up to 3 retries And it navigates to the receipts/acknowledgments area and downloads available artifacts (PDF/CSV/HTML) for the delta window And it throttles requests to stay below portal rate/interaction limits and respects robots/ToU constraints And screenshots and DOM snapshots are captured only for troubleshooting with sensitive fields masked And downloaded artifacts are stored with hashes for integrity and deduplication
Encrypted Credential Vault & Rotation
Given connector credentials, session cookies, and tokens must be stored When secrets are persisted Then they are encrypted at rest using envelope encryption with a managed KMS and AES-256-GCM And access is restricted by least-privilege policies and audited with user/connector identity and timestamp And secrets are never logged; any display is masked by default And the system supports credential rotation without downtime, with automatic invalidation of old sessions within 5 minutes And unsuccessful auth attempts beyond a threshold lock the connector and notify admins
Polling Scheduler with Exponential Backoff
Given each connector has a configurable polling cadence and window When a run succeeds Then the next run is scheduled based on the configured interval and last successful watermark When a run encounters a transient error (HTTP 5xx, timeouts, rate limit) Then exponential backoff with jitter is applied with a maximum backoff ceiling and capped retries per run And the scheduler ensures only one active run per connector (no overlap), with queueing for manual triggers And rate limiting is enforced per-portal to remain under documented thresholds And operators can pause/resume a connector; paused connectors do not execute until resumed
Email/PDF Receipt Ingestion with OCR
Given a secure mailbox is configured with OAuth2 read-only access and allowed sender domains When new messages arrive matching configured subjects/senders Then attachments (PDF/CSV/TXT) and inline content are ingested, checksummed, and deduplicated by message-id+hash And PDFs/images are processed via OCR when text extraction fails, achieving ≥ 95% field extraction accuracy on a test set And raw emails and artifacts are stored immutably with metadata (sender, received time, DKIM/SPF result) And parsing failures route messages to a quarantine queue with a structured error and alert to operators within 5 minutes
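Deduplicating by message-id plus content hash means the same attachment resent under a new message-id is still ingested, while a true redelivery is dropped. A minimal sketch:

```python
import hashlib

def email_artifact_key(message_id, attachment_bytes):
    """Dedup key combining the message-id with a content hash of the artifact."""
    digest = hashlib.sha256(attachment_bytes).hexdigest()
    return f"{message_id}:{digest}"

def ingest(message_id, payload, seen):
    key = email_artifact_key(message_id, payload)
    if key in seen:
        return "duplicate"
    seen.add(key)
    return "ingested"
```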
Normalization to Unified Receipt Schema
Given heterogeneous artifacts (API, browser downloads, emails, PDFs) are ingested When normalization runs Then each artifact is mapped to the unified receipt schema used for EDI (e.g., 999/277CA parity) And each normalized record has a definitive state in {Accepted, Pending, Rejected} with payer/source reason codes And each record links to the originating export batch and the exact claim/visit/record IDs for correction And required fields (payer, submission date, artifact source, correlation ID, reason code when Rejected) are validated; invalid records are flagged and not published And the normalized payload preserves a reference to the original artifact for audit
Connector Health, Errors, and Alerts
Given connectors operate continuously When observing runtime metrics Then a health dashboard shows per-connector: last success time, lag (minutes), success rate (24h), error categories, and next scheduled run And structured logs include correlation IDs and exclude PII/PHI by default And alerts are sent on consecutive failures ≥ N, auth/MFA expirations, or lag exceeding SLA (e.g., 95% of receipts within 60 minutes of availability) And on hard failures of the primary path, a permitted fallback (API↔headless or email) is attempted once and the reason is recorded And error handling classifies errors (auth, MFA, parsing, rate-limit, ToU) and recommends next actions
Status Normalization & Code Mapping
"As a billing specialist, I want clear, consistent statuses with meaningful reason codes so that I can understand issues quickly and take the correct action."
Description

Translate heterogeneous acknowledgment payloads into a canonical state model of Accepted, Pending, or Rejected, enriching each item with payer-specific reason codes and human-readable guidance. Maintain versioned mapping tables per payer, support rapid updates to new codes, and record original raw codes for traceability. Aggregate multi-stage acknowledgments into the most relevant current state while surfacing timestamps and sources. Provide a configuration UI/service to manage mappings and display concise, actionable messages that indicate likely fixes.

Acceptance Criteria
Normalize Heterogeneous Acknowledgments to Canonical States
Given acknowledgment payloads from EDI 999, 277CA, payer portals, and custom APIs across multiple payers When the system ingests a file or message and processes normalization Then each referenced claim/visit is assigned exactly one canonical state in {Accepted, Pending, Rejected} using the active payer-specific mapping at ingestion time And the normalized record includes payer_id, original_reason_code(s), source_type, mapping_version, normalized_state, guidance_text, and effective_timestamp And items with multiple reason codes map to the highest-severity resulting state where severity order is Rejected > Pending > Accepted And items with no applicable mapping are flagged and handled per the unknown code policy
Aggregate Multi-Stage Acknowledgments Into Current State
Given multiple acknowledgment events for the same submission key (payer_id, export_id, claim/visit_id) When events are ingested potentially out of order Then the current_state equals the highest-severity state among events with the latest effective_timestamp (severity: Rejected > Pending > Accepted) And effective_timestamp uses the payer-provided event time when available; otherwise the system receipt time And the UI shows current_state, effective_timestamp, and source for that state And the system maintains a complete, immutable history of all events with raw codes and sources And processing is idempotent; re-ingesting identical events does not create duplicates or change current_state
Persist Raw Codes and Mapping Traceability
Given any normalized item, When it is written, Then the system persists original raw code(s), raw message snippet (up to 4 KB), source document/type (e.g., 999, 277CA, portal), mapping_table_id and mapping_version, normalization_run_id, and user/service actor And these fields are retrievable via API and UI audit view by authorized users And exports and audit-ready reports include either these fields or a stable reference And historical items keep their original mapping_version; later mapping edits do not change past normalized states unless an explicit reprocess job is executed and recorded
Versioned Payer Mapping Management
Given a payer mapping table When an admin creates or edits a mapping entry via the configuration UI/service Then validation enforces uniqueness of (payer_id, code), required canonical_state in {Accepted, Pending, Rejected}, non-empty guidance_text <= 200 chars, and effective_at timestamp And the change is saved as a new version with version_id, author, timestamp, change_summary, and effective_at And admins can preview the impact on a provided sample payload before publishing And admins can activate the new version immediately or schedule via effective_at And admins can roll back to any prior version; the system records the rollback as a new version And the active version resolved at runtime is the latest version where effective_at <= now
Rapid Adoption of New/Unknown Codes
Given an unmapped payer code is encountered during ingestion When normalization runs Then the item is classified as Pending with guidance "Unknown payer code <code>: mapping required" and an admin alert is created with payer_id, code, and sample And ingestion continues; the batch completes without failure due to unmapped codes And after an admin publishes a new mapping for that code, subsequent ingestions classify the code per the mapping without redeploy And an admin can trigger reprocessing for previously affected items by payer_id, code, and date range; reprocessing records a new normalization_run_id and preserves prior history
Actionable Guidance and Remediation Links
Given a normalized item with state Pending or Rejected When displayed in the UI or retrieved via API Then the item includes a concise guidance_text (<= 200 chars) specific to the payer/code with a likely fix And a deep link to the originating export and to the exact record(s) requiring correction is present and functional And if the system supports resubmission, a "Prepare Resubmission" action is available that loads the corrected context without leaving the page And messages contain no PHI beyond what is already visible in the current context and are safe for audit export
Submission-Record Traceability Links
"As a CarePulse user, I want to jump from a status or rejection directly to the exact record to fix so that I can resolve issues without hunting through batches."
Description

Establish end-to-end linkage from each acknowledgment back to the originating export batch and the exact records (claims/visits) it affects. Persist identifiers such as file names, batch IDs, control numbers, and segment references to enable deep links to the source record and field-level context. Support one-to-many and many-to-one relationships across batches and corrections, enforce role-based access, and surface lineage in the UI so users can navigate from a rejection directly to the item requiring attention.
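The lineage record described above can be sketched as an immutable value object plus a batch-keyed index; the field names mirror the identifiers listed in the description, but the schema itself is illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: lineage entries are read-only once written
class LineageEntry:
    """Identifiers persisted per acknowledgment/batch link (illustrative schema)."""
    file_name: str
    batch_id: str
    isa13: str
    gs06: str
    st02: str
    segment_refs: tuple = ()  # e.g., ("AK2", "IK3", "IK4") references

def lineage_index(entries):
    """Lookup filterable by batch_id, as the acceptance criteria require."""
    idx = {}
    for e in entries:
        idx.setdefault(e.batch_id, []).append(e)
    return idx
```

One-to-many and many-to-one relationships fall out naturally: multiple entries may share a batch_id, and the same claim can appear in entries across several batches.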

Acceptance Criteria
Deep Link from Acknowledgment to Export Batch
Given an ingested acknowledgment (999 or 277CA) containing file_name, batch_id, and control numbers (ISA13, GS06, ST02) When a user opens the acknowledgment detail view Then the UI displays a clickable link to the originating export batch that matches these identifiers And when the link is clicked by an authorized user Then the batch detail page loads within 2 seconds at P95 and displays the same identifiers for verification And when an unauthorized user attempts to open the link Then access is denied (HTTP 403 equivalent in-app), no PHI is displayed, and the event is audit-logged
Navigate from Rejection to Affected Records
Given a 277CA with claim-level or service-level rejections referencing AK2/IK3/IK4 segments When the user selects "View affected records" Then the system lists all impacted claim/visit records with record ID, patient name, and date of service And then the count of listed records equals the number of unique references parsed from the acknowledgment And then each listed record includes a direct link to the record detail that opens in a new tab with the acknowledgment context token attached And when any link is clicked Then the target record page loads within 2 seconds at P95 and displays a banner with the rejection reason codes and human-readable messages
Persist and Display Lineage Identifiers
Given any acknowledgment or export batch, When it is processed, Then the system persists file_name, batch_id, ISA13, GS06, ST02, and segment references (AK2, IK3, IK4) in the lineage store And when viewing any related export batch, acknowledgment, claim, or visit record Then a read-only "Lineage" panel displays these identifiers consistently and matches stored values And then all lineage fields are included in API responses and are filterable by batch_id and control numbers And then lineage data survives system restarts and is retrievable for at least 2 years
Cross-Batch One-to-Many and Many-to-One Mapping
Given a claim has been exported multiple times across different batches due to corrections When viewing the claim's lineage Then all associated batches and acknowledgments are displayed in chronological order with statuses Accepted, Pending, or Rejected And then the UI indicates cardinality (e.g., 1→N, M→1) for relationships without duplication And then counts in UI match the underlying relationship records with zero discrepancy And when a previous batch is superseded Then links to superseded items remain accessible and labeled as "Superseded"
Role-Based Access Control on Traceability Links
Given role assignments (Admin, Operations Manager, Biller, Caregiver) exist When accessing deep links Then Admin, Operations Manager, and Biller roles can navigate to batch and record details; Caregiver cannot And then unauthorized access attempts are blocked with a non-PHI message and create an audit log entry with user_id, target_type, target_id, timestamp, and outcome And then links include only least-privilege query parameters (no PHI), and any context token expires after 30 minutes of inactivity
Field-Level Context and Quick Correction
Given a rejection with IK3/IK4 indicating a specific element (e.g., subscriber ID) When the user opens the source record via the deep link Then the corresponding field is highlighted and the reason codes with human-readable guidance are shown inline And when the user has edit permission and clicks "Fix and resubmit" Then the edit form opens with the field focused and upon save, a resubmission job is queued with the corrected record ID and the lineage maintains the new batch linkage And then the user is returned to the acknowledgment detail with the item marked as "Correction queued"
Quick Correction & Resubmission
"As an operations manager, I want to correct and resubmit rejected items in a few clicks so that reimbursements aren’t delayed and compliance is maintained."
Description

Enable an accelerated correction flow that can be launched from a rejected item, pre-populating relevant fields, validating edits against payer and EDI rules, and saving changes as a new version. Regenerate the export payload for the corrected records, prevent duplicate submissions, and route the resubmission to the correct destination with proper control numbers. Track resubmission attempts and outcomes, support optional approval steps, and update Receipt Loop statuses automatically upon new acknowledgments while preserving a complete history.

Acceptance Criteria
Launch Correction Flow from Rejected Item
Given a user is viewing a Receipt Loop item with status Rejected and visible reason codes When the user clicks "Correct" Then the correction form opens within 2 seconds And all editable fields are pre-populated from the rejected submission And payer, trading partner, and control numbers from the rejected attempt are displayed as read-only metadata And a link back to the source export and the exact impacted records is present And only the affected records are included in the correction context
Payer and EDI Rule Validation on Edit and Submit
Given the user edits one or more fields in the correction form When the user attempts to Save Draft Then client-side validations flag missing/invalid required fields with field-level messages And rule references (payer rule ID or X12 segment/element) are displayed for each error Given a draft exists and the user clicks Submit When server-side validation runs Then the submission is blocked until all hard errors are resolved (0 blocking errors) And warnings (non-blocking) are shown with codes and may be overridden with explicit acknowledgment And the validated ruleset version and timestamp are recorded with the submission
Immutable Versioning and Audit History
Given a rejected item is corrected When the user saves changes (draft or submitted) Then a new version number is assigned (vN+1) And the prior version remains read-only and accessible And an audit log entry records user ID, timestamp, changed fields (field-level diff), and justification note (if provided) And the version history shows a linked chain from original submission to current version
Export Regeneration for Corrected Records Only
Given a correction involves one or more records within an export batch When the user submits the correction Then the system regenerates an export payload containing only the corrected records And the payload passes schema validation for the target format (e.g., X12 837/275 or portal CSV) And the regenerated payload is stored with a unique artifact ID and is downloadable And the count of records in the payload equals the number of corrected records
Duplicate Submission Prevention and Control Numbers
Given a record has previously been submitted to a payer When a resubmission is attempted with no material content changes Then the system blocks the submission and displays a duplicate warning referencing the prior attempt ID Given a valid correction with material changes When the payload is generated Then ISA13/GS06/ST02 control numbers are unique and sequential per trading partner configuration And an internal submission hash differs from prior attempts for the same record And the system records the original payer claim control number (if available) for cross-reference
Approval Gating and Correct Routing
Given the organization policy "Require Approval for Resubmission" is enabled When a corrected draft is ready Then the Submit button is disabled until an approver with the correct role approves And the approval captures approver ID, timestamp, and comment Given a submission is approved (or approval not required) When the system dispatches the payload Then it routes to the payer/trading partner mapped for the claim(s) using the configured transport (e.g., SFTP, AS2, or portal upload API) And dispatch metadata includes destination ID, transport, and correlation IDs
Acknowledgment Handling and Status Auto-Update
Given a resubmission has been dispatched When a 999 acknowledgment is received Then the Receipt Loop status updates to Pending if syntactically accepted (AK9=A) And updates to Rejected with 999 error codes if rejected (AK9=R/E) Given a 277CA or portal receipt is received When claim-level statuses are parsed Then each record updates to Accepted or Rejected with payer reason codes And the attempt history increments with outcome, timestamps, and links to raw acks And users are notified per notification settings for status changes
Real-time Alerts & Status Dashboard
"As a manager, I want real-time alerts and a consolidated dashboard so that I can prioritize follow-ups and keep visits and billing on track."
Description

Provide a dashboard that summarizes Accepted, Pending, and Rejected counts by payer, batch, and time range, with filters, trends, and drill-down to affected records. Trigger configurable email and mobile push notifications for new rejections, prolonged pending states, and connector/ingestion failures. Include SLA timers, aging indicators, CSV export, and multi-tenant scoping, ensuring time zone awareness and accessibility on mobile devices to align with CarePulse’s mobile-first approach.

Acceptance Criteria
Dashboard Aggregation by Payer, Batch, and Time Range
Given receipt data exists across multiple payers and batches with Accepted, Pending, and Rejected statuses within a known time range When the dashboard loads with the default time range (Last 7 Days) Then the total Accepted, Pending, and Rejected counts match the seeded data for that range And the dashboard renders within 2 seconds on a broadband connection When the user toggles grouping to Payer Then per-payer Accepted, Pending, and Rejected counts are displayed and sum to the time-range totals When the user toggles grouping to Batch Then per-batch Accepted, Pending, and Rejected counts are displayed and sum to the time-range totals When the user changes the time range (relative preset and absolute dates) Then the counts refresh to reflect the new range within 2 seconds And new receipts ingested are reflected on the dashboard within 60 seconds of ingestion completion
Filtering and Trend Visualization
Given filters for Payer, Batch, Status, and Date Range are available When any single filter is applied Then dashboard counts, charts, and lists include only matching data When multiple filters are combined Then results reflect the logical AND of selections When Trend view is set to Daily Then a chart shows daily counts of Accepted, Pending, and Rejected with correct rollups per day When Trend view is set to Weekly Then the chart aggregates by ISO week with correct week start per tenant time zone When the user hovers or taps a data point Then an exact count and date/interval label is displayed via tooltip/accessibility announcement When the user clears all filters Then the dashboard returns to default filter values within 1 second
Drill-Down to Affected Records and Source Export
Given the user clicks a Rejected count on the dashboard When the drill-down opens Then a paginated list displays the affected records with columns: Payer, Batch, Record ID, Status, Reason Code(s), Timestamp, SLA Age And each row includes a deep link to the record detail page and to the originating export artifact When the user clicks the record link Then the record detail loads with rejection reason(s) pre-populated for correction within 2 seconds When the user clicks the export link Then the associated export file or portal artifact opens/downloads successfully When the user searches within the drill-down Then results filter in real time (≤300 ms keystroke lag) and the URL reflects the applied filters for shareable deep links
Real-time Rejection Notifications (Email & Push)
Given tenant- and user-level notification preferences are configured When a new rejection event is ingested for any payer within the user’s scope Then an email and a mobile push notification are delivered within 60 seconds containing payer, batch, count, top 3 reason codes, and a deep link to the drill-down When the user sets frequency to Hourly digest Then notifications batch events and send at the top of the next hour with aggregated counts and unique reason codes When the user sets frequency to Daily digest Then notifications send at the configured local time with prior-day aggregates When the user opts out of a payer or of Rejected status Then no notifications are sent for those selections And duplicate suppression batches multiple events occurring within a 5-minute window into a single notification
Prolonged Pending SLA Alerts
Given a per-payer Pending SLA threshold (e.g., 24 hours) is configured When any batch or record remains Pending beyond the threshold Then an alert is sent via email/push within 15 minutes of breach and the dashboard shows an Aging badge with elapsed duration And the item displays a visible SLA timer indicating time since submission and threshold When the item’s status changes from Pending to Accepted or Rejected Then the SLA alert auto-clears and the Aging badge is removed within 2 minutes When the SLA threshold configuration is updated Then new breaches honor the updated value within 5 minutes and existing timers recalculate accordingly
Connector and Ingestion Failure Alerts
Given a payer connector or portal ingestion job is scheduled When a job fails or produces no receipts for a configured stale window (e.g., 2 hours) Then an alert is sent and the dashboard shows a red health indicator with last success timestamp and error message/code When the system retries per exponential backoff and max retries are exceeded Then an escalation notification is sent to the designated on-call role When the next successful run completes Then the health indicator returns to green within 2 minutes and a recovery notification is logged And users only see health for connectors scoped to their tenant
Multi-Tenant, Time Zone, CSV Export, and Mobile Accessibility
Given a user belongs to Tenant A When viewing the dashboard and receiving notifications Then only Tenant A data is visible or included; cross-tenant data access is blocked and audited Given Tenant A has a configured time zone When the user selects date ranges and views timestamps Then all times display in the tenant time zone and exports use ISO 8601 with timezone offset; backend stores UTC When the user clicks Export CSV on any list or drill-down Then a CSV downloads within 10 seconds and reflects the current filter context with columns: Payer, Batch, Record ID, Status, Reason Code(s), Timestamp, SLA Age; filename includes tenant and date range When the dashboard is viewed on a mobile device (≤414 px width) Then content is responsive with no horizontal scroll; touch targets ≥44 px; color contrast ≥4.5:1; all interactive elements have accessible names and are operable via screen reader and keyboard And key interactions (open filters, apply, drill-down) complete within 3 seconds on a 4G connection
Compliance-Grade Audit Logging
"As a compliance officer, I want an audit-ready trail of all Receipt Loop activity so that we can demonstrate control and pass audits without scrambling."
Description

Capture an immutable, tamper-evident audit trail of submissions, acknowledgments, user actions, corrections, approvals, resubmissions, and configuration changes. Store hashed references to raw and normalized artifacts, enforce least-privilege access, and encrypt PHI in transit and at rest. Provide retention controls (e.g., seven-year retention), legal hold, and one-click export of audit-ready reports showing who did what and when, aligning with HIPAA and SOC 2 expectations.

Acceptance Criteria
Immutability and Tamper Evidence for Audit Log
- Given an audit entry exists, When any user attempts to update or delete it via UI or API, Then the system denies the operation (HTTP 405/403) and records the attempt as a separate audit event.
- Given a contiguous range of audit entries, When the hash chain (hash and prev_hash) is validated, Then all entries verify end-to-end and any break triggers a critical alert within 60 seconds.
- Given time is NTP-synchronized, When an entry is written, Then the timestamp is recorded in UTC with millisecond precision and is monotonic within the same request context.
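A minimal sketch of the hash-chain (`hash`/`prev_hash`) verification these criteria describe, assuming each entry is SHA-256 hashed over a canonical JSON payload plus the previous entry's hash; the payload shape and helper names are hypothetical:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel prev_hash for the first entry

def entry_hash(payload: dict, prev_hash: str) -> str:
    """Hash the canonical payload together with the previous entry's hash."""
    body = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(body.encode()).hexdigest()

def append(chain: list, payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"payload": payload, "prev_hash": prev,
                  "hash": entry_hash(payload, prev)})

def verify(chain: list) -> bool:
    """Walk the chain end-to-end; any tampered entry breaks the links."""
    prev = GENESIS
    for entry in chain:
        if entry["prev_hash"] != prev or entry["hash"] != entry_hash(entry["payload"], prev):
            return False  # a break here would trigger the critical alert
        prev = entry["hash"]
    return True
```

Because each hash covers the prior hash, editing or deleting any historical entry invalidates every entry after it, which is what makes the log tamper-evident rather than merely access-controlled.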
Event Coverage for Receipt Loop Transactions and Actions
- Given a submission, acknowledgment (999/277CA), portal receipt, rejection, correction, approval, resubmission, or configuration change occurs, When the action completes, Then exactly one audit entry is created capturing: event_type, actor_id, actor_role, action, target_type, target_id(s), batch/export_id, payer_id, acknowledgment_code, reason_codes, outcome, request_id, timestamp, source_ip, and user_agent.
- Given a rejected acknowledgment with reason codes, When logged, Then the entry includes the reason codes and direct links to the export batch and the specific record IDs to fix.
- Given a resubmission after a correction, When logged, Then the entry references the original submission event_id and the approver’s event_id.
Hashed References to Raw and Normalized Artifacts
- Given a raw EDI file (e.g., 837/999/277CA) or normalized payload is stored, When the audit entry is created, Then SHA-256 digests of both raw and normalized artifacts are persisted with access-controlled URIs.
- Given an artifact is downloaded via audit export, When its hash is recomputed, Then it matches the stored digest; on mismatch, the system blocks delivery and records an integrity_error audit event.
- Given artifacts may contain PHI, When hash metadata is stored, Then no PHI is included in the metadata beyond non-sensitive identifiers and digests.
Least-Privilege Access to Audit Log
- Given defined roles (e.g., Compliance Officer, Admin, Operations Manager, Caregiver), When users access audit logs, Then only principals with Audit.Log.Read may view entries and only Admin/Compliance may export.
- Given a user without the required permission attempts access, When they call the API or UI, Then the response is 403 and an access_denied audit event is recorded with user_id, org_id, ip, and timestamp.
- Given a permitted user views entries that include PHI, When displayed, Then field-level masking is applied by role and any unmask action requires JIT approval that is itself audited.
Encryption of PHI In Transit and At Rest
- Given API and UI traffic, When data is transmitted, Then TLS 1.2+ with strong ciphers and HSTS is enforced; non-TLS requests are rejected and audited.
- Given audit data at rest, When stored in databases and object storage, Then AES-256 encryption with KMS-managed keys is enforced and keys are rotated at least annually or upon incident.
- Given snapshots and exports, When created, Then they are encrypted and access requires role-based authorization; any attempt to create or serve an unencrypted export fails and is audited.
Retention and Legal Hold Controls
- Given default retention, When audit entries exceed seven years and are not under legal hold, Then they are purged by an automated job and a purge summary entry is recorded with counts and the time window.
- Given a legal hold is applied to a tenant, entity, or date range, When the purge job runs, Then held entries are preserved and any delete attempt is blocked and audited.
- Given retention settings are changed, When saved, Then the change requires dual approval and the audit entry records before/after values, approvers, and justification.
One-Click Audit-Ready Report Export
- Given a compliance officer selects a time range, event types, and entities, When Export is clicked, Then PDF and CSV reports are generated within 60 seconds containing who, what, when, where (IP/device), and links to affected records.
- Given the report is generated, When downloaded, Then a manifest includes report_id, UTC generation time, record count, and SHA-256 checksums; recomputation matches the checksums.
- Given the export contains PHI, When the link is issued, Then it requires MFA, expires within 15 minutes, and each download is logged as an audit event.

Fix-Forward Queue

A focused workbench for export exceptions where users can edit fields inline with compliance guardrails, apply corrections across similar claims, and add standardized reason notes. Every change is audited, synced to source records when appropriate, and ready for one-click re-export.

Requirements

Compliance-Guided Inline Editing
"As an operations manager, I want to edit exception fields inline with compliance guidance so that I can fix issues quickly without creating new errors."
Description

Provide inline editing of exception-related fields directly within the Fix-Forward Queue with payer- and state-specific validation rules, field masks, and dependency checks. Enforce guardrails that block noncompliant combinations (e.g., visit duration vs. service code constraints), surface real-time error messages and corrective suggestions, and restrict edits based on role permissions. Support autosave with optimistic concurrency, dirty-state indicators, undo for the last action, and keyboard-first navigation. Preserve mobile-first usability with responsive layouts and accessible controls.

Acceptance Criteria
Payer/State-Specific Validation and Masks
- Given a record in the Fix-Forward Queue with payer Medicaid-X and state NY and a user with Edit Exceptions permission, When the user edits the HCPCS service code field, Then an input mask enforces the format ^[A-Z][0-9]{4}$, auto-upcases alphabetic characters, and rejects nonconforming input.
- Given a state rule that NY forbids modifier U7 for code T1019, When the user enters modifier U7 for service code T1019, Then the modifier field becomes invalid, dependent save/apply actions are blocked, and the error references rule ID NY-MOD-017.
- Given a dependency rule that Place of Service must be 12 for service T1019, When the user changes Place of Service to 11, Then validation fails, focus moves to the invalid field, and the system proposes the correction "Set Place of Service to 12".
- Given all field-level and dependency validations pass, When the user stops editing, Then the autosave controller permits saving the row.
Guardrails for Duration vs Service Code
- Given payer policy that service code T1002 requires a duration between 30 and 120 minutes inclusive, When the user enters a duration of 25 minutes, Then an inline error states "Duration below minimum for T1002 (min 30)" and the edit cannot be saved.
- Given the user corrects the duration to 30 minutes, When validation re-runs, Then the error clears and the change is eligible for autosave.
- Given service code S5130 requires whole 15-minute increments, When the user enters 34 minutes, Then the system blocks the edit, displays the error "Requires 15-minute increments", and offers suggestions 30 or 45.
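These duration guardrails can be illustrated with a small rule table; the rule schema, message strings, and function name below are assumptions for illustration only (the T1002 range and S5130 increment come from the scenarios above):

```python
# Hypothetical rule table keyed by service code; a real system would load
# payer- and state-specific rules from configuration.
DURATION_RULES = {
    "T1002": {"min": 30, "max": 120},   # inclusive range in minutes
    "S5130": {"increment": 15},         # whole 15-minute units
}

def check_duration(service_code: str, minutes: int):
    """Return (ok, error_message_or_None, suggested_values)."""
    rule = DURATION_RULES.get(service_code, {})
    if "min" in rule and minutes < rule["min"]:
        return False, f"Duration below minimum for {service_code} (min {rule['min']})", [rule["min"]]
    if "max" in rule and minutes > rule["max"]:
        return False, f"Duration above maximum for {service_code} (max {rule['max']})", [rule["max"]]
    increment = rule.get("increment")
    if increment and minutes % increment:
        lower = (minutes // increment) * increment
        # offer the nearest valid values, e.g. 30 or 45 for an entry of 34
        return False, "Requires 15-minute increments", [lower, lower + increment]
    return True, None, []
```

Returning suggestions alongside the error is what lets the UI offer one-click corrective actions rather than a bare rejection.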
Real-Time Errors and Corrective Suggestions
- Given any field fails validation due to a payer/state rule, When the field value changes and the user blurs the field or pauses typing, Then an inline error and field outline appear within 500 ms, the message cites the rule source (payer/state) and a short reason, and at least one valid corrective suggestion is shown.
- Given a corrective suggestion is displayed, When the user selects Apply Suggestion, Then the field updates to the suggested value and re-validates automatically.
- Given multiple validation failures in the row, When the row is in error state, Then a consolidated error popover lists each error in field order with a count badge on the row.
Role-Based Edit Restrictions
- Given defined roles Billing Admin (full edit), Billing Reviewer (notes-only), and Caregiver (no access), When a Caregiver attempts to access the Fix-Forward Queue, Then access is denied with the standard insufficient-permission screen and HTTP 403 for API calls.
- Given a Billing Reviewer opens a row, When they focus a restricted field (e.g., service code, modifier, POS), Then the field renders read-only with a lock icon and a tooltip explaining the required role, and API attempts to update it return HTTP 403.
- Given a Billing Admin edits an allowed field, When the change validates, Then the edit is permitted and proceeds to autosave.
Autosave with Optimistic Concurrency
- Given autosave is enabled and connectivity is available, When a user completes an edit that passes validation, Then the change is autosaved within 1 second and a Saved timestamp appears on the row.
- Given another user has modified the same record since it was loaded, When autosave attempts to persist changes, Then a version conflict is detected, no overwrite occurs, the user sees a conflict dialog with field-level diffs, their local edits are preserved in the form, and the actions offered are Reload Remote (default) and Overwrite with My Changes.
- Given the user selects Reload Remote, When the record refreshes, Then local, unsaved edits remain in the input buffer for review and must pass validation again before saving.
- Given an autosave completes successfully, When the save finishes, Then an immutable audit entry is created with user ID, UTC timestamp, field name, old value, new value, applied rule IDs, and source-record sync status; the entry is retrievable via the audit API within 2 seconds.
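A sketch of the optimistic-concurrency check behind this autosave behavior, assuming each record carries an integer version that must match at save time; the store class, exception name, and in-memory layout are hypothetical:

```python
class VersionConflict(Exception):
    """Raised when the record changed since the client loaded it."""

class RecordStore:
    def __init__(self):
        self._rows = {}  # record_id -> {"version": int, "fields": dict}

    def create(self, record_id, fields):
        self._rows[record_id] = {"version": 1, "fields": dict(fields)}

    def load(self, record_id):
        row = self._rows[record_id]
        return row["version"], dict(row["fields"])

    def save(self, record_id, expected_version, changes):
        row = self._rows[record_id]
        if row["version"] != expected_version:
            # No overwrite occurs; the UI would show the field-level diff
            # dialog with Reload Remote / Overwrite with My Changes options.
            raise VersionConflict(
                f"expected v{expected_version}, found v{row['version']}")
        row["fields"].update(changes)
        row["version"] += 1
        return row["version"]
```

The key property is that the stale writer is rejected without any locking held across the user's think time, which is why the pattern suits autosave in a multi-user queue.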
Dirty-State Indicators and Undo
- Given the user has modified at least one field in a row, When the change is not yet saved, Then a dirty-state indicator appears at the row level and on each changed field.
- Given an autosave succeeds, When the server acknowledges the save, Then dirty-state indicators clear for the saved fields.
- Given a last action exists, When the user invokes Undo Last Action (Ctrl+Z on desktop or taps Undo on mobile) within 60 seconds or before another edit, Then the most recent atomic change to the active row is reverted, validations re-run, and the state is re-saved if valid.
- Given there is no action to undo, When the user invokes Undo, Then no changes occur and a "Nothing to undo" toast appears.
Keyboard-First Navigation and Mobile Accessibility
- Given a desktop user, When navigating the Fix-Forward Queue using only the keyboard, Then the user can enter edit mode, move between fields with Tab/Shift+Tab, commit with Enter, cancel with Esc, open suggestions with Alt+S, and trigger Undo with Ctrl+Z without using a mouse.
- Given a mobile device with viewport width ≤ 375 px, When the user opens an editable row, Then fields stack vertically, touch targets are ≥ 44 px in height, numeric fields invoke a numeric keypad, and edit controls remain fully visible without horizontal scrolling.
- Given a user with a screen reader, When focus enters an invalid field, Then aria-invalid and aria-describedby are set, the error text is announced, a visible focus indicator is present, and color contrast meets WCAG AA.
Similar-Claim Bulk Correction
"As a billing coordinator, I want to apply a correction to similar claims in one action so that I can resolve repetitive errors faster and reduce manual work."
Description

Detect and group exceptions that are materially similar using configurable rules (payer, error code, service line, date range, location, caregiver) and enable preview-and-apply bulk corrections. Allow users to scope changes to selected fields, simulate impact with a diff view, and apply changes atomically per claim with partial-failure reporting and easy rollback. Provide progress feedback, rate limiting, and safeguards to avoid over-application (e.g., cap batch size, require confirmation for high-impact updates).

Acceptance Criteria
Group Exceptions by Configurable Similarity Rules
- Given export exceptions exist in the Fix-Forward Queue, And similarity rules are configured using payer, error code, service line, date range, location, and caregiver, When the user selects "Find Similar" for a seed exception, Then the system groups exceptions where all selected attributes match the seed according to the active rules.
- And each group displays its total count and key attributes.
- And adjusting any rule or filter recalculates groups and refreshes results within 3 seconds for up to 2,000 exceptions.
- And only exceptions within the selected date range are included.
- And the user can Select All in a group in one action.
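Rule-based grouping can be sketched as matching on a tuple of the selected attributes: two exceptions are "similar" when every active rule attribute matches the seed. The attribute names and record shape below are illustrative:

```python
from collections import defaultdict

def find_similar(exceptions, seed, rule_attrs):
    """Return the exceptions whose selected attributes all match the seed.

    `rule_attrs` is the list of attributes the active similarity rules use,
    e.g. ["payer", "error_code"]; records are plain dicts here for brevity.
    """
    groups = defaultdict(list)
    for exc in exceptions:
        groups[tuple(exc[a] for a in rule_attrs)].append(exc)
    seed_key = tuple(seed[a] for a in rule_attrs)
    return groups[seed_key]
```

Hashing the attribute tuple keeps grouping linear in the number of exceptions, which matters for the 3-second refresh target on up to 2,000 records.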
Preview and Diff Bulk Corrections Before Apply
- Given the user has selected a group of exceptions, And has chosen fields to correct with proposed values, When the user opens the Preview, Then a per-claim, field-level before/after diff is displayed.
- And a summary shows the number of claims impacted, the number blocked by guardrails, and the fields to be changed.
- And running Validate performs compliance checks and lists specific errors per claim without applying changes.
- And Apply remains disabled until validation completes without critical errors.
Scoped Field Changes Across Selected Claims
- Given the user selects specific fields to update (e.g., modifier, service date, location), When the bulk correction is applied, Then only the selected fields are updated on each claim; all other fields remain unchanged.
- And data types and formats conform to the payer schema (e.g., date format, code sets); invalid entries are blocked with inline errors.
- And the required standardized reason note is selected or completed before Apply is enabled.
- And dynamic values (e.g., date offsets like +1 day) resolve deterministically per claim before application.
Atomic Apply per Claim with Partial-Failure Reporting and Rollback
- Given a validated batch of N claims is ready to apply, When the user clicks Apply, Then each claim update is executed atomically so that no partial updates are committed on failure.
- And a completion report shows success and failure counts with per-claim error codes/messages.
- And failures do not prevent other claims from being applied.
- And a one-click Rollback is available for the applied subset for 24 hours, restoring original values.
- And the audit log records both apply and rollback events with user, timestamp, claim IDs, and reason note.
Progress Feedback and Rate Limiting During Bulk Apply
- Given a bulk apply operation is in progress, Then a progress indicator displays percent complete, processed/total counts, success/failure tallies, and ETA, updating at least every 2 seconds.
- And the system respects the configured rate limit (e.g., 100 requests/minute) and applies exponential backoff on 429/503 responses.
- And each claim is retried up to 3 times on transient errors; after the final retry it is marked Failed with the last error.
- And the user can pause and resume processing without losing state.
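The retry behavior can be sketched as a per-claim loop with exponential backoff, treating 429/503 as transient; the function name, delay base, and status-to-outcome mapping are assumptions, not the product's actual client:

```python
import time

TRANSIENT_STATUSES = {429, 503}  # rate-limited or temporarily unavailable

def apply_with_retry(apply_fn, claim_id, max_retries=3, base_delay=1.0):
    """Apply one claim, retrying transient failures with exponential backoff.

    `apply_fn(claim_id)` is assumed to return an HTTP-like status code.
    After the final retry the claim is marked Failed with the last error.
    """
    for attempt in range(max_retries + 1):
        status = apply_fn(claim_id)
        if status not in TRANSIENT_STATUSES:
            return "Succeeded" if status < 400 else "Failed"
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return "Failed"
```

Doubling the delay after each transient response is what keeps a large batch from hammering a payer endpoint that is already shedding load.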
Safeguards Against Over-Application
- Given a bulk correction would affect more than the threshold (e.g., 100 claims) or change high-impact fields (e.g., payer, service line), When the user attempts to Apply, Then the system presents a confirmation dialog summarizing the impact and requires typed confirmation and a reason note.
- And the default maximum batch size of 500 claims cannot be exceeded without an admin override, up to 2,000 claims.
- And batches spanning multiple payers are blocked unless the user explicitly checks Allow multi-payer and re-confirms.
Audit, Sync to Source, and One-Click Re-export Readiness
- Given bulk corrections were applied successfully, Then an audit record is created per claim capturing before/after values, user, timestamp, similarity rules used, batch ID, and reason note.
- And, where configured, updates are synced to source records with success acknowledgments; any sync failure flags the claim and does not clear its exception status.
- And claims with all export errors resolved are marked Ready to Re-export.
- And clicking Re-export triggers export per payer and displays per-claim export status (Success/Failed with code).
Standardized Reason Notes Library
"As an operations manager, I want standardized reason notes I can quickly select and auto-fill so that my corrections are consistent and audit-ready."
Description

Offer a curated library of standardized reason notes with payer mappings and templates, requiring a note for specific change types and automatically attaching the selected reason to audit logs and export payloads where supported. Support tokenized templates (e.g., {service_code}, {date}) autocompleted from context, allow agency-level customization with admin approval workflow, and provide search, favorites, and recently used notes. Ensure consistency for audits and facilitate downstream reconciliation by including normalized codes.

Acceptance Criteria
Note Required on Compliance-Sensitive Inline Edit
- Given a user is editing a Fix-Forward Queue item with a field flagged as compliance-sensitive, When the user attempts to save changes without selecting a reason note, Then the save action is blocked and a reason note selector is displayed.
- And the save/continue buttons remain disabled until a reason is selected.
- When the user selects a reason note and saves, Then the change is saved, the selected reason is persisted on reload, and the item becomes eligible for re-export.
- And the UI records that a reason was required and provided for this change type.
Template Tokens Auto-Populate From Context
- Given a standardized reason note template contains tokens (e.g., {service_code}, {date}, {member_id}), And the current exception has corresponding context values, When the user previews the note in the selector, Then tokens are auto-resolved with context values in the preview and the final saved note.
- And unresolved tokens (no available context) are visibly highlighted and require manual input before save.
- When the user manually supplies values for unresolved tokens, Then validation passes and the saved note contains the resolved values.
- And the template definition remains unchanged.
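Token resolution can be sketched with a regex substitution that also reports unresolved tokens; the `{name}` token syntax follows the examples above, while the function name and context keys are illustrative:

```python
import re

TOKEN_PATTERN = re.compile(r"\{(\w+)\}")

def resolve_template(template: str, context: dict):
    """Fill {token} placeholders from context; report tokens left unresolved.

    Unresolved tokens are left in place so the UI can highlight them and
    require manual input before the note can be saved.
    """
    unresolved = []
    def fill(match):
        name = match.group(1)
        if name in context:
            return str(context[name])
        unresolved.append(name)
        return match.group(0)
    return TOKEN_PATTERN.sub(fill, template), unresolved
```

Because the substitution works on a copy of the text, the stored template definition itself is never modified, matching the last criterion above.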
Reason Auto-Attached to Audit Log and Export Payload
- Given a change is saved with a selected standardized reason note, Then an audit log entry is created that includes: reason_id, reason_label, normalized_code(s), resolved_note_text, user_id, timestamp, old_value, new_value, and record_reference.
- And the audit log entry is retrievable via the audit API with the same identifiers.
- When generating an export payload for a payer that supports reason codes/fields, Then the payload includes the payer-mapped reason fields as configured and passes schema validation.
- When generating an export payload for a payer that does not support reason codes/fields, Then the payload omits those fields without error, while the audit log still contains the reason details.
Agency Custom Notes with Admin Approval Workflow
- Given a non-admin user drafts a custom reason note (label, template text, normalized codes, payer mappings), When the user submits the draft, Then its status becomes Pending Approval and it is not visible in the general note selector.
- When an admin reviews and approves the draft, Then the note status becomes Active, version 1 is created, and it becomes available in the selector.
- When an admin rejects the draft with a comment, Then the proposer is notified and the note remains unavailable.
- When an Active note is edited, Then a new Pending version is created; the current Active version remains available until approval; upon approval, the new version replaces it and the version history is retained.
- And deactivating a note removes it from future selection but preserves historical references in audits/exports.
Search, Favorites, and Recently Used Notes
- Given a library of up to 5,000 notes, When a user searches by keyword, payer, or normalized code, Then results are returned within a 300 ms median and include matches on label, template text, tokens, normalized codes, and payer mappings.
- And search supports prefix match and fuzzy match with edit distance <= 1.
- When a user marks a note as a favorite, Then it appears in the user's Favorites list and persists across sessions and devices.
- When a user applies notes during corrections, Then a Recently Used list shows the last 10 distinct notes for that user, most recent first, with no duplicates, and updates immediately after each save.
Payer Mapping and Normalized Code Validation
- Given a payer context is present for the exception, When a user selects a standardized reason note, Then the UI displays the payer-specific mapping that will be used on export.
- And saving is blocked if the selected note lacks a mapping for the current payer.
- And saving is blocked if required normalized code(s) are missing or not in the allowed code set.
- When validation passes and the user saves, Then the audit entry includes the normalized code(s) and the payer mapping id/code used.
- And the export shows the correct payer-specific fields where supported.
Bulk Apply Reason Across Similar Claims with Error Handling
- Given a user selects multiple similar exceptions in the Fix-Forward Queue, When the user applies a single reason note to all selected items, Then the system attempts to apply and save the reason per record independently.
- And for records where tokens cannot be fully resolved from context and no manual input is provided, those records are skipped and listed with the specific missing fields.
- And a summary is displayed showing counts for attempted, succeeded, and failed, with failure reasons.
- And audit entries are created per successfully updated record with the selected reason and resolved text.
- And re-export is queued only for successfully updated records; skipped/failed records remain selected for further action.
Comprehensive Audit Trail & Versioning
"As a compliance officer, I want a complete, immutable history of all corrections so that I can demonstrate regulatory compliance during audits."
Description

Record an immutable, append-only audit trail for every change in the Fix-Forward Queue including before/after values, user, timestamp, reason note, source of change (UI, bulk, API), and correlation IDs to linked records. Provide field-level versioning with a readable timeline, exportable audit reports (CSV/JSON), and APIs for compliance review. Enforce tamper-evidence, time synchronization, and retention policies aligned with regulatory requirements.

Acceptance Criteria
Inline Edit Audit Entry Creation
- Given a user edits a field inline in the Fix-Forward Queue and provides a Reason Note, When the user saves the change, Then the system creates one append-only audit entry within 1 second containing: recordType, recordId, fieldName, beforeValue, afterValue, userId, userRole, timestampUtc (ISO-8601), timeSource, source='UI', correlationIds (claimId, visitId, exportId where applicable), reasonNoteId, and requestId.
- And the entry is immediately retrievable via the UI timeline and Compliance API.
- And the entry cannot be updated or deleted by any role; attempts are denied and logged as security events.
- And the field’s version number increments by 1 and is visible in the timeline.
- And saving is blocked if the Reason Note is blank.
Bulk Correction Batch Audit
- Given a user applies a bulk correction to N claims with a single confirmation and Reason Note, When the bulk action executes, Then the system records a parent batch audit entry (batchId, source='BULK') and one child audit entry per affected record/field with full before/after values, userId, timestampUtc, and correlationIds including batchId.
- And each child entry inherits the Reason Note and indicates the specific field changed.
- And if any item fails, its child entry is written with status='Failed' and an errorCode, while successful items are written with status='Succeeded'.
- And the UI and API return counts: requested=N, succeeded>=0, failed>=0, and the list of failed recordIds.
- And each changed record is linked to its next re-export attempt via exportAttemptId once triggered.
API-Sourced Change Capture
- Given an authenticated client submits an API request that mutates one or more fields with a provided Reason Code/Note and an idempotencyKey, When the request is accepted, Then the system creates one audit entry per field modified with source='API', clientId, tokenSubject, ip, requestId, idempotencyKey, beforeValue, afterValue, timestampUtc, and correlationIds (recordId, claimId/visitId).
- And repeated requests with the same idempotencyKey do not create duplicate audit entries.
- And requests missing a Reason Code/Note are rejected with HTTP 400 and no audit entry is written.
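Idempotency-key deduplication can be sketched as a keyed replay cache in front of the mutation handler; the class, field names, and in-memory store are hypothetical simplifications (a real service would persist the key-to-response map):

```python
class MutationAPI:
    """Toy mutation endpoint demonstrating idempotency-key handling."""

    def __init__(self):
        self._seen = {}      # idempotency_key -> previously returned response
        self.audit_log = []  # append-only list of audit entries

    def mutate(self, idempotency_key, record_id, field, value, reason_code):
        if not reason_code:
            # Rejected with 400 and no audit entry written.
            return {"status": 400, "error": "Reason Code/Note required"}
        if idempotency_key in self._seen:
            # Replay: return the stored response, write no duplicate entry.
            return self._seen[idempotency_key]
        self.audit_log.append({
            "record_id": record_id, "field": field, "after": value,
            "source": "API", "idempotency_key": idempotency_key,
        })
        response = {"status": 200, "record_id": record_id}
        self._seen[idempotency_key] = response
        return response
```

Checking the reason code before the replay cache mirrors the criteria: an invalid request never reaches the audit log, and a replayed valid one reaches it exactly once.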
Field-Level Version Timeline & Diff
- Given a user with Audit_View permission opens the Audit tab for a record, When they filter by field name and date range and load the timeline, Then the UI displays the most recent 50 audit entries within 1 second, sorted by timestampUtc descending, with totalCount available.
- And each entry shows versionNumber, beforeValue -> afterValue, user, role, source, reason note summary, and correlationIds; sensitive values are redacted per role-based policy.
- And when the user selects two entries, a side-by-side diff is shown and the elapsed time between versions is displayed.
- And each entry shows chainVerificationStatus='Valid' or 'Invalid'; invalid entries trigger a visible warning.
Audit Report Exports (CSV/JSON) & Compliance API
- Given a compliance officer applies filters (date range, userId, source, fieldName, recordId/batchId) to the audit log, When they request an export as CSV or JSON, Then a downloadable file begins within 3 seconds and contains all matching entries with a documented schema including: auditId, recordType, recordId, fieldName, beforeValue, afterValue, versionNumber, userId, userRole, timestampUtc, timeSource, source, correlationIds, reasonNote, chainHashes (contentHash, priorHash), and chainVerificationStatus.
- And exports larger than 100,000 entries are streamed in paged chunks without data loss and include a SHA-256 digest file for integrity verification.
- And the Compliance Review API provides equivalent filtering and stable pagination (pageSize, nextToken) and returns the same schema; counts in the API and the exported file match for the same filters.
Tamper-Evidence & Time Synchronization
- Given the audit log contains K entries, When chain verification runs, Then each entry includes contentHash and priorHash forming an unbroken chain; verification returns 'Valid' for untampered data or identifies the first invalid auditId on tampering.
- And all timestamps are stored in UTC with timeSource metadata and driftMs; if driftMs > 5000, the entry is flagged driftExceeded=true and the UI shows a warning; if driftMs > 30000, the save is blocked and the user is instructed to resync time.
- And an API endpoint /audit/verify returns the overall chain status and details for the requested range.
Retention Policy & Legal Hold Enforcement
- Given an organization retention policy is configured (default >= 6 years) and optional legal holds exist, When audit entries reach end-of-retention with no legal hold, Then the system purges them from primary storage and search indexes and writes a purge audit record containing the time window, count purged, and a digest of purged IDs.
- And queries to the UI/API for purged auditIds return 410 Gone.
- And entries under legal hold are not purged; removing the hold makes them eligible for purge again on the next cycle.
- And manual deletion of individual audit entries is not permitted for any role.
Source Record Sync & Conflict Resolution
"As an operations manager, I want my corrections to update the underlying records safely so that the same errors don’t recur on future exports."
Description

Propagate approved corrections from exceptions back to source records (e.g., visit notes, caregiver profile, payer settings) when appropriate, using idempotent updates and provenance tagging. Detect and resolve conflicts via optimistic locking and a human-friendly diff/merge screen, with options to keep source, keep fix, or merge field-by-field. Queue background sync jobs with retries, send notifications on conflicts, and expose a sync status indicator per item.

Acceptance Criteria
Idempotent Source Sync on Approved Correction
- Given an exception item with an approved correction mapped to a single source record, When the user initiates propagation to the source record, Then only fields with changed values are updated and unchanged fields remain untouched.
- And a provenance tag including the exception ID, user ID, timestamp, and reason code is attached to the source record update.
- And repeating the same propagation request results in a no-op with the source record version and updated_at unchanged.
- And the item’s sync status becomes "Synced" and re-export is enabled.
Optimistic Lock Conflict Detection
- Given the exception item carries the source record version token captured at export time, When the propagation request’s version token does not match the current source record version, Then the write is blocked and no source changes are persisted.
- And the item’s sync status becomes "Conflict".
- And a conflict record is created linking the exception and source record, including the mismatched versions.
- And the user is presented with an option to open the diff/merge screen.
- When the version token matches, Then the update proceeds and the item’s sync status becomes "Synced".
Human-Friendly Diff/Merge Resolution
- Given an item in "Conflict" status, When the user opens the diff/merge screen, Then the UI displays source vs. fix values with per-field highlighting and a merged preview.
- And for each conflicting field the user can select Keep Source, Keep Fix, or enter a validated custom value.
- And field-level compliance validations run before save and block invalid merges with clear error messages.
- When the user confirms Save Merge, Then the merged values are written atomically to the source record with a provenance map of field decisions.
- And the item’s sync status updates to "Synced" and the conflict is marked resolved.
Background Sync Job with Retries and Status Indicator
- Given an approved correction ready for propagation, When propagation is initiated, Then a background job is enqueued within 2 seconds and the item status changes from "Pending" to "Syncing".
- And on transient failures (HTTP 5xx, network timeouts) the job retries with exponential backoff up to 3 attempts.
- And on success the item status becomes "Synced" and the success timestamp and job ID are recorded.
- And on exhausting retries the item status becomes "Failed" with the last error captured.
- And the UI sync status indicator reflects one of Pending, Syncing, Synced, Failed, or Conflict and updates within 5 seconds of state changes.
Conflict Notifications to Stakeholders
- Given a propagation attempt results in a conflict, When the conflict is recorded, Then an in-app alert is shown to the initiating user and to users with the Ops Manager role.
- And an email is sent to subscribed recipients with a link to the diff/merge screen within 1 minute.
- And duplicate notifications for the same item are suppressed for 24 hours unless the conflict state changes.
- When the conflict is resolved (merged or dismissed), Then a resolution notification is logged and the original alert is cleared.
Comprehensive Audit Trail and Provenance Tagging
- Given any sync attempt (success, no-op, conflict, failure, merged), When the attempt completes, Then an immutable audit entry is created capturing the exception ID, source record type/ID, fields attempted, before/after values for changed fields, user, timestamps, job ID, outcome status, and provenance tags.
- And audit entries are queryable via UI/API by date range, user, status, record type/ID, and export batch.
- And audit entries for no-op idempotent attempts record the reason as "No change" with zero fields modified.
- And audit logs are read-only and include a cryptographic hash to detect tampering.
One-Click Re-Export with Preflight Validation
"As a billing coordinator, I want to re-export corrected items in one click so that I can close exceptions quickly and get claims out without duplication."
Description

Enable a one-click re-export that bundles corrected items, runs preflight checks (completeness, format, payer-specific rules), and submits via the appropriate connector (e.g., EDI, CSV, API). Provide idempotent submission with export receipts, per-item status, retry with exponential backoff, and clear error messaging for any remaining blockers. Maintain an export ledger to prevent duplicates and support audit-ready re-export reports.

Acceptance Criteria
One-Click Bundle Creation of Corrected Items
Given there are eligible corrected items and the user has Re-Export permission, When the user clicks One-Click Re-Export, Then a single batch is created containing exactly the eligible corrected items under current filters/selections and the batch_id is unique. Given no items are eligible for re-export, When the user attempts One-Click Re-Export, Then the action is disabled or a non-blocking message "No eligible items to export" is shown and no batch is created. Given items have both queue values and newer source-record values, When the batch is created, Then the newer values are used and per-item sync_applied=true and sync_timestamp are recorded.
Preflight Validation of Corrected Items
Given completeness, format, and payer-specific rule sets exist, When preflight runs on the created batch, Then each item is evaluated against all active rules and assigned pass, warning, or blocking status with machine-readable codes and human-readable messages. Given at least one item has a blocking error, When preflight completes, Then re-export is prevented for those items, they are excluded from submission, and the user sees per-item error details and a total blocker count; warnings do not block export. Given all items pass blocking checks, When preflight completes, Then submission proceeds automatically without further user input. Given a batch of up to 200 items, When preflight runs, Then the 95th percentile runtime is <= 3 seconds.
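A minimal sketch of the pass/warning/blocking preflight logic described above; the rule functions and item shape are hypothetical stand-ins, not the real rule engine:

```python
def run_preflight(items, rules):
    """Evaluate every item against all active rules. Each rule returns a list of
    (severity, code, message) findings; the worst severity wins per item, and
    items with a blocking finding are excluded from submission."""
    results = {}
    for item in items:
        findings = [f for rule in rules for f in rule(item)]
        if any(sev == "blocking" for sev, _, _ in findings):
            status = "blocking"
        elif findings:
            status = "warning"       # warnings do not block export
        else:
            status = "pass"
        results[item["id"]] = {"status": status, "findings": findings}
    exportable = [i for i in items if results[i["id"]]["status"] != "blocking"]
    return results, exportable
```

With this shape, "submission proceeds automatically" is simply: submit `exportable` whenever it is non-empty, and surface `results` per item.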
Connector Selection and Payload Formatting
Given items belong to multiple payers and transport types (EDI, CSV, API), When submission begins, Then items are partitioned by connector and a compliant payload is generated per partition using the configured schema and encoding. Given EDI payloads are generated, When submission occurs, Then ISA/GS envelopes, ST/SE segment counts, and control numbers are valid and acknowledgments (TA1/999) are requested. Given CSV payloads are generated, When submission occurs, Then headers, delimiters, quoting, and file naming follow the configured template and the file validates against the target schema. Given API payloads are generated, When submission occurs, Then requests include required authentication and idempotency keys, and 2xx responses are treated as success while 4xx/5xx are captured as errors with response bodies logged to the receipt. Given a connector is misconfigured or unavailable, When submission starts, Then the affected partition is blocked with a clear error including connector_id and next steps.
Idempotent Re-Export with Ledger De-duplication
Given an export ledger records item_id, connector_id, normalized_payload_hash, and batch_id, When the same item is re-exported within the idempotency window with an identical normalized_payload_hash and connector_id, Then no duplicate submission is sent and the prior receipt is surfaced. Given a batch was partially successful, When re-export is triggered, Then only items without a terminal success status are submitted and skipped items are annotated with reason "already_exported". Given concurrent clicks or repeated API calls occur within 60 seconds, When idempotency keys match, Then exactly one submission is performed per connector partition. Given ledger entries are written, When submission completes, Then each item has an immutable record with timestamps, actor_id, attempt_count, idempotency_key, and terminal_status.
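The ledger-based de-duplication can be illustrated roughly as follows; `ExportLedger` and its in-memory dictionary are a stand-in for the real persisted ledger, and hashing the payload with sorted keys and compact separators is one plausible normalization:

```python
import hashlib
import json

def normalized_payload_hash(payload: dict) -> str:
    """Hash a payload in canonical form (sorted keys, no insignificant whitespace)
    so semantically identical payloads always produce the same hash."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

class ExportLedger:
    """In-memory sketch of the export ledger described above."""
    def __init__(self):
        self._entries = {}  # (item_id, connector_id, payload_hash) -> receipt

    def submit(self, item_id, connector_id, payload, send):
        """Submit once per (item, connector, normalized payload); on a repeat,
        surface the prior receipt instead of sending a duplicate."""
        key = (item_id, connector_id, normalized_payload_hash(payload))
        if key in self._entries:
            return self._entries[key], True   # duplicate: prior receipt, skipped
        receipt = send(payload)
        self._entries[key] = receipt
        return receipt, False
```

Because the hash is computed over a canonical serialization, reordered but identical payloads hit the same ledger entry, which is what makes concurrent clicks collapse to one submission.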
Export Receipts and Per-Item Status Tracking
Given any connector submission completes, When a receipt/acknowledgment is returned or derived, Then a batch-level receipt with transaction identifiers and a per-item status map (Queued, Sent, Accepted, Rejected, Error, Skipped, Pending ACK) is stored and visible in the UI within 5 seconds. Given EDI connectors, When functional acknowledgments (TA1/999/277CA) arrive, Then per-item statuses are updated by correlating control numbers; missing ACK beyond SLA marks items as "Pending ACK" with aging time. Given API connectors, When HTTP responses are received, Then correlation IDs are stored and per-item statuses updated based on response mapping rules. Given CSV transfers, When file handoff or SFTP success is confirmed, Then per-item statuses are marked Sent and later updated to Accepted/Rejected upon downstream feedback ingestion. Given a receipt exists, When a user downloads it, Then it is available as JSON and CSV with batch_id, timestamps, connector, per-item outcomes, and error details.
Automated Retries with Exponential Backoff
Given a submission attempt fails with a transient error (timeouts, 429, 5xx, connection reset), When retry logic executes, Then retries use exponential backoff with jitter starting at 2s, multiplier 2x, capped at 60s, with a maximum of 5 attempts per item. Given a submission fails with a permanent error (4xx validation, schema error), When classified, Then no retries are attempted and the item status is set to Error with the permanent error code and message. Given retries are scheduled, When viewed in the UI, Then next_retry_at and attempt_count are shown per item and updated after each attempt. Given all retry attempts are exhausted, When the final attempt fails, Then the item is marked Error and a clear, actionable message is displayed referencing the failing rule or connector configuration.
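One way to sketch the retry policy above; the status-code classification set and the full-jitter strategy are assumptions consistent with the stated parameters (2s start, 2x multiplier, 60s cap, 5 attempts):

```python
import random

TRANSIENT_STATUSES = {408, 429, 500, 502, 503, 504}  # timeouts, throttling, server errors

def is_transient(status_code: int) -> bool:
    """Transient errors are retried; 4xx validation/schema errors are terminal."""
    return status_code in TRANSIENT_STATUSES

def retry_schedule(max_attempts: int = 5, base: float = 2.0, cap: float = 60.0):
    """Upper bound of the delay before each retry, pre-jitter: 2s, 4s, 8s, ..."""
    return [min(cap, base * (2 ** (a - 1))) for a in range(1, max_attempts + 1)]

def backoff_delay(attempt: int, base: float = 2.0, cap: float = 60.0) -> float:
    """Actual sleep before retry `attempt` (1-based), with full jitter so
    many failing items do not retry in lockstep."""
    return random.uniform(0, min(cap, base * (2 ** (attempt - 1))))
```

The jittered delay also supplies the `next_retry_at` value the UI displays: schedule the job at `now + backoff_delay(attempt)` and persist that timestamp with the incremented `attempt_count`.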
Audit-Ready Re-Export Reporting
Given a user selects a date range or batch_id, When generating the re-export report, Then the report includes for each item: item_id, patient_id, payer, correction summary, preflight results, connector_id, idempotency_key, attempt history, receipts, actor_id, timestamps, and final status. Given the report is generated, When validated against the export ledger, Then all entries reconcile 1:1 and no item appears more than once with a terminal success within the range. Given the report is generated, When downloaded, Then it is available as CSV and PDF, includes a report_hash for integrity, and renders within 5 seconds for up to 1,000 items. Given access controls are enforced, When a user without permission attempts to view the report, Then access is denied and no report data is returned.
Queue Triage, Filters, and Assignment
"As a team lead, I want to filter and assign exceptions with SLA visibility so that my team works the highest-impact items first."
Description

Provide robust triage tools including advanced filters (payer, error code, severity, age, assignee), saved views, sorting, and bulk selection. Allow assigning items to users or teams, show SLA badges and aging indicators, and support real-time counts with websocket updates. Offer keyboard shortcuts, pagination/infinite scroll, and notifications for new assignments or nearing SLAs to keep work flowing efficiently.

Acceptance Criteria
Advanced Filtering and Saved Views
Given I am on the Fix-Forward Queue with access to filters When I apply filters for payer=Aetna, error code=R123, severity=High, age>=7 days, assignee=Unassigned simultaneously Then only items matching all selected filters are displayed and the total matching count reflects the filtered set Given filters are applied When I click Save View, provide a unique name, and set it as default Then the view is persisted to my profile and auto-applies on my next visit Given a saved view exists When I load it from the view picker Then the exact filter values and sort order saved with the view are restored Given a saved view contains an invalid value (e.g., a removed payer) When I attempt to load the view Then the system loads remaining valid filters and shows a non-blocking warning identifying the invalid filter Given a saved view exists When I choose Delete View and confirm Then the view is removed and no longer appears in the picker; if it was default, the default reverts to "All Items"
Sorting Controls and Selection Persistence
Given a list of results is displayed When I sort by Age ascending Then items are ordered from newest to oldest by created time; when sorting by Age descending, items are ordered from oldest to newest Given the list is displayed When I sort by Payer, Error Code, Severity, or Assignee Then items are ordered alphanumerically with a stable secondary sort by created time to prevent visual jitter Given I have selected multiple items When I change the sort order or sort column Then my current selection is preserved for items that remain in the current filtered set Given I return to the queue with the same saved view active When the page loads Then the last used sort order for that view is remembered and applied
Bulk Selection and Assignment with Permissions and Conflict Handling
Given results span multiple pages or an infinite scroll list When I click "Select all in results" Then the system selects all items that match the current filters and displays the total count selected Given I have a large selection When I assign the selection to a specific user or team Then all selected items are updated with the new assignee, the assignee label/avatar appears, and a success banner shows the number updated Given some items changed state or assignee during my bulk operation When the assignment completes Then conflicted items are listed with reasons and are left unmodified, while non-conflicted items remain assigned Given my role lacks assignment permissions for a subset of items When I perform a bulk assign Then those items are skipped with a "permission denied" reason in the results summary and are not modified Given a bulk assignment completes When I open any affected item’s activity log Then an audit entry records who assigned, to whom, when, and that the method was "bulk"
SLA Badges and Aging Indicators
Given an item has an SLA due time When the item row is rendered Then an SLA badge shows status color: green (>=24h remaining), amber (4–24h remaining), red (<4h remaining), and red+icon when overdue Given I hover or focus the SLA badge When the tooltip appears Then it shows "Due <timestamp> (<time remaining>)" in my local timezone Given time elapses while I view the queue When an item crosses an SLA threshold or becomes overdue Then the badge updates to the new state within 60 seconds without a page refresh Given an item row is displayed When viewing the aging indicator Then it shows a humanized duration since exception creation and reveals the exact created timestamp on hover/focus
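The badge thresholds above map directly onto a small pure function; the state names are illustrative:

```python
from datetime import datetime, timedelta

def sla_badge(due_at: datetime, now: datetime) -> str:
    """Map time remaining until the SLA due time to a badge state:
    green >= 24h, amber 4-24h, red < 4h, overdue (red + icon) past due."""
    remaining = due_at - now
    if remaining < timedelta(0):
        return "overdue"
    if remaining < timedelta(hours=4):
        return "red"
    if remaining < timedelta(hours=24):
        return "amber"
    return "green"
```

Keeping this as a function of `(due_at, now)` makes the "badge updates within 60 seconds" requirement a matter of re-evaluating it on a timer rather than storing a color.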
Real-time Counts and Notifications via WebSockets
Given a websocket connection is active When items matching my current filters are added, resolved, or reassigned Then the queue count, per-view counts, and visible list update within 3 seconds without a page reload Given an item is assigned to me by another user or an automation When the assignment event occurs Then an in-app notification appears within 5 seconds with item ID, payer, and a deep link; the item is highlighted in the list Given an item crosses the "SLA approaching" threshold (2 hours before breach) When the threshold is reached Then I receive an in-app alert and the item is highlighted; if my notification preference is muted, only the highlight occurs Given the websocket disconnects When the connection drops Then an offline indicator appears and the system falls back to polling every 30 seconds, resuming real-time updates upon reconnection without duplicate notifications
Pagination and Infinite Scroll Navigation
Given default preferences are in effect When I scroll within 200px of the end of the list Then the next 50 items load and append with a loading indicator and no duplicate rows Given I enable "Use pagination" in preferences When I return to the queue Then numbered pages are shown with selectable page sizes of 25, 50, or 100 and this preference persists across sessions Given pagination is enabled When I navigate between pages or open and return from an item Then my filters, sort order, and the previous page and scroll position are preserved Given network latency or errors occur while loading more items When the fetch exceeds 1.5s at P95 or fails Then a non-blocking indicator is shown and failures provide a retry action without clearing currently loaded items
Keyboard Shortcuts for Triage Efficiency
Given the queue view has focus When I press "?" Then a shortcuts help modal opens listing all available shortcuts and their actions Given the list view is focused When I press J or K Then focus moves to the next or previous item respectively; pressing Enter opens the focused item’s detail Given the filters panel is closed or open When I press F Then the filters panel toggles; pressing Ctrl/Cmd+Enter applies the current filters Given items are visible When I press Space Then the focused item toggles selection; pressing Shift+A selects all items matching current filters Given a text input has focus When I type Then global shortcuts are temporarily disabled to avoid conflicts, and all shortcuts are accessible via ARIA-labeled controls in the help modal

RulePulse Updates

A continuously curated rulefeed that monitors payer changes and updates Portal Presets automatically. Users get concise change alerts plus a sandbox mode to test exports against the new rules before go-live, preventing surprise denials when payers shift requirements.

Requirements

Payer Rule Monitoring Engine
"As an operations manager, I want CarePulse to automatically detect and interpret payer rule changes so that our configurations stay current without manual research."
Description

Continuously monitor and ingest payer policy updates from payer portals, EDI companion guides, bulletins, APIs, and uploaded documents; normalize them into the RulePulse schema; detect and classify deltas affecting documentation fields, billing codes, export formats, and timing; tag by payer, plan, region, line of business, and effective/expiry dates; deduplicate noise; surface a QA review queue; and publish validated changes to downstream services. Integrates with CarePulse by feeding Auto-Update of Portal Presets, Alerts, Sandbox Validation, and Audit Trail. Includes resilience (retries, backoff), observability (metrics, tracing), and secure handling of credentials and documents.

Acceptance Criteria
Multi-Source Payer Update Ingestion
Given the engine is scheduled hourly When a payer portal posts a new bulletin/policy (HTML, PDF, XML) or API payload is updated Then the engine fetches within 15 minutes, stores the raw artifact, and records source metadata (URL, payer, plan if available, timestamp, checksum) Given an updated EDI companion guide is detected at a configured source When the scan runs Then the engine detects the version change, downloads the file, and enqueues it for parsing Given a user uploads a document (PDF, DOCX, CSV, TXT, ZIP) When the file passes virus scan and size limits (<=100MB per file, <=500MB per ZIP) Then the engine accepts, encrypts at rest, and enqueues; otherwise it rejects with a clear error reason Given pagination or authentication is required When fetching content Then the engine uses stored credentials, handles pagination up to the configured limit, and retries 3 times on 5xx errors with exponential backoff
Normalization to RulePulse Schema
Given a fetched artifact has been parsed When normalization runs Then the output conforms to RulePulse schema v1.x and includes required fields (rule_id, payer_id, plan_id, region, lob, category, field_name, code_system, code, export_format, timing, effective_date, expiry_date, source_ref, version, created_at) Given required fields are missing or invalid When normalization occurs Then the record is quarantined with a machine-readable error code and is not published downstream Given dates and codes are extracted When normalization occurs Then dates are ISO 8601 UTC and code systems are labeled (HCPCS, CPT, ICD-10, NPI, X12) Given free-text contains PHI/PII When normalization occurs Then PHI/PII is not persisted in free-text fields; only rule metadata is stored
Delta Detection and Classification
Given an active baseline of rules exists for a payer/plan/region/lob When new normalized records are available Then the engine computes field-level diffs, sets change_type in {add, modify, deprecate, supersede}, and impact_area in {documentation_fields, billing_codes, export_formats, timing} Given a modify change is detected When classification completes Then before/after snapshots are stored with field-level diffs and a human-readable summary (e.g., "CPT 99345 -> 99347") Given a new record is materially identical to an existing rule When similarity >= 0.98 by configured algorithm Then the change is marked as noise and suppressed Given overlapping updates with conflicting effective windows are detected When classification runs Then the change is flagged as conflict and routed to QA
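A rough sketch of the diff and noise-suppression logic; `SequenceMatcher` is just one plausible similarity measure (the criteria say the algorithm is configured), and the flat-dict record shape is assumed:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """One plausible text-similarity measure; the production algorithm is configurable."""
    return SequenceMatcher(None, a, b).ratio()

def classify_delta(baseline, incoming, noise_threshold=0.98):
    """Classify an incoming normalized rule against the active baseline.
    Only 'add', 'modify', and 'noise' are sketched here; 'deprecate' and
    'supersede' would need expiry/precedence context not shown."""
    if baseline is None:
        return {"change_type": "add", "diffs": incoming}
    if similarity(str(sorted(baseline.items())), str(sorted(incoming.items()))) >= noise_threshold:
        return {"change_type": "noise", "diffs": {}}   # materially identical: suppress
    diffs = {k: (baseline.get(k), v) for k, v in incoming.items() if baseline.get(k) != v}
    return {"change_type": "modify", "diffs": diffs}
```

The per-field `(before, after)` pairs in `diffs` are the raw material for both the stored snapshots and the human-readable summary (e.g., "CPT 99345 -> 99347").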
Tagging and Effective Dating
Given a normalized rule When tagging runs Then the rule is tagged with payer, plan, region, lob, jurisdiction, effective_date, and expiry_date; missing dates default to effective_date = publication_date and expiry_date = null Given multiple rules overlap for the same key When precedence is evaluated Then explicit effective_date > bulletin priority > latest publication_date determines active rule; lower-priority rules are marked superseded Given a rule with regional qualifiers (state or MAC) When tags are applied Then region codes comply with ISO 3166-2 or configured MAC codes Given a sunset/expiry notice exists When tagging runs Then expiry_date is set and downstream notifications are scheduled for T-30, T-7, and T-1 days
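One reading of the precedence rule above (an explicit effective_date beats a defaulted one, then bulletin priority, then latest publication_date) expressed as a sort key; the field names mirror the schema, but the comparator itself is a sketch:

```python
from datetime import date

def pick_active_rule(rules):
    """Select the active rule among overlapping candidates for the same
    payer/plan/region/lob key; everything else is marked superseded."""
    def sort_key(r):
        return (
            r.get("effective_date_explicit", False),  # explicit beats defaulted
            r.get("bulletin_priority", 0),            # higher priority wins
            r.get("publication_date", date.min),      # then latest publication
        )
    ranked = sorted(rules, key=sort_key, reverse=True)
    return ranked[0], ranked[1:]
```

Encoding precedence as a tuple sort key keeps the ordering total and deterministic, which matters when the same key receives several overlapping bulletins in one ingestion cycle.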
Deduplication and QA Review Queue
Given multiple new records with similar content When similarity score >= 0.90 or source checksum matches Then the records are grouped, a canonical record is retained, and duplicates are linked and suppressed from publish Given a change is high-impact, low-confidence (<0.80), or conflicting When triage runs Then a QA task is created with severity, assignee, due date SLA (High: 8 business hours, Medium: 2 business days, Low: 5 business days), and required actions Given a QA approver reviews a task When they approve Then the change is marked validated with approver, timestamp, and note; when rejected, the change is returned to backlog with a required rejection reason Given a QA task exceeds SLA When the due time passes Then the system escalates via team channel and manager email and logs the breach
Publish Validated Changes to Downstream Services
Given a change is validated (auto or QA-approved) When publish is triggered Then a versioned, idempotent event is emitted to the message bus and Auto-Update of Portal Presets, Alerts, Sandbox Validation, and Audit Trail are updated within 5 minutes Given a downstream service is unavailable When delivery fails Then retries use exponential backoff for up to 24 hours; after exhaustion the event is dead-lettered and an alert is raised Given multiple changes affect the same preset When applying updates Then updates are applied in effective_date order with optimistic concurrency; conflicts are logged and the preset remains in its last known good state Given a publish occurs When auditing runs Then an immutable audit record captures actor/system, timestamps, change_id, before/after versions, affected entities, and trace_id
Non-Functional: Resilience, Observability, and Security
Given transient network/API errors or rate limits When fetching, parsing, or publishing Then the engine uses jittered exponential backoff, honors per-source rate limits, and prevents thundering herd on restarts Given ingestion and processing of artifacts When metrics are emitted Then dashboards expose ingestion_count, parse_success/fail, normalization_success/fail, delta_classified_count, publish_success/fail, and latency p50/p95/p99; traces propagate source_ref and rule_id across services Given credentials for payer portals/APIs When accessed Then secrets are stored in a secrets manager, rotated per policy, never logged, and scoped by least privilege; documents are encrypted at rest (AES-256) and in transit (TLS 1.2+) Given internal user/service access When viewing artifacts or approving QA Then RBAC enforces authorized roles only and all access events are audited
Preset Auto-Update with Versioning & Rollback
"As a system admin, I want Portal Presets to auto-update with versioning and rollback so that we can adopt new rules safely and recover quickly if something breaks."
Description

Automatically apply validated rule changes to Portal Presets with semantic versioning, human-readable diffs, pre-apply impact analysis, and rollback. Propagate updates to visit note templates, validation rules, export mappings, and scheduling constraints where applicable. Enforce guardrails (e.g., blocking unsafe updates, staging in test orgs first), idempotency, and dependency checks. Provide org-level configuration for auto-apply vs manual approval, and notify affected teams. Ensures configurations stay current, reducing denials and manual rework.

Acceptance Criteria
Auto-Apply with Semantic Versioning and Propagation
Given an org has Auto-Apply enabled and a validated payer rule change is available When the update job executes Then the system creates a new semantic version (major for breaking changes, minor for additive changes, patch for metadata) for the affected Portal Presets And applies the versioned changes to linked visit note templates, validation rules, export mappings, and scheduling constraints And the operation is idempotent such that re-executing the same change set produces no additional changes or versions And a changelog entry records version, rulefeed change ID, diff hash, author=RulePulse, and timestamp
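The versioning policy above (major for breaking, minor for additive, patch for metadata) reduces to a small helper; the change-kind labels are assumed names:

```python
def bump_version(version: str, change_kind: str) -> str:
    """Bump a preset's semantic version per the policy above:
    'breaking' -> major, 'additive' -> minor, 'metadata' -> patch."""
    major, minor, patch = (int(p) for p in version.split("."))
    if change_kind == "breaking":
        return f"{major + 1}.0.0"
    if change_kind == "additive":
        return f"{major}.{minor + 1}.0"
    if change_kind == "metadata":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change kind: {change_kind!r}")
```

Resetting the lower components on a higher bump (e.g., `2.4.1` to `3.0.0`) follows standard semantic-versioning convention and keeps the changelog's version ordering unambiguous.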
Human-Readable Diff for Pending Update
Given a pending preset update exists When a user opens the diff view Then the system displays a human-readable diff with change-type labels (added, removed, modified) grouped by artifact (templates, validation rules, export mappings, scheduling constraints) And shows field-level before/after values and rationale from the rulefeed where available And allows filtering by artifact type and exporting the diff to PDF and CSV And the displayed diff matches the machine-readable change set exactly
Pre-Apply Impact Analysis with Sandbox Dry-Run
Given a pending update is available in the Test org When the user runs impact analysis Then the system reports counts and lists of affected templates, validation rules, export mappings, and scheduling constraints And executes a dry-run against the last 30 days of representative notes/exports in the Test org, reporting predicted validation errors and export contract mismatches And provides a go/no-go recommendation with explicit blockers and warnings And stores the analysis report with a unique ID linked to the update
Guardrails and Safety Checks
Given a pending update introduces breaking changes or new required fields When pre-apply checks run Then the system blocks apply if dependencies are missing (e.g., required export mapping not defined, required template field absent) and lists actionable blockers And prevents deletion of fields mandated by current payer rules And detects dependency cycles and prevents partial application And allows override only by Admin with an explicit typed reason, which is recorded immutably in the audit log
Manual Approval Workflow and Notifications
Given an org is configured for Manual Approval When a validated rule change is ingested Then the system notifies affected roles (Ops Manager, Compliance, Billing) via email and in-app within 15 minutes, including a summary and diff link And allows an authorized approver to open the change, run sandbox tests, and Approve or Reject And on approval, the approver selects or confirms the semantic version and schedules a go-live time And all actions record user, timestamp, decision, and comments in the audit log And if no decision is made within 72 hours, daily reminders are sent until resolved
Staged Deployment Gate to Test Before Production
Given an update is approved or eligible for Auto-Apply When deployment begins Then changes are first applied to the linked Test org and locked from Production And promotion to Production is enabled only after at least one successful export dry-run and full validation suite pass in Test org And if any Test org check fails, Production promotion remains blocked and owners are notified with failure details And the promotion action and results are logged with trace IDs
Rollback with Propagation, Audit, and Notifications
Given Production is on version X.Y.Z of a preset When an authorized user triggers rollback to a prior version Then all linked artifacts (visit note templates, validation rules, export mappings, scheduling constraints) revert atomically to the target version And the system generates and stores a rollback diff, updates the active version pointer, and records user, reason, and timestamp in the audit log And affected teams receive rollback notifications via email and in-app within 15 minutes, including impact summary And rollback is idempotent and transactional; if any artifact fails to revert, the system aborts and restores the pre-rollback state without partial changes
Sandbox Export Validation
"As a billing lead, I want to test exports against upcoming rules in a sandbox so that I can fix issues before payers enforce changes."
Description

Provide a safe sandbox where users can run sample and historical exports against upcoming rules before go-live. Simulate effective dates, payer/plan variations, and grace periods; generate detailed validation reports with pass/fail status, line-level errors, and suggested remediations; compare current vs next-rule outcomes; and track results over time. Support batch selection by date range, payer, and caregiver; no writes to production; parity with production export formats; and API/CLI hooks for CI-style checks. Enables proactive fixes and prevents surprise denials.

Acceptance Criteria
Run Sandbox Export With Upcoming Payer Rule Effective Date
Given the user selects the Sandbox environment And selects an upcoming RulePulse ruleset version for a specific payer and plan with an effective date And chooses a visit date range and a sample or historical data source When the user runs a sandbox export Then the system applies the selected upcoming ruleset as of the specified effective date And the generated export artifact matches production export format, schema, and naming conventions for the chosen export type And no writes, updates, or side effects occur in production systems, queues, or audit logs And a validation report is generated and linked to the run
Batch Selection by Date Range, Payer, and Caregiver
Given the user sets filters for date range, one or more payers/plans, and one or more caregivers When the user previews the batch Then the UI displays the count of visits to be included, segmented by payer and caregiver When the user runs the sandbox export Then only records matching all selected filters are included in the export and validation report And excluded records are not present in the export or counts
Validation Report with Pass/Fail, Line-Level Errors, and Remediations
Given a sandbox export has completed When the user opens the validation report Then the report shows overall pass/fail status and per-file pass/fail status And each failed line item includes row identifier, field name, rule reference, severity, and error message And each failed line item includes a suggested remediation with actionable steps or links And the report is downloadable as CSV and JSON and accessible via API by run ID
Current vs Next-Rule Outcome Comparison
Given the user enables comparison mode between current production rules and a selected upcoming ruleset When the sandbox export runs Then the system produces side-by-side results for current vs next rules including counts of passes, warnings, and failures And differences in output fields are highlighted at field level with before/after values And a summary shows net delta metrics and a list of impacted records And both export artifacts are available for download with clear labeling
Grace Period Simulation and Enforcement Modes
Given the user specifies a grace period start and end date and selects an enforcement mode (Warn-only or Enforce) When the sandbox export runs spanning dates inside and outside the grace period Then findings for rules within the grace period are marked as warnings in Warn-only mode and as failures in Enforce mode And findings outside the grace period respect full enforcement And the report labels each finding with the applicable enforcement state and effective/grace dates
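The enforcement-mode logic above can be sketched as a pure function; the mode strings and date-based window check are assumptions consistent with the criteria:

```python
from datetime import date

def finding_severity(visit_date: date, grace_start: date, grace_end: date, mode: str) -> str:
    """Severity of a finding given the grace window and enforcement mode:
    inside the grace period, 'warn-only' downgrades to a warning; outside
    the window, full enforcement applies regardless of mode."""
    in_grace = grace_start <= visit_date <= grace_end
    if in_grace and mode == "warn-only":
        return "warning"
    return "failure"
```

Labeling each finding with `in_grace` and the mode used (rather than only the resulting severity) is what lets the report show "the applicable enforcement state and effective/grace dates" per finding.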
API and CLI Hooks for CI-Style Checks
Given a valid API token or CLI credentials and parameters including ruleset version, filters, and comparison mode When the client calls the sandbox validation endpoint or executes the CLI command Then the system accepts the request and returns a run ID and status endpoint And upon completion the API/CLI returns machine-readable JSON including overall pass/fail, counts, errors, and diff summary And the CLI process exits with code 0 on overall pass and non-zero on overall fail And the API supports webhooks or polling to retrieve final results
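A hypothetical CI wrapper matching the exit-code contract above; `start_run` and `get_status` stand in for the real API client, whose endpoints are not specified here:

```python
import json
import sys
import time

def run_ci_check(start_run, get_status, poll_interval=5, timeout=600):
    """Start a sandbox validation run, poll until a terminal status, print the
    machine-readable JSON result, and return the process exit code
    (0 = overall pass, 1 = overall fail or timeout)."""
    run_id = start_run()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = get_status(run_id)
        if result["status"] in ("complete", "failed"):
            print(json.dumps(result))
            return 0 if result.get("overall") == "pass" else 1
        time.sleep(poll_interval)
    print(json.dumps({"run_id": run_id, "status": "timeout"}), file=sys.stderr)
    return 1
```

A CI pipeline would call `sys.exit(run_ci_check(...))` so the non-zero exit code fails the build exactly when the sandbox validation fails.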
Run History and Trend Tracking
Given sandbox runs have been executed over time When the user views the Sandbox History Then runs are listed with timestamp, initiator, parameters (filters, ruleset, comparison mode), and outcome And users can filter history by date range, payer, caregiver, and outcome And the system charts pass/fail trends and top recurring error categories over a selectable time window And each run is reproducible by re-running with the same parameters via a Re-run action
Change Alerts & Digest Preferences
"As a caregiver supervisor, I want concise alerts about payer rule changes so that my team knows what to do and by when."
Description

Deliver concise, actionable change alerts summarizing what changed, who is impacted, effective dates, required actions, and links to diffs and sandbox tests. Support in-app notifications, email, and mobile push; per-user preferences (frequency, channels, payers, severity); agency-level defaults; and digest mode to group related changes. Include acknowledgment tracking, snooze/defer, localization, WCAG-compliant templates, and rate limiting. Integrates with Audit Trail and opens Sandbox with preloaded context.

Acceptance Criteria
Multi-Channel Alert Delivery and Rate Limiting
Given a user has enabled in-app, email, and mobile push channels and verified contact information And per-user rate limits are configured to 3 real-time alerts per hour per channel And five qualifying payer rule changes occur within one hour When RulePulse generates alerts Then no more than 3 alerts are delivered via each enabled channel within that hour And the remaining alerts are queued for the next eligible window or included in the next digest according to the user’s preferences And in-app notifications appear within 5 seconds of generation; email and push are enqueued within 15 seconds
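The per-channel cap with queue-instead-of-drop behavior resembles a sliding-window limiter; this in-memory version is a sketch using the 3-per-hour defaults from the criteria:

```python
from collections import defaultdict, deque

class AlertRateLimiter:
    """Sliding-window limiter sketch: at most `limit` real-time alerts per
    `window` seconds per (user, channel); anything over the cap is queued
    for the next eligible window or the next digest, never dropped."""
    def __init__(self, limit=3, window=3600):
        self.limit, self.window = limit, window
        self._sent = defaultdict(deque)   # (user, channel) -> send timestamps
        self.queued = defaultdict(list)   # (user, channel) -> deferred alerts

    def deliver(self, user, channel, alert, now):
        key = (user, channel)
        sent = self._sent[key]
        while sent and now - sent[0] >= self.window:  # expire old timestamps
            sent.popleft()
        if len(sent) < self.limit:
            sent.append(now)
            return "delivered"
        self.queued[key].append(alert)
        return "queued"
```

A sliding window (rather than fixed hourly buckets) avoids the burst of 6 alerts that fixed buckets would permit across a boundary.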
Alert Content Completeness and Actionability
Given a rule change is ingested that affects Payer X with effective date Y and requires action Z When an alert is generated for impacted users Then the alert includes: a concise summary of what changed, who is impacted (payer/program), effective date/time, required actions, a link to the change diff, and a deep link labeled “Open Sandbox Test” And all dates/times are shown in the recipient’s locale and time zone And the alert body clearly indicates whether user acknowledgment is required
Per-User Preferences and Agency-Level Default Overrides
Given agency-level defaults are set (frequency=daily digest at 6pm local, channels=email+in-app, payers=A,B, severities>=Major) And a user overrides to real-time mobile push for payer B with severity=Critical only When changes for payers A and B at severities Major and Critical are published Then the user receives real-time mobile push only for payer B Critical changes And for all other changes, the user follows agency defaults (daily digest via email+in-app) And disabled payers/severities for the user do not generate alerts And changes to preferences take effect within 2 minutes and are recorded in the Audit Trail
Digest Mode Grouping and Scheduling
Given a user has digest mode enabled with frequency=daily at 18:00 local and grouping by payer and severity And twelve eligible changes occur across three payers between 00:00 and 18:00 When the digest is delivered at 18:00 Then the digest groups changes by payer and severity with per-group counts and links to details/diffs And items included in the digest are marked as delivered and will not trigger separate real-time alerts And the digest counts as one send toward rate limits
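The grouping step of digest mode can be sketched as a simple aggregation by payer and severity (field names such as `diff_url` are hypothetical, chosen for illustration):

```python
from collections import defaultdict

def build_digest(changes):
    """Group eligible changes by (payer, severity) with per-group counts
    and links to details/diffs, per the digest-mode criterion above."""
    groups = defaultdict(list)
    for change in changes:
        groups[(change["payer"], change["severity"])].append(change)
    return {
        key: {"count": len(items),
              "links": [c["diff_url"] for c in items]}
        for key, items in sorted(groups.items())
    }
```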
Acknowledgment, Snooze/Defer, and Audit Trail Integration
Given an in-app alert requires acknowledgment When the user clicks Acknowledge Then the system records the user ID, timestamp, payer, change ID/version, and alert channel in the Audit Trail And the alert status updates to Acknowledged and is filterable by acknowledgment state Given a user selects Snooze for an alert and chooses 2 days or “until 24 hours before effective date” When the snooze is set Then the alert will not notify again until the selected time, at which point it re-queues for delivery or inclusion in the next digest And all snooze/unsnooze actions are logged in the Audit Trail
Sandbox Launch with Preloaded Context
Given an alert includes an “Open Sandbox Test” deep link When the user opens the link Then the Sandbox opens with the impacted payer preselected, relevant Portal Presets loaded, affected rules highlighted, and sample data scoped to the user’s agency And the Sandbox runs read-only against production data sources and cannot modify live presets And the Sandbox session is logged in the Audit Trail with a backlink to the originating alert
Localization and Accessible Templates
Given a user’s language=Spanish (es-MX) and time zone=America/Mexico_City When in-app, email, and mobile push alerts are delivered Then subject, body, and action labels are localized to Spanish, and dates/numbers are formatted per locale And if a translation key is missing, content falls back to English and the missing key is logged for remediation And in-app and email templates meet WCAG 2.1 AA: color contrast ≥ 4.5:1, focus order is logical, ARIA labels are present, and images include alt text
Effective Date Orchestration & Overrides
"As an agency owner, I want to schedule when new rules take effect for my organization so that we can train staff and avoid disruption while staying compliant."
Description

Orchestrate the lifecycle of rules with effective/expiry dates, grace periods, and phased rollouts by payer, plan, region, and product line. Allow agencies to schedule adoption windows, set overrides (defer or accelerate) with reason capture and approval workflow, and preview conflicting rules. Provide calendar and timeline views, conflict resolution logic when overlapping rules exist, and guardrails to prevent noncompliant exports after the grace period ends. Coordinate with Auto-Update and Alerts to execute go-live safely.

Acceptance Criteria
Automatic Rule Activation by Scope and Dates
Given a rule with effective_date T and expiry_date E scoped to specific payers/plans/regions/product lines, When the current system time is >= T and < E and no agency override exists for that scope, Then the rule is applied to that scope’s production exports and presets are updated within 5 minutes. Given sub-scopes with distinct effective dates (phased rollout), When each sub-scope reaches its T, Then only that sub-scope activates while others remain unchanged. Given the current time >= E, Then the rule is deactivated for that scope and removed from the active set within 5 minutes. Given a scope not included in the rule’s scope, Then the rule never activates for that scope.
Grace Period Enforcement and Export Guardrails
Given a rule with grace_period_days G starting at effective_date T, When T <= now < T+G, Then exports that violate the rule are permitted but receive a "Grace Warning" with rule ID, required change, and deadline. When now >= T+G, Then exports that violate the rule are blocked, a clear error is shown with remediation steps, and submission is prevented. Given blocked export events, Then the system logs actor, timestamp, rule ID, scope, and payload reference in the audit log. Given a global admin attempts to bypass a post-grace block, Then the system disallows bypass and logs the attempt.
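The permit-with-warning/block decision above reduces to a three-state check on the export timestamp relative to the effective date and grace deadline. A minimal sketch, with an assumed function signature:

```python
from datetime import datetime, timedelta

def export_guardrail(now, effective_date, grace_period_days, violates_rule):
    """Return the guardrail outcome for an export attempt:
    'allowed' when compliant or before the rule takes effect,
    'grace_warning' during the grace window, 'blocked' after it."""
    if not violates_rule:
        return "allowed"
    if now < effective_date:
        return "allowed"  # rule not yet in effect for this scope
    deadline = effective_date + timedelta(days=grace_period_days)
    if now < deadline:
        return "grace_warning"
    return "blocked"
```

A production implementation would also attach the rule ID, required change, and deadline to the warning, and write the audit record described above.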
Override Scheduling with Reason Capture and Approvals
Given a user with "Agency Admin" role, When creating an override to defer or accelerate adoption for a specific scope, Then the system requires a target go-live date and a non-empty reason before submission. When an override is submitted, Then its status is "Pending Approval" and approvers in the configured workflow receive an alert. When an approver approves or rejects, Then the adoption schedule updates accordingly and an immutable audit record is created capturing submitter, approver, timestamps, old/new dates, reason, and any attachments. Given a requested defer date beyond the payer’s grace window, Then the system prevents submission and displays the maximum allowable date.
Overlapping Rules Preview and Resolution
Given two or more rules overlap on the same field and scope windows, When the user opens the conflict preview, Then the system lists conflicts and calculates the winner using precedence: more specific scope > later effective date > higher rule priority value; ties require manual resolution. When a conflict requires manual resolution, Then the user must choose the winning rule per field and save before go-live; otherwise go-live is blocked for that scope. When the user runs a sandbox export during a conflict, Then the preview shows side-by-side outcomes and clearly labels the winning rule per field.
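The stated precedence (more specific scope, then later effective date, then higher priority, with exact ties requiring manual resolution) can be expressed as a sort key. The rule-dict shape below is hypothetical:

```python
def resolve_conflict(rules):
    """Return the winning rule per the precedence above, or None when the
    top two rules tie exactly (manual resolution required, go-live blocked)."""
    def key(rule):
        return (rule["scope_specificity"],   # more specific scope wins
                rule["effective_date"],      # then later effective date (ISO string)
                rule["priority"])            # then higher priority value
    ranked = sorted(rules, key=key, reverse=True)
    if len(ranked) > 1 and key(ranked[0]) == key(ranked[1]):
        return None
    return ranked[0]
```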
Calendar and Timeline Views of Rule Lifecycles
When the user opens Rules Calendar/Timeline, Then effective, grace, adoption window, and expiry are visualized per rule and scope with distinct labels. Given filters for payer, plan, region, product line, and status, When applied, Then the view updates within 2 seconds on datasets up to 500 active rules. When the user clicks a rule bar, Then a detail panel opens showing rule metadata, scope, dates, override status, conflicts, and quick actions to open sandbox or conflict preview. All dates/times honor the agency’s configured timezone, and accessibility standards (keyboard navigation and screen reader labels) are met.
Safe Go-Live Coordination with Alerts and Auto-Update
Given a rule scheduled to go live in 7 days for a scope, Then the system sends a change alert with a concise summary and a link to the sandbox; a reminder is sent 24 hours prior. When go-live is executed (payer effective date or approved override date), Then Auto-Update applies Portal Presets atomically for that scope; success/failure is logged; on failure, the system rolls back and notifies admins. Given sandbox mode, Then users can run test exports using upcoming rules until go-live; after go-live, sandbox reflects the live rule set and labels the change as "Live".
Compliance Audit Trail & Reporting
"As a compliance officer, I want an audit trail and exportable reports of rule changes and acknowledgments so that I can respond to audits and reduce denial risk."
Description

Create an immutable, queryable audit trail capturing detected rule changes, approvals, preset versions, auto-update actions, user acknowledgments, sandbox test runs and outcomes, overrides, and go-live events. Provide time-stamped, actor-attributed records with diffs and artifacts; exportable audit-ready reports (PDF/CSV) by payer, plan, date range, and organization; and APIs/webhooks for external compliance systems. Enforce retention policies, encryption at rest/in transit, and role-based access. Demonstrates diligence and speeds payer audits.

Acceptance Criteria
Immutable Rule Change Event Logged
Given a payer rule change is detected by RulePulse, When ingestion completes, Then an audit event is created with fields: event_id (UUID v4), event_type='rule_change_detected', payer_id, plan_id, org_id (nullable), detected_at_utc (ISO 8601), actor='system:RulePulse', ruleset_version_before, ruleset_version_after, diff_patch (JSON), artifacts_uri[], checksum_sha256. Given the audit event exists, When any user (including admins) attempts to update or delete it via UI or API, Then the system denies the action with HTTP 403, logs an 'immutable_write_blocked' event referencing the original event_id, and the original record's checksum remains unchanged. Given PHI redaction is requested by a compliance role, When executed, Then the system creates a new 'redaction_applied' event that tokenizes allowed fields, links parent_event_id to the immutable source, and leaves the source unmodified; both events remain encrypted at rest.
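The checksum requirement above implies hashing a canonical serialization of the event, so that tampering is detectable without trusting the store. A minimal sketch, assuming canonical JSON (sorted keys, no whitespace) as the serialization format:

```python
import hashlib
import json
import uuid

def seal_audit_event(event):
    """Assign an event_id and a SHA-256 checksum over the canonical JSON
    of the event body. Serialization details here are illustrative."""
    event = dict(event, event_id=str(uuid.uuid4()))
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    event["checksum_sha256"] = hashlib.sha256(canonical.encode()).hexdigest()
    return event

def verify_audit_event(event):
    """Recompute the checksum over everything except the stored digest."""
    stored = event["checksum_sha256"]
    body = {k: v for k, v in event.items() if k != "checksum_sha256"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest() == stored
```

Any mutation of a sealed event (including by an admin) fails verification, which is the property the immutability criterion depends on.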
Query and Filter Audit Trail
Given an auditor provides filters (payer_id, plan_id, org_id, event_type list, actor, UTC date_range), When a query is submitted via UI or GET /audit/events, Then only matching events are returned with total_count, page_size (<=1000), and next_page_cursor. Given a dataset with <= 50,000 matching records, When the query runs, Then p95 response time <= 2 seconds and results are sorted by event_timestamp desc by default. Given a user timezone preference is set, When results are displayed or exported, Then timestamps are shown in the selected local timezone with UTC offset while stored/transmitted in UTC.
Audit-Ready Report Exports (PDF/CSV)
Given filters are applied to the audit trail, When the user exports to CSV or PDF, Then the file is generated within 60 seconds for up to 100,000 records and delivered via a pre-signed URL that expires in 15 minutes. Then both formats include a report header (prepared_for_org, prepared_by_user, prepared_at_utc, filters_summary, record_count, file_sha256) and standardized row columns (event_id, event_type, payer_id, plan_id, org_id, actor, timestamp_utc, ruleset_version_before, ruleset_version_after, outcome, artifacts_link). Then CSV conforms to RFC 4180; PDF is text-searchable and tagged for accessibility; exports are encrypted at rest and transmitted over TLS 1.2+.
Sandbox Test Runs Logged with Outcomes
Given a user executes a sandbox export against new rules, When the run completes, Then an audit event 'sandbox_test_run' is recorded with test_run_id, user_id, org_id, ruleset_version, input_sample_ids, started_at_utc, ended_at_utc, outcome_status (pass|fail|warnings), failing_rule_codes[], summary_metrics, and links to artifacts (sample output, validation logs). Given a go-live is initiated, When the selected ruleset has no 'sandbox_test_run' with outcome_status=pass within the last 14 days for that payer/plan/org, Then the go-live action is blocked with an explanatory error unless a compliance override is provided and logged. Given the same user reruns a sandbox test, When differences exist, Then a diff artifact between runs is attached and referenced in the new event.
External Compliance APIs and Webhooks
Given an external system registers a webhook endpoint and secret, When subscribed audit events occur, Then HMAC-SHA256 signed JSON payloads including idempotency_key, event_id, event_type, occurred_at_utc, and resource URIs are sent with at-least-once delivery and exponential backoff retries for up to 24 hours. Given an API client with OAuth2 client_credentials and scope audit:read, When it calls GET /audit/events with filters and pagination, Then the API returns 200 with results as specified; insufficient scope yields 403, missing/invalid token yields 401; rate limit 600 requests/min is enforced with Retry-After on 429. Given webhook delivery permanently fails (e.g., HTTP 410), When max retries are exhausted, Then a 'webhook_delivery_failed' event is recorded and the subscription is marked inactive; admins are notified.
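HMAC-SHA256 signing of webhook payloads, as required above, can be sketched with the standard library; the exact body serialization and signature encoding are assumptions:

```python
import hashlib
import hmac
import json

def sign_webhook(secret, payload):
    """Serialize the payload and compute an HMAC-SHA256 signature the
    receiver can verify with the shared secret."""
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body, signature

def verify_webhook(secret, body, signature):
    """Constant-time comparison guards against timing attacks."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The receiver recomputes the HMAC over the raw body; the included `idempotency_key` lets it discard duplicates under at-least-once delivery.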
Retention Policy Enforcement and Legal Hold
Given the default retention policy is 7 years (configurable per org to 3–10 years), When an event exceeds its retention period and is not under legal hold, Then it is purged within 30 days by a scheduled job and a 'record_purged' summary event is logged. Given a legal hold is applied to a payer/plan/org or date range, When retention is evaluated, Then affected records are excluded from purge until the hold is removed; hold application and removal are logged with actor and reason. Given a disaster recovery restore, When the system returns to service, Then event immutability, checksums, and retention/hold flags are preserved; a 'dr_restore_completed' audit event is recorded.
Role-Based Access, Encryption, and Access Logging
Given system roles (Caregiver, Ops Manager, Compliance Officer, Admin, API Client), When users request audit data, Then only Compliance Officer and Admin can view full details; Ops Manager sees summary without PHI; Caregiver has no access; API Client is constrained by scopes; enforcement is consistent across UI and API. Given a user initiates an export download, When the pre-signed URL is requested, Then the user must have an active MFA session and the download is logged with user_id, ip_address, user_agent, timestamp_utc, and file_sha256. Given data security requirements, When storing and serving audit events and exports, Then AES-256 encryption at rest and TLS 1.2+ in transit are enforced; KMS-backed keys are rotated at least every 12 months; any attempt to disable encryption is blocked and logged.

PlainSpeak Digest

Auto‑converts clinical notes and vitals into a short, friendly update after each visit, using lay terms, tone‑aware phrasing, and a simple status badge (Stable, Improving, Needs Attention). Highlights what was done, what changed, and anything to watch, while filtering out jargon and non‑shareable fields. Families grasp the essentials in seconds, reducing reassurance calls and confusion.

Requirements

Secure Data Filtering & Redaction Rules
"As an operations manager, I want digests to automatically exclude non-shareable content based on policy and consent so that family updates are compliant and protect patient privacy."
Description

Implements a configurable filtering engine that ingests clinical notes, vitals, voice-clip transcriptions, IoT sensor readings, and visit metadata, then automatically removes or redacts non-shareable items (e.g., internal staff remarks, billing codes, sensitive diagnoses, free-text that may include PHI) before family distribution. Enforces consent and privacy policies at the agency, patient, and recipient levels, applying the minimum necessary principle and honoring exemptions. Provides an admin UI to map source fields to shareable categories, define redaction patterns, and test sample outputs. Integrates with CarePulse’s visit completion workflow so the digest is generated only after documentation is signed. Produces a sanitized payload for downstream summarization while logging decisions for auditability.

Acceptance Criteria
Signature Gate for Digest Generation
Given a visit's documentation is unsigned, When the caregiver completes the visit workflow, Then no digest payload is generated or queued; And an event "digest_blocked_unsigned" is logged with visit_id and user_id. Given the documentation transitions to Signed status, When the digest job runs, Then exactly one sanitized payload is generated and marked idempotent for that visit_id; And duplicate payloads are not produced on retries. When a signed visit is later amended and re-signed, Then a new sanitized payload is generated with a new payload_version and the previous one is superseded but retained in audit logs.
Automatic Redaction of Non-Shareable Fields Across Sources
Given admin config marks internal staff remarks, billing codes, sensitive diagnoses, and non-shareable metadata as "Do Not Share", When the filtering engine processes notes, vitals, voice transcripts, IoT readings, and visit metadata, Then those fields are omitted or replaced with "[REDACTED]" while preserving structural placeholders and redaction flags. When free-text contains PHI/PII patterns (phone numbers, addresses, emails, SSN/MRN patterns, dates of birth), Then they are redacted per configured patterns; And a redaction_count is incremented. Then unit and integration tests on the provided fixture set report 0 leaks of marked non-shareable fields and 100% recall on the PHI patterns in the fixture set.
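The pattern-based PHI redaction above can be sketched with a small set of regexes; the patterns below are an illustrative subset, and a real engine would load the admin-configured patterns rather than hard-code them:

```python
import re

# Illustrative subset of the PHI patterns named above (SSN-style IDs,
# phone numbers, emails); not an exhaustive or production-grade list.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-style
    re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),  # US phone
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email
]

def redact_free_text(text):
    """Replace PHI matches with the [REDACTED] placeholder and return
    the sanitized text plus the redaction_count mentioned above."""
    count = 0
    for pattern in PHI_PATTERNS:
        text, n = pattern.subn("[REDACTED]", text)
        count += n
    return text, count
```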
Consent Policy Enforcement with Exemptions
Given agency-level defaults, patient-level overrides, and recipient-level restrictions/exemptions exist, When generating a payload for a specific recipient, Then only items permitted by the effective policy intersection are included; Exempted items are removed with reason "exemption" in the decision log. When no valid consent exists for a recipient, Then payload generation is aborted; No payload is stored or transmitted; And "consent_missing" is logged with visit_id and recipient_id. When a recipient has an approved exemption to receive sensitive diagnosis summaries, Then those summaries are included only for that recipient; All other recipients continue to have these items redacted.
Admin UI: Field Mapping, Validation, and Test Console
Given an admin maps source fields to shareable categories and defines redaction regex/patterns, When clicking Validate, Then invalid regex, unknown field paths, or unsafe mappings (e.g., exposing internal staff remarks) are blocked with inline errors; Save is disabled until all errors are resolved. When clicking Save on a valid configuration, Then the configuration is persisted with config_version, author, timestamp, and change summary; And it becomes the active configuration used by the filtering engine. When running Test with sample input, Then a sanitized preview renders within 2 seconds for a 100 KB sample; Redacted tokens are visually marked with reasons; The preview displays counts of omitted and redacted items. When restoring a prior config_version, Then the previous settings become active immediately and the change is recorded in the audit trail.
Sanitized Payload Contract for Summarization
Given a signed visit, When the filtering engine emits the sanitized payload, Then it conforms to JSON schema "sanitized_payload_v1" with required fields: visit_id, patient_token, caregiver_id, signed_at, items[], redaction_summary, config_version. Then patient_token is a stable pseudonymous identifier with no direct identifiers; caregiver_id is included only if policy permits; Otherwise it is replaced with a role label. Then payload size for the 95th percentile visit is <= 200 KB; For larger payloads, items are truncated per policy with reason "size_limit" and counts recorded. When schema validation fails, Then the job fails fast; No payload is sent downstream; An alert with error details is emitted to the monitoring channel.
Audit Logging and Deterministic Reproducibility
Given any permit/redact/drop decision, When the engine processes an item, Then an audit log entry is written containing visit_id, recipient_id, rule_id, action, field_path, content_hash (SHA-256 of original content), timestamp, actor, and config_version. Then audit logs are immutable (append-only) and retained for 7 years; Access requires admin role; Logs are searchable by visit_id, recipient_id, time range, and rule_id. When reprocessing the same input with the same config_version, Then the outcomes are deterministic and match prior decisions, verified by comparing decision hash digests.
Lay Terms Translation & Tone Control
"As a family member, I want visit updates written in clear, friendly language so that I can quickly understand my loved one’s status without medical training."
Description

Transforms sanitized clinical inputs into plain-language, friendly text at a target reading level (e.g., Grade 6–8), replacing jargon with lay equivalents via a curated glossary and rule-based/ML paraphrasing. Supports tone profiles (reassuring, neutral, urgent-aware) that adapt phrasing without minimizing risk. Includes length controls, simple section headers, and bullet-style highlights for mobile readability. Provides guardrails to avoid medical advice, include appropriate disclaimers, and fall back to safe templates when confidence is low. Integrates with CarePulse’s content services for model versioning, feature flags, and QA review hooks.

Acceptance Criteria
Reading Level & Jargon Replacement
Given sanitized clinical notes and vitals containing clinical jargon and abbreviations When the translation service generates the PlainSpeak text with target reading level Grade 6–8 Then all jargon and abbreviations are replaced with approved lay equivalents from the glossary, with zero unrecognized jargon remaining And the Flesch-Kincaid Grade Level (FKGL) score is <= 8.0 And no sentence exceeds 25 words and average sentence length is <= 18 words And only abbreviations from the allowed list (e.g., BP, HR) remain And the output includes metadata capturing glossary_version and readability_score
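The FKGL gate above uses the standard Flesch-Kincaid formula: 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. A rough sketch follows; the vowel-group syllable counter is a crude heuristic that overcounts words like "checked", so a production check would use a proper readability library:

```python
import re

def _syllables(word):
    # Rough heuristic: count contiguous vowel groups, minimum 1.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fkgl(text):
    """Approximate Flesch-Kincaid Grade Level; assumes non-empty text
    with at least one sentence-ending punctuation mark."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Short, common-word sentences score low; dense clinical prose scores far above the 8.0 threshold, which is what triggers further simplification.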
Tone Profiles: Reassuring, Neutral, Urgent‑Aware
Given sanitized inputs and a selected tone profile (reassuring, neutral, urgent-aware) When the system generates the PlainSpeak text Then the output conforms to the tone’s lexicon and phrasing rules with tone_classifier_confidence >= 0.80 And the reassuring tone uses calm, supportive phrasing and avoids alarmist language And the neutral tone uses factual, concise statements and avoids emotive language And the urgent-aware tone clearly flags concerns using conditional guidance (e.g., “please contact the care team if…”) without prescribing treatment And exclamation marks count = 0 and ALL-CAPS words count = 0 And tone profile used is recorded in metadata
Mobile-Friendly Structure & Length Controls
Given sanitized inputs ready for family-facing output When the digest is generated Then the output contains section headers in this order: “What we did”, “What changed”, “What to watch” And each section contains 1–5 bullet points And no bullet exceeds 150 characters And total character count of the digest is <= 800 characters And formatting preserves bullets and headers on iOS and Android rich-text targets (render check passes) And no URLs or inline code blocks are present
Guardrails: No Medical Advice, Disclaimers, and Privacy Safety
Given sanitized inputs and generated PlainSpeak text When policy checks run Then the output contains the standard disclaimer: “This update is for information only and does not include medical advice. For concerns, contact the care team.” And zero violations are detected for medical advice, dosing, diagnosis/prognosis directives, or prescriptive instructions (policy_violation_count = 0) And zero PHI beyond allowed shareable fields is present (no patient full name, exact birthdate, full address, phone/email) And no coded terms (ICD/CPT) appear And prohibited modal verbs for directives (e.g., “must take”, “start/stop medication”) count = 0
Low‑Confidence Safe Fallback Template
Given the generation confidence < 0.75 or glossary coverage < 100% or any policy violation is detected When producing the family-facing output Then the system emits a safe, template-based message with neutral tone and the three required section headers And placeholders are filled only with non-sensitive, high-confidence facts And a reason_code explains the fallback (e.g., LOW_CONFIDENCE, GLOSSARY_GAP, POLICY_FLAG) And review_required = true when QA review is enabled And no partial or mixed (template + freeform) outputs are sent to end users
Content Services Integration: Versioning, Flags, QA Review Hooks
Given content services configuration with model versions, feature flags, and QA settings When a digest is requested Then the response metadata includes model_version, glossary_version, policy_version, and content_id And if feature flag plainspeak_translation is OFF, the service returns status=DISABLED and does not generate text And if qa_review=true, the digest is stored as DRAFT, assigned a review_id, and is not delivered until approved And all generation events and decisions (including fallbacks) are logged to the audit stream with timestamp and actor/service id
Status Badge Scoring Engine
"As a clinician, I want a transparent status badge that reflects today’s findings and trends so that families get an accurate, concise signal of condition changes."
Description

Calculates a visit-level status badge (Stable, Improving, Needs Attention) using configurable rules that consider current vitals, short-term trends, adherence to care plan tasks, symptom changes, and clinician flags. Supports patient-specific baselines and alert thresholds, clinician overrides with required rationale, and an explanation snippet (e.g., "Improving: blood pressure trending toward baseline"). Exposes scores and inputs for audit and debugging. Runs synchronously at visit close-out and asynchronously if late-arriving IoT data changes the assessment, triggering digest updates when permitted.

Acceptance Criteria
Badge Calculation with Patient Baselines and Thresholds
Given patient-specific baseline vitals and alert thresholds are configured And the visit has current vitals, care plan task completion, symptom changes, and clinician flags captured When the scoring engine runs at visit close-out Then it computes a single badge in {Stable, Improving, Needs Attention} using the active rule set And includes a human-readable explanation snippet citing the primary drivers And persists badge, score components, explanation, rule version, and timestamp to the visit record And exposes the score and inputs via the audit/debug interface
Short-Term Trend Influences Badge
Given the last 72 hours of vitals and the previous 3 visits are available (or as configured) When the trend toward baseline exceeds the configured improvement threshold and no hard alert threshold is breached Then the engine returns Improving When the trend away from baseline exceeds the configured deterioration threshold or any hard alert threshold is breached Then the engine returns Needs Attention When deltas are within the neutral band and no alerts are present Then the engine returns Stable And thresholds are configurable at org and patient level with the effective values recorded in audit output
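The trend-band logic above maps to a small decision function. A sketch under the assumption that the trend is expressed as a signed magnitude (positive = toward baseline) and thresholds are positive values:

```python
def compute_badge(trend_toward_baseline, improvement_threshold,
                  deterioration_threshold, hard_alert_breached):
    """Return the visit badge per the criterion above: hard alerts always
    dominate; otherwise the trend is compared against the configured bands."""
    if hard_alert_breached:
        return "Needs Attention"
    if trend_toward_baseline >= improvement_threshold:
        return "Improving"
    if trend_toward_baseline <= -deterioration_threshold:
        return "Needs Attention"
    return "Stable"  # within the neutral band, no alerts
```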
Clinician Override of Badge
Given a computed badge is present for the visit When a clinician chooses to override the badge Then the UI requires selecting {Stable, Improving, Needs Attention} and entering a non-empty rationale (min 10 characters) And records clinician id, timestamp, original badge, new badge, and rationale in the visit audit trail And updates the explanation snippet to reflect the override And blocks save if the rationale is missing or below the minimum length And allows authorized users to revert to the computed badge with a corresponding audit entry
Async Recalculation on Late IoT Data
Given new IoT vitals arrive after visit close-out When the new data alters inputs beyond configured sensitivity thresholds Then the engine re-evaluates asynchronously within 2 minutes of data receipt And if the badge changes, it updates the visit record, versions the previous result, and triggers a digest update when sharing is permitted And if the badge does not change, no digest update is sent And all recalculations and notifications are logged with correlation ids
Explanation Respects Sharing Rules
Given the explanation snippet is generated for external sharing When any driver is marked non-shareable or contains disallowed PHI Then the family-facing snippet excludes those details and uses approved lay terms And the internal audit/debug view retains full detail with field-level provenance And tone-aware phrasing matches the selected audience profile per configuration
Performance and Determinism at Close-Out
Given a standard visit close-out on a supported mobile device and network When the scoring engine executes synchronously Then median execution time is ≤ 150 ms and p95 ≤ 400 ms for visits with ≤ 10 vitals and ≤ 20 tasks And results are deterministic: re-running with identical inputs yields identical outputs And if required inputs are missing, the engine applies patient baselines and default rules, flags assumptions in the explanation, and returns a badge without crashing And failures return a structured error with a trace id and do not block visit submission
Auditability and Rule Versioning
Given a badge is computed or overridden Then the system stores: inputs (vitals values/timestamps, trend stats, adherence %, symptom deltas, clinician flags), rule set id/version, rule path taken, computed score, final badge, explanation, user ids for overrides, and timestamps And an authorized auditor can retrieve the record via API with filters for patient, visit, and date range And audit records are immutable and retained for at least 7 years And rule changes are versioned and effective-dated; computations record and use the effective version at time of run
Actions, Changes, and Watch Items Extractor
"As a caregiver finishing a visit, I want the digest to clearly highlight actions taken and notable changes so that families understand progress and what to monitor."
Description

Identifies and summarizes what was done (completed tasks, meds administered, exercises), what changed (vital deltas, symptom updates, mobility/behavior notes), and what to watch (early warning signs) from structured fields, voice-clip notes, and sensor events. De-duplicates across sources, ranks by clinical significance, and formats outputs as short, mobile-friendly bullets. Incorporates visit context (date/time, caregiver, route) and links to relevant care plan goals. Provides clinician review/edit surfaces prior to release when agency policy requires human-in-the-loop.

Acceptance Criteria
Post-Visit Multi-Source Extraction and Summarization
Given a completed visit with structured fields (tasks, meds, vitals), a transcribed voice clip, and sensor events available And an agency sharing policy that marks certain fields as non-shareable When the extractor runs for that visit Then it outputs three sections labeled "What was done", "What changed", and "What to watch" if applicable And each section contains 1 to 5 bullets, capped at 5, ordered by significance And each bullet is <= 160 characters and uses mapped lay terms And any fields marked non-shareable are excluded And the extraction job completes within 10 seconds for visits with up to 200 events/notes
Cross-Source De-duplication and Merge Logic
Given the same action or observation appears in multiple sources (e.g., structured task completion and voice note) When the extractor assembles bullets Then duplicates are merged into a single bullet And the merged bullet preserves the most precise quantitative value and most recent timestamp And no two bullets in the same section describe the same event/entity And the output includes at most one bullet per unique task/medication/exercise per visit unless outcomes differ materially
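The merge rule above (one bullet per event, most recent timestamp, most precise value) can be sketched as a keyed fold; the candidate shape and `event_key` naming are assumptions, and "most precise" is approximated here as "carries a quantitative value at all":

```python
def merge_bullets(candidates):
    """Collapse same-event candidates from multiple sources into one entry,
    keeping the latest timestamp and any available quantitative value."""
    merged = {}
    for cand in candidates:
        key = cand["event_key"]
        if key not in merged:
            merged[key] = cand
            continue
        kept = merged[key]
        merged[key] = {
            "event_key": key,
            "timestamp": max(kept["timestamp"], cand["timestamp"]),
            "value": cand["value"] if cand["value"] is not None else kept["value"],
        }
    return list(merged.values())
```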
Clinical Significance Ranking and Section Ordering
Given agency-configured significance rules and thresholds for vitals deltas, symptoms, and events When scoring extracted candidates Then items with threshold-exceeding abnormalities outrank routine actions and minor notes And ties are broken by recency, then source priority (sensor > structured > voice) And each section presents the top N (up to 5) by descending score And the digest emphasizes "What to watch" above other sections when any item is critical
Visit Context and Care Plan Goal Linking
Given visit metadata (date/time, caregiver, route) and mapped care plan goals When generating the digest Then the header shows visit date/time, caregiver name, and route identifier And each bullet that aligns to a goal includes a tappable link to that goal (ID and short title) And bullets without a mapped goal show no link And missing metadata results in a clear placeholder without blocking output
Human-in-the-Loop Review and Release Control
Given agency policy requires human review before release When extraction completes Then the clinician review screen displays all sections and bullets with edit, reorder, add, and delete controls And the digest cannot be released until an authorized user approves And all edits are captured in an audit trail with user ID, timestamp, and change summary And if policy does not require review, the digest auto-releases immediately after extraction
Mobile-Friendly Bullet Formatting and Readability
Given the digest is intended for mobile viewing by families When bullets are generated Then each bullet is written at or below 8th-grade reading level (Flesch-Kincaid <= 8.0) And abbreviations and clinical jargon are replaced by lay terms using the approved dictionary; unrecognized terms are omitted And bullets use sentence case, no semicolons, and avoid passive voice And bullets are trimmed to <= 160 characters without truncating critical qualifiers (e.g., dose, unit, direction)
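The Flesch-Kincaid gate above could be approximated as follows; the vowel-group syllable counter is a naive stand-in, and a production check should use a pronunciation dictionary or a vetted readability library:

```python
import re

def count_syllables(word):
    # Naive heuristic: one syllable per vowel group; rough but cheap.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Approximate Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)
```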
Early Warning 'Watch' Detection from Vitals and Sensors
Given vitals deltas and sensor anomalies are available for the visit And watch rules define thresholds (e.g., SpO2 drop >= 4 points, HR change >= 20%, fall detection) When evaluating candidates for "What to watch" Then qualifying items are included with a concise reason (e.g., "SpO2 dropped 5 pts since yesterday") And time context (today vs last visit) is stated when available And if no items qualify, the "What to watch" section is omitted
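A minimal sketch of the watch-rule evaluation, using the example thresholds. Field keys and reason phrasing are illustrative assumptions; thresholds are agency-configurable:

```python
def watch_items(vitals_now, vitals_prev, events):
    """Evaluate 'What to watch' candidates against the example rules:
    SpO2 drop >= 4 points, heart-rate change >= 20%, fall detection."""
    items = []
    spo2_drop = vitals_prev["spo2"] - vitals_now["spo2"]
    if spo2_drop >= 4:
        items.append(f"SpO2 dropped {spo2_drop} pts since last visit")
    hr_change = abs(vitals_now["hr"] - vitals_prev["hr"]) / vitals_prev["hr"]
    if hr_change >= 0.20:
        items.append(f"Heart rate changed {hr_change:.0%} since last visit")
    if "fall_detected" in events:
        items.append("Possible fall detected by sensor")
    return items  # empty list => omit the "What to watch" section
```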
Delivery Preferences & Consent-driven Distribution
"As a care coordinator, I want digests delivered via each family’s preferred channel with proper consent so that updates are convenient and compliant."
Description

Manages recipient lists, consent records, and per-recipient delivery preferences (email, SMS link, family portal push). Enforces opt-in/opt-out, quiet hours, throttling, and rate limits. Supports language preference selection per recipient and patient, and sends previews to clinicians when required. Tracks delivery status, bounces, opens (where permissible), and auto-retries with channel fallback. Integrates with CarePulse identity, patient profiles, and compliance settings to ensure only authorized recipients receive the digest.

Acceptance Criteria
Consent Opt-In/Out Enforcement
- Given recipient A lacks recorded consent for patient P, When a visit digest is generated, Then recipient A is excluded from distribution and the audit log records reason "No consent" with timestamp.
- Given recipient B provides explicit opt-in consent (channel, scope, language, timestamp), When a digest is generated, Then B is eligible for delivery only via enabled channels within that scope.
- Given recipient C revokes consent, When the next digest is generated, Then C receives no messages, any queued messages to C are canceled, and the consent ledger records revocation (timestamp, actor, method).
- Given a consent record has an expiration date, When that date passes, Then delivery attempts are blocked until renewed and the event is logged as "Consent expired".
- Given an SMS reply of STOP/UNSUBSCRIBE is received from recipient R, When future digests are produced, Then the SMS channel is disabled for R and the preference/consent state reflects the opt-out.
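The per-recipient consent gate can be sketched as a single decision function. The consent-record shape (`opted_in`, `channels`, `expires`) is an illustrative assumption, not the CarePulse ledger format:

```python
from datetime import date

def delivery_decision(consent, today):
    """Gate one recipient for a digest send; returns (eligible, reason)
    so the reason can be written to the audit log."""
    if consent is None or not consent.get("opted_in"):
        return False, "No consent"
    if consent.get("expires") and today > consent["expires"]:
        return False, "Consent expired"
    if not consent.get("channels"):
        return False, "No enabled channels"
    return True, "Eligible"
```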
Authorized Recipient Resolution
- Given CarePulse identity and patient P relationships, When a distribution list is compiled, Then only recipients with an active authorized role (e.g., legal guardian, proxy, designated caregiver) permitted by compliance settings are included.
- Given a recipient’s authorization is removed or suspended, When the next distribution runs, Then they are excluded and a security/audit event is recorded with reason and timestamp.
- Given staff are on the care team but not configured as family recipients, When digests run, Then staff do not receive family-facing digests unless explicitly permitted by compliance settings and added to the recipient list.
- Given cached recipient lists exist, When a send is triggered, Then authorization is re-validated per recipient at dispatch time; any failures prevent send and are logged.
- Given the patient’s authorized contacts are updated, When a subsequent digest is generated, Then the new list is reflected immediately without requiring system restart.
Delivery Preferences, Quiet Hours, and Rate Limits
- Given recipient R has preferences email=on, sms=off, push=on, When distributing a digest, Then only email and push are used.
- Given quiet hours are set for R (e.g., 21:00–07:00 recipient timezone), When a digest is ready at 22:15 local time, Then delivery is deferred until quiet hours end unless the digest is flagged "Needs Attention" and policy allows override; the decision is logged.
- Given configured account rate limits and per-recipient throttle windows, When multiple digests are queued, Then sends respect both; excess are queued and processed FIFO without dropping messages.
- Given duplicate digests for the same visit and recipient are generated, When dispatching, Then only one message is sent due to idempotency keys and duplicates are marked "Skipped: Duplicate".
- Given R updates delivery preferences, When the next digest is sent, Then the updated preferences take effect immediately and appear in the audit trail.
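Two of the mechanics above, quiet hours spanning midnight and per-(visit, recipient) idempotency keys, can be sketched as follows; the key recipe is an illustrative assumption:

```python
import hashlib
from datetime import time

def in_quiet_hours(local_time, start=time(21, 0), end=time(7, 0)):
    """True when local_time falls inside a quiet-hours window, including
    windows that span midnight (defaults mirror the 21:00-07:00 example)."""
    if start <= end:
        return start <= local_time < end
    return local_time >= start or local_time < end

def idempotency_key(visit_id, recipient_id):
    """One key per (visit, recipient) pair so a duplicate digest is
    marked "Skipped: Duplicate" instead of re-sent."""
    return hashlib.sha256(f"{visit_id}:{recipient_id}".encode("utf-8")).hexdigest()
```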
Language Preferences and Localization
- Given patient P language=Spanish and recipient R language=English, When generating the digest for R, Then content (including badge labels) is produced in English with lay terms.
- Given a recipient’s preferred language is unsupported, When generating the digest, Then the system falls back to patient language; if unsupported, to agency default, and the fallback choice is logged.
- Given recipient R prefers Spanish, When rendering dates, times, numbers, and units, Then formats follow the selected locale (e.g., 24h/12h, decimal separators) and pass localization tests.
- Given multiple recipients with different language preferences, When sending for a single visit, Then each recipient receives a version localized to their preference with no cross-language leakage.
- Given a clinician preview is shown, When the target recipient language differs from clinician’s, Then the preview indicates the target language and shows a machine translation note if applicable.
Clinician Preview Required Before Send
- Given compliance setting "Preview required" is enabled for patient P, When a digest is produced, Then distribution is held and a preview notification is sent to the assigned clinician(s).
- Given the clinician approves the preview within the configured window, When approval is recorded, Then distribution proceeds respecting recipient preferences, quiet hours, and rate limits.
- Given the clinician rejects the preview, When a rejection reason is submitted, Then no messages are sent, the reason is logged, and the digest is flagged for revision/regeneration.
- Given the preview window expires without action, When the timer elapses, Then the send is canceled and an escalation notification is sent per configuration; the event is logged.
- Given "Preview required" is disabled, When a digest is generated, Then messages are sent immediately subject to consent, authorization, and delivery rules.
Delivery Tracking, Auto-Retry, and Channel Fallback
- Given an email bounces (hard), When bounce feedback is received, Then email is marked undeliverable for the recipient, an alert is recorded, and the system retries via the next available permitted channel.
- Given an SMS send fails with a transient carrier error, When retry policy applies, Then the system retries up to the configured maximum with backoff; on final failure, allowed channel fallback is attempted and logged.
- Given a push token is expired, When a push attempt is made, Then the token is invalidated, the user is prompted to re-authenticate on next portal login, and fallback to email/SMS link is attempted if permitted.
- Given a message is delivered, When opens are permissible and supported by channel, Then opens are tracked; if not permissible, no open tracking is embedded.
- Given a digest has already been delivered to a recipient via any channel, When fallback is considered, Then idempotency prevents duplicate delivery and the decision is logged.
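The retry-with-backoff and channel-fallback flow can be sketched like this. The `send` callback returning 'ok' | 'transient' | 'hard' is a stand-in for real provider responses, and the backoff delay is recorded rather than slept to keep the sketch testable:

```python
def dispatch_with_fallback(send, channels, max_retries=3):
    """Attempt delivery over permitted channels in priority order.
    Transient failures retry up to max_retries with exponential backoff;
    a hard failure falls through to the next channel."""
    log = []
    for channel in channels:
        for attempt in range(1, max_retries + 1):
            status = send(channel, attempt)
            backoff_s = 2 ** (attempt - 1)  # delay before the next try
            log.append((channel, attempt, status, backoff_s))
            if status == "ok":
                return "delivered", channel, log
            if status == "hard":
                break  # undeliverable on this channel: try fallback
    return "failed", None, log

def fake_send(channel, attempt):
    # Hypothetical provider: email hard-bounces; SMS succeeds on retry.
    if channel == "email":
        return "hard"
    return "ok" if attempt == 2 else "transient"

status, used_channel, log = dispatch_with_fallback(fake_send, ["email", "sms"])
```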
Compliance Filtering and Secure Content Packaging
- Given visit notes include fields marked internal-only or outside consent scope, When generating digest content, Then such fields are excluded from all outbound payloads.
- Given SMS is the selected channel, When composing the message, Then no PHI appears in the SMS body; a secure expiring link to the portal is provided instead.
- Given email is the selected channel, When composing subject and preview text, Then they contain no sensitive PHI; detailed content is placed in the secure body or behind a secure link per policy.
- Given recipient R consent scope is limited (e.g., status badge + activities performed), When sending the digest, Then only permitted sections are included for R.
- Given delivery and audit logging, When events are recorded, Then logs store metadata only (timestamps, recipient ID, channel, status) and exclude message bodies except where encrypted audit storage is explicitly enabled by compliance settings.
Audit Trail & Visit Attachment
"As a compliance officer, I want a complete, immutable record of each digest and its distribution so that audits can be satisfied quickly and confidently."
Description

Creates an immutable audit log for each digest generation and delivery, including inputs used (field list, timestamps), applied redactions, model/glossary versions, badge score rationale, clinician overrides, recipients, and delivery outcomes. Attaches a read-only snapshot of the digest to the visit record for future reference and one-click inclusion in CarePulse’s compliance reports. Supports export to CSV/PDF and secure sharing for audits, with retention aligned to agency policy.

Acceptance Criteria
Immutable Audit Log Capture on Digest Generation
Given a visit has a completed chart and a PlainSpeak Digest is generated When the digest is finalized Then the system writes an append-only audit record containing: audit_id, visit_id, digest_version, generation_timestamp_utc, input_field_list_with_source_timestamps, model_version_id, glossary_version_id, badge_label, badge_numeric_score, badge_rationale_text, redactions_list(field_name, reason, rule_id), system_timezone, generator_node_id And the audit record includes a content_hash computed over the stored fields And any attempt to update or delete the record is rejected and logged as a separate security event
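The content_hash requirement can be sketched as SHA-256 over a canonical serialization, so any later mutation of a stored audit record is detectable. The canonicalization choice (sorted keys, compact separators) is an assumption, not a CarePulse format:

```python
import hashlib
import json

def content_hash(record):
    """SHA-256 over a canonical JSON serialization of the audit record;
    recomputing and comparing detects tampering."""
    canonical = json.dumps(record, sort_keys=True,
                           separators=(",", ":"), default=str)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

rec = {"audit_id": "a1", "visit_id": "v9", "badge_label": "Stable"}
h1 = content_hash(rec)
```

Sorting keys makes the hash independent of field order, so semantically identical records always hash the same.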
Read-Only Digest Snapshot Attached to Visit Record
Given a digest is generated for a visit When the visit record is viewed Then a read-only snapshot artifact (exact digest text, badge, timestamp) is attached and visible And attempts by any role to edit the snapshot content are blocked with an error and logged And regenerating the digest creates a new snapshot version while preserving prior versions with version numbers and timestamps And an "Include in Compliance Report" action attaches the selected snapshot version to the report package with a reference back to its audit_id
Delivery Outcome and Recipient Logging
Given the digest is delivered to recipients When a send attempt is made via any channel Then the audit log records for each recipient: recipient_id_or_contact, channel, send_timestamp_utc, provider_message_id, delivery_status (queued|sent|delivered|failed), failure_reason_code_if_any, final_status_timestamp_utc And resends create additional delivery entries linked to the same digest_version And delivery outcomes are queryable by visit_id and by recipient within the CarePulse UI and exports
Export to CSV/PDF and Secure Audit Sharing
Given an authorized user selects "Export Audit" When CSV is generated Then the file includes one row per digest_version with columns: audit_id, visit_id, patient_id_masked, caregiver_id, generation_timestamp_utc, model_version_id, glossary_version_id, badge_label, badge_numeric_score, redaction_count, clinician_override_flag, recipients_count, last_delivery_status, retention_policy_id, content_hash And when PDF is generated Then it includes the human-readable digest snapshot plus a metadata appendix with the fields above And each export is accompanied by a SHA-256 checksum file And when "Secure Share" is used Then a time-bound, signed URL is created with configurable expiry, optional passcode, single-tenant domain, and can be revoked; all access/download events are logged in the audit trail
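The CSV export with its SHA-256 checksum file can be sketched as follows; the column order is copied from the criteria above, and the in-memory buffer stands in for a streamed file write:

```python
import csv
import hashlib
import io

# Column order copied from the acceptance criteria above.
COLUMNS = [
    "audit_id", "visit_id", "patient_id_masked", "caregiver_id",
    "generation_timestamp_utc", "model_version_id", "glossary_version_id",
    "badge_label", "badge_numeric_score", "redaction_count",
    "clinician_override_flag", "recipients_count", "last_delivery_status",
    "retention_policy_id", "content_hash",
]

def export_audit_csv(rows):
    """One CSV row per digest_version, plus the SHA-256 of the file
    bytes for the accompanying checksum file."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    data = buf.getvalue().encode("utf-8")
    return data, hashlib.sha256(data).hexdigest()

row = {c: "" for c in COLUMNS}
row["audit_id"] = "a1"
data, checksum = export_audit_csv([row])
```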
Redaction and Non-Shareable Field Logging
Given agency policy defines non-shareable fields and redaction rules When a digest is generated Then non-shareable fields are excluded from the snapshot content And the audit log records each applied redaction with field_name, rule_id, policy_version_id, and reason And manual clinician redactions are captured with user_id, timestamp_utc, and a masked diff of removed content And exports and secure shares never include the original redacted values
Retention Policy Enforcement
Given the agency retention policy is configured When an audit record or snapshot reaches its retention threshold Then the system schedules and executes purge, logging a deletion event with audit_id, deletion_timestamp_utc, policy_version_id, and operator (system) And records on legal hold are exempt until the hold is released and this exemption is logged And expired secure-share links are invalidated automatically and attempts to access them return 410 and are logged And backup/replica purge is completed within the policy-defined window and is attestable via a retention compliance report
Versioning and Override Traceability
Given a clinician overrides the badge or edits allowed digest text prior to delivery When the override is saved Then a new digest_version is created and the audit log records override_user_id, override_timestamp_utc, fields_changed, previous_values_masked_as_needed, new_values, and override_reason_note And the snapshot displays a "Clinician-edited" indicator with the override_user_id and timestamp And badge score rationale stored for each version includes contributing_signals list and weights And a "Revert to Previous Version" action creates another version that restores the prior snapshot and logs the reversion linkage
Localization & Accessibility
"As a family member with accessibility needs or a preferred language, I want updates I can read or hear comfortably so that I can stay informed without barriers."
Description

Provides multilingual digest generation and UI localization for common agency languages, with right-to-left support where needed. Ensures accessibility with screen-reader-friendly structure, color-blind-safe status badge palette, high-contrast mode, readable font sizes on mobile, and alt text for icons. Offers a read-aloud option and downloadable plain-text versions. Maintains glossary entries per language to preserve lay meaning and tone. Configurable per patient and per recipient with safe fallbacks when a requested language is unavailable.

Acceptance Criteria
Per-Patient and Per-Recipient Language Configuration with Safe Fallback
Given a patient default language L1 and a recipient preference L2 that is supported, When a visit completes, Then the recipient receives the digest and localized UI in L2.
Given a recipient preference L3 that is not supported, When a visit completes, Then the digest is generated in L1 (patient default), and an audit log records recipient_id, requested_language=L3, fallback_language=L1, timestamp.
Given no patient default language is set, When a visit completes, Then the digest is generated in the system default (en-US) and the fallback is logged with patient_id and recipient_id.
Given multiple recipients with different language preferences, When digests are sent, Then each recipient receives the digest in their own selected language without cross-language leakage.
Given a user updates the patient or recipient language, When the next digest is generated, Then the new language is applied without requiring app restart.
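The fallback chain above reduces to a small resolver. The supported-language set is an illustrative assumption; the returned flag is what gets audit-logged:

```python
# Assumed supported-language set for illustration.
SUPPORTED = {"en-US", "es-US"}

def resolve_language(recipient_pref, patient_default, system_default="en-US"):
    """Fallback chain from the criteria: recipient preference, then
    patient default, then system default. Returns (language, fell_back)."""
    for lang in (recipient_pref, patient_default, system_default):
        if lang in SUPPORTED:
            return lang, lang != recipient_pref
    return system_default, True
```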
Right-to-Left Layout and Mirroring for RTL Languages
Given an RTL language (e.g., ar, he) is selected, When rendering the digest and UI, Then document direction is set to rtl, text aligns appropriately, and directional icons and navigation are mirrored. Then status badges and chips follow RTL reading order; numbers in vitals remain LTR using bidi controls where appropriate. Then truncation, wrapping, and punctuation spacing are correct for RTL across mobile widths 320–414 px.
Given switching between LTR and RTL digests, When navigating, Then layout direction updates within 200 ms without visual artifacts or overlap.
Screen Reader Semantics and Alt Text Coverage
Given a screen reader is active (TalkBack/VoiceOver), When reading a digest, Then headings, sections (What was done, Changes, Watch outs), and the status badge expose accessible names/roles with a logical reading order matching the visual order. Then the status badge announces as "Status: <localized state>" plus a short description; color alone is not required to convey state. Then all actionable controls (language switcher, download, read-aloud) have accessible names and hit areas ≥ 44x44 px, and are reachable via keyboard/gestures. Then decorative icons are hidden from the accessibility tree; informative icons have localized alt text.
Given automated audits (axe), Then no critical or serious accessibility violations are reported on the digest view.
Color-Blind-Safe Status Badges and High-Contrast Mode
Given the standard theme, Then each status badge (Stable, Improving, Needs Attention) meets WCAG 2.1 AA: text contrast ≥ 4.5:1; non-text contrast ≥ 3:1; and includes icon + label + shape so state is discernible without color.
Given simulations for deuteranopia, protanopia, and tritanopia, Then badges remain distinguishable in ≥ 90% of automated checks and pass manual verification.
Given high-contrast mode is enabled, When viewing the digest, Then all UI elements meet contrast ≥ 7:1 and focus indicators are ≥ 3:1; no content loss, overlap, or clipping occurs. Then body text on mobile renders at a minimum of 16 px (or OS equivalent), respects system text-size settings, and avoids truncation of critical content.
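The contrast thresholds above follow the standard WCAG formula, which an automated badge-palette check could implement directly:

```python
def relative_luminance(rgb):
    """WCAG relative luminance from 8-bit sRGB components."""
    def linearize(c8):
        c = c8 / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """(L1 + 0.05) / (L2 + 0.05) with L1 the lighter luminance; WCAG AA
    requires >= 4.5 for body text and >= 3.0 for non-text elements."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

Black on white yields the maximum ratio of 21:1; identical colors yield 1:1.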
Read-Aloud (TTS) Delivery in Selected Language
Given a user taps Read Aloud on a digest localized to language L, When playback starts, Then TTS uses a voice for L (or closest available variant) and correct locale settings for numbers/units. Then playback controls (play/pause/seek) are operable via touch and screen reader; state changes are announced; focus is not trapped.
Given TTS for L is unavailable, When starting read-aloud, Then the system falls back to the patient default language voice, displays a non-blocking notice, and logs requested_language and fallback_voice. Then the spoken order matches visible order and excludes non-shareable fields.
Downloadable Plain-Text Digest Export
Given a digest is generated, When the user taps Download Plain Text, Then a UTF-8 .txt file downloads within 2 seconds on a 3G connection, named CarePulse_Digest_<patient_alias>_<YYYYMMDD_HHMM>_<lang>.txt, size ≤ 200 KB. Then the file includes only shareable sections and redacts PHI beyond the patient alias per policy; contains no markup; lines wrap at ≤ 80 characters.
Given the digest language is RTL, Then the file includes Unicode directionality marks to preserve reading order in common editors.
Given offline mode, When the action is invoked, Then the export is queued and saved once connectivity is restored, preserving timestamp and language metadata.
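The filename pattern and 80-character wrapping can be sketched as follows; the section input shape is an assumption, and sections are presumed already filtered to shareable content:

```python
import textwrap
from datetime import datetime

def plain_text_export(patient_alias, lang, generated_at, sections):
    """Build the plain-text digest: filename per the pattern in the
    criteria, UTF-8 body with lines wrapped at <= 80 characters."""
    name = (f"CarePulse_Digest_{patient_alias}_"
            f"{generated_at:%Y%m%d_%H%M}_{lang}.txt")
    lines = []
    for title, bullets in sections:
        lines.append(title)
        for bullet in bullets:
            lines.extend(textwrap.wrap(f"- {bullet}", width=80))
        lines.append("")  # blank line between sections
    return name, "\n".join(lines).encode("utf-8")

name, body = plain_text_export(
    "JD", "en", datetime(2025, 10, 1, 14, 30),
    [("What was done", ["Helped with morning exercises and checked blood "
                        "pressure; the reading was in the expected range."])],
)
```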
Per-Language Lay Glossary Application and Safeguards
Given a language L is selected, When generating the digest, Then the per-language lay glossary is applied to replace/define clinical terms and enforce tone rules, achieving ≥ 95% coverage of terms from the approved jargon list for that digest. Then unit labels and vitals are localized with locale-appropriate formatting (digits and separators) and standard units (e.g., mmHg, bpm).
Given a glossary entry is missing in L, Then the system falls back to a base-language lay equivalent or inserts a localized plain description; the gap is logged.
Given agency-level glossary overrides exist, When generating digests for that agency, Then overrides take precedence for that agency only and are auditable (who, what, when).

Consent Circles

Lets agencies define share groups (e.g., Immediate Family, Care Proxy, Financial Contact) with field‑level visibility and time‑boxed access. Invites are verified, expirable, and revocable, with an approval trail so only the right people see the right details. This keeps updates compliant and personalized without IT overhead.

Requirements

Share Group Templates & Custom Groups
"As an operations manager, I want to define and reuse share groups tailored to each client so that the right stakeholders consistently receive only the information they’re permitted to see."
Description

Provide out‑of‑the‑box share group templates (e.g., Immediate Family, Care Proxy, Financial Contact) and allow agencies to create, clone, and customize groups per client or globally. Support membership management (invite, role/label, contact method), reusable templates, and assignment to individual clients or cohorts. Integrate with CarePulse’s client profiles, visit notes, scheduling, and reporting so that group membership determines what data is eligible to be shared. Include bulk assignment, search, and safeguards to prevent duplicate or conflicting groups. Ensure mobile-first management with offline-safe drafts and server-side validation.

Acceptance Criteria
Create Group from Template (Mobile, Offline Draft)
Given a mobile user is offline on the 'New Share Group' screen, When they select a built-in template (Immediate Family, Care Proxy, Financial Contact) and enter a unique group name, Then an offline draft is stored locally with templateId, groupName, scope (global or client), and a lastSaved timestamp.
Given connectivity is restored, When the user taps 'Save & Sync', Then the server validates required fields, uniqueness of groupName within the chosen scope, allowed configuration keys, and returns 201 with groupId on success, or 4xx with field-level errors on failure, and the draft status updates to 'Synced' or 'Needs Fix'.
Given the user discards the draft before syncing, When they confirm discard, Then no server request is made and the draft is removed from local storage.
Clone and Customize Group (Global Scope)
Given an existing group or template is visible, When the user selects 'Clone', Then a new editable draft is created copying field-visibility rules, roles, and membership exclusions, and appends 'Copy' to the name.
When the user edits field-visibility rules, Then only admin-permitted fields are editable and disallowed fields are rejected with inline errors.
When the user saves, Then the system records an audit log (actor, timestamp, sourceId, changeset), assigns a new version number (vN+1 if cloned from a template), and exposes the cloned group in the global catalog within 5 seconds.
Manage Membership: Invite, Verify, Role & Contact Method
Given a group assigned to a client, When an admin adds a member with role/label and contact method (SMS or Email) and sends invite, Then the system creates a pending member record with inviteId, role, contact method, and expiry TTL (default 72 hours) and dispatches a verification link/code.
When the recipient completes verification within TTL, Then the member state becomes 'Active', the verified contact is stored, and an audit entry is recorded.
When the invite expires or is revoked, Then the member state becomes 'Expired' or 'Revoked' respectively, access is blocked, and re-invite is allowed with a new inviteId.
Assign Groups to Clients and Cohorts
Given a saved group, When a user assigns it to a single client, Then the assignment appears on the client's profile within 2 seconds and is included in that client's sharing eligibility.
Given a saved group, When a user assigns it to a cohort of N clients, Then N assignments are created to the current cohort snapshot; clients added to the cohort later are not auto-assigned.
Given an assignment operation fails for one or more clients, When the process completes, Then the UI shows per-client error details and offers a retry for the failed subset only.
Field-Level Visibility Enforcement Across Modules
Given a client with an assigned group, When an authorized share is generated from client profile, visit notes, scheduling, or reporting, Then only the fields enabled by the group's visibility rules are included and all other fields are omitted or redacted.
Given a field is disabled in the group's visibility rules, When a share preview is rendered, Then that field is absent and the system displays a 'hidden by group policy' indicator in admin preview mode.
Given group visibility rules are updated, When a new share is generated, Then changes take effect immediately for new shares and do not retroactively alter previously delivered shares; this behavior is documented in the audit trail.
Bulk Assignment & Search
Given the group catalog is open, When a user searches by name, template type, role label, or last modified, Then results return in under 300 ms for up to 500 groups and include result counts.
Given a user selects M clients (M up to 1,000) and chooses 'Bulk Assign', When they confirm, Then the system assigns the group to all selected clients, shows a progress indicator, and returns a completion summary with successes, failures, and retry options; no client is assigned more than once.
Given a search query yields no results, Then the UI displays 'No groups found' and offers to create a new group using a template.
Duplicate and Conflict Safeguards
Given a user attempts to create or assign a group with a name that duplicates (case-insensitive) an existing group in the same scope (global or client), When they save, Then the system blocks the action and surfaces a 'Duplicate group name' error with a link to the existing group.
Given a user attempts to assign two groups to the same client that have identical field-visibility rules and the same label, When they confirm, Then the system flags a conflict, prevents duplicate assignment, and suggests merging or reusing the existing group.
Given similar groups exist, When the user types a new group name, Then typeahead surfaces existing groups and templates to reduce accidental duplicates.
Field‑Level Visibility Matrix
"As a compliance officer, I want granular, field‑level controls mapped to share groups so that only authorized details are visible to each audience across all surfaces and exports."
Description

Implement a field taxonomy and rules engine that maps data elements (e.g., vitals, meds, diagnoses, visit notes sections, schedules, addresses, billing fields, file attachments, IoT readings, voice-transcribed notes) to share groups. Provide a UI to configure visibility per field/section with presets and inheritance, plus redaction masks for partially visible fields (e.g., last‑4 only). Enforce rules across mobile and web UI, exports, and API responses, ensuring hidden fields are excluded or redacted at render and transport layers. Maintain configuration versioning and test previews (“view as group”). Validate changes with impact analysis and provide safe defaults aligned to compliance standards.

Acceptance Criteria
Preset and Inheritance Configuration
Given an admin with Manage Consent Circles permissions and an existing field taxonomy When the admin applies the preset "Care Team Only" at a section level and saves as draft Then all fields in that section become visible to the Care Team group and hidden from other groups unless an explicit field-level override exists
Given a field with an explicit visibility override When a conflicting preset is applied at the parent section Then the field-level override takes precedence over inherited rules and presets
Given the visibility matrix UI shows effective rules When the admin hovers a field Then the UI indicates whether the rule is Explicit, Inherited, or Preset-derived and lists the count of affected share groups
Partial Field Redaction Masks
Given a phone number field configured with a last-4 redaction for the Immediate Family group When viewed on web and mobile by a member of that group Then only the last four digits are visible and the remaining digits are replaced by masking characters, with no client-side ability to reveal the full value
Given the same redaction context When exporting to CSV or PDF Then the exported value is identically masked and the unmasked value is not present anywhere in the file content or metadata
Given the same redaction context When requesting the field via API with the Immediate Family group context Then the API returns the masked value and does not include any raw/unmasked field or alternate field exposing the value
Given a group with no access to the field When requesting the field via API Then the field key is omitted entirely from the response payload
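The last-4 mask can be sketched as a server-side transform that preserves formatting while masking all but the final four digits; the mask character is an assumption:

```python
def mask_last4(value, mask_char="*"):
    """Mask every digit except the last four, leaving punctuation and
    spacing intact. Applied server-side so the unmasked value never
    reaches the client."""
    total = sum(c.isdigit() for c in value)
    shown = 0
    out = []
    for c in value:
        if c.isdigit():
            shown += 1
            # Show the digit only if fewer than 4 digits remain after it.
            out.append(c if total - shown < 4 else mask_char)
        else:
            out.append(c)
    return "".join(out)
```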
Cross-Platform UI Enforcement
Given a caregiver in the Care Team views a patient profile When a field is configured as Hidden for the Care Team group Then the field label and value do not render on web or mobile, and the value is not included in any client payload or state store
Given offline caching is enabled on mobile When a field is configured as Hidden or Redacted for the active group Then no unredacted value for that field is stored or retained in local cache or offline storage
Given server-side render and client logs are enabled When a hidden field is requested by the UI Then logs record an access-block event without logging the sensitive value or its unredacted content
Exports and API Transport Enforcement
Given an admin generates a Visit Notes PDF for a patient with mixed visibility across fields When the export completes Then hidden fields are omitted and redacted fields are masked per the active share group throughout the document, including tables and headers
Given a CSV export of schedules is requested with a non-care share group context When a column maps to a hidden field for that group Then the column (header and values) is excluded from the file
Given an API request is made without explicit share group context When the server evaluates visibility Then the minimum-necessary default policy is applied and no non-default-allowed fields are included in the response
Versioning and View as Group Preview
Given a draft visibility configuration with unpublished changes When the admin uses View as Group for Financial Contact on a selected patient Then the preview reflects the draft rules without affecting live users or current API/export behavior
Given a published configuration version v3 exists When a new version v4 is published Then v3 is retained as read-only with timestamp, actor, and change summary, and v4 becomes the active version
Given v4 causes incorrect visibility When the admin selects Rollback to v3 Then v3 becomes the active configuration and all subsequent UI/API/export requests enforce v3
Change Impact Analysis Gate
Given pending changes alter visibility for one or more fields When the admin clicks Publish Then an impact analysis report is displayed summarizing affected fields, share groups, estimated patient records impacted, and data surfaces (UI, exports, API), and Publish is disabled until the report is acknowledged
Given the analysis detects any field becoming inaccessible to all share groups When the admin proceeds to publish Then the system blocks publish and requires assigning at least one authorized group or explicitly marking the field as Intentionally Hidden with justification
Given the analysis completes without blocking conditions When the admin confirms Then publish proceeds and the full analysis report is stored alongside the new configuration version
Safe Defaults and New Field Onboarding
Given a new field IoT:HeartRate is added to the taxonomy When no visibility is configured Then the system applies safe defaults: visible to Care Team only, hidden from all external groups, with full-hide redaction
Given a new share group is created without a chosen preset When the group is saved Then the group inherits the Minimum Necessary default that reveals only non-clinical administrative fields (e.g., appointment date/time) and hides clinical and billing fields
Given a tenant enabling the feature for the first time When initialization runs Then the system seeds the recommended compliance-aligned preset and requires an admin review-and-confirm step before the first publish
Time‑Boxed Access Controls
"As a care coordinator, I want to grant access that automatically expires after a defined period so that temporary stakeholders can stay informed without long‑term exposure of client data."
Description

Enable per‑invite and per‑group access windows with start/expiry times, optional recurrence/renewal, and time zone awareness. Support read‑only vs. download permissions and session duration limits. Provide auto‑expire, manual extension, pending/paused states, and expiring access reminders. Enforce at token and session levels, including signed URL expiry for attachments. Log all expirations and changes for audit. Integrate with scheduling to suggest access windows aligned to visit plans or episode periods.

Acceptance Criteria
Per-Invite Access Window (Time Zone Aware)
- Given an invite with start 2025-10-01 08:00 and expiry 2025-10-01 20:00 in America/New_York, When the invitee authenticates from any locale, Then access is permitted only between the corresponding UTC instants; attempts outside return 403 AccessWindowClosed.
- And the invite and UI display the window in the viewer’s local time with the original time zone label (e.g., EDT), including DST adjustments.
- And if the start time is in the past at creation, Then the invite is created with state Active and remaining duration only; if expiry is in the past, creation is blocked with a validation error.
- And changes to start/expiry by an owner take effect immediately for future requests and are recorded with timestamp, actor, old/new values, and reason in the audit log.
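The UTC conversion and boundary check behind the first criterion can be sketched in a few lines of Python using the stdlib `zoneinfo` (the function names here are illustrative, not CarePulse APIs):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def window_to_utc(start_local: datetime, end_local: datetime, tz_name: str):
    """Convert a naive local start/expiry pair to UTC instants for enforcement."""
    tz = ZoneInfo(tz_name)
    return (start_local.replace(tzinfo=tz).astimezone(timezone.utc),
            end_local.replace(tzinfo=tz).astimezone(timezone.utc))

def access_allowed(now_utc: datetime, start_utc: datetime, end_utc: datetime) -> bool:
    # Requests outside [start, expiry) would surface as 403 AccessWindowClosed.
    return start_utc <= now_utc < end_utc
```

Because enforcement happens on UTC instants, the invitee's locale never changes when the window opens or closes; only the display layer localizes.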
Per-Group Defaults with Recurrence and Invite Overrides
- Given a group default window weekdays 09:00–17:00 America/Chicago with recurrence weekly until 2025-12-31, When a new invite is created for this group, Then its access windows follow the default unless the creator sets an override.
- And group-level changes update future windows of existing invites that opted in to follow group defaults, leaving past windows unchanged.
- And renewal rules (auto-renew on expiry up to 3 times) are enforced, with each renewal creating a new window and audit entry.
- And invite-level overrides narrower than group defaults are enforced; broader overrides require owner role and are tracked in the audit log.
Permission Enforcement — Read-Only vs Download
- Given an invite with permission=read-only, When the invitee views documents or attachments, Then inline viewing is allowed and all download/print/export endpoints and UI controls are disabled or return 403 PermissionDenied.
- Given an invite with permission=download, When the invitee requests a download within the window, Then a time-limited signed URL is issued and the UI shows an enabled Download control.
- And in both permission modes, write operations (POST/PUT/PATCH/DELETE) to protected resources return 403 PermissionDenied.
- All permission decisions are captured in the audit log with resource, action, decision, and reason.
Session Duration Limits, Idle Timeout, Auto-Expire, and Manual Extension
- Given a configured sessionMax=30m and idleTimeout=10m, When an invitee authenticates, Then the session ends at min(login+30m, lastActivity+10m, windowExpiry), forcing re-auth with 440 SessionExpired.
- And at T-2m before the earlier of sessionMax or windowExpiry, a warning banner appears; if an owner extends access by +30m before expiry, Then the session’s windowExpiry updates immediately without requiring re-login.
- When the window expires, Then active sessions are invalidated within 60s, signed URLs are unusable, and state becomes Expired.
- All session start/end, idle timeouts, and extensions are recorded in the audit log.
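The `min(login+30m, lastActivity+10m, windowExpiry)` rule reduces to a one-line computation; a minimal sketch (names are illustrative, not the product's implementation):

```python
from datetime import datetime, timedelta

SESSION_MAX = timedelta(minutes=30)   # sessionMax from the criteria above
IDLE_TIMEOUT = timedelta(minutes=10)  # idleTimeout

def session_end(login_at: datetime, last_activity_at: datetime,
                window_expiry: datetime) -> datetime:
    """The session ends at the earliest of the three limits."""
    return min(login_at + SESSION_MAX,
               last_activity_at + IDLE_TIMEOUT,
               window_expiry)
```

An owner extension simply moves `window_expiry` forward; re-evaluating this expression on the next request picks up the new limit without a re-login.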
Pending and Paused Access State Lifecycle
- Given an invite is sent but identity not verified, Then state=Pending and all resource and attachment requests return 401 VerifyRequired.
- When identity is verified within the window, Then state transitions to Active and access is granted.
- When an owner sets state=Paused, Then access tokens are revoked, new sessions cannot be created, and active sessions are terminated within 60s; resuming sets state back to Active without changing the original expiry.
- When an owner sets state=Revoked or expiry passes, Then access remains blocked; only a manual extension or re-issue creates a new Active window; each transition writes an audit entry.
Token and Signed URL Expiry Enforcement
- Given an Active invite, Then issued access tokens include nbf=start and exp=expiry claims; requests before nbf or after exp return 401 TokenNotValidYet/TokenExpired.
- Signed URLs for attachments expire at min(windowExpiry, urlTTL) and are single-use if configured; reuse attempts return 403 UrlReused.
- The system tolerates clock skew of up to 60s; beyond that, requests are rejected.
- Denied requests include machine-readable error codes and correlation IDs, and each denial is logged with reason and request metadata.
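The nbf/exp check with the 60-second skew tolerance might be sketched as follows (the helper name and error strings mirror the codes above but are assumptions):

```python
from datetime import datetime, timedelta, timezone

CLOCK_SKEW = timedelta(seconds=60)  # tolerance from the criteria above

def check_token_window(now: datetime, nbf: datetime, exp: datetime):
    """Return a machine-readable error code, or None when temporally valid."""
    if now < nbf - CLOCK_SKEW:
        return "TokenNotValidYet"  # maps to 401 before nbf
    if now > exp + CLOCK_SKEW:
        return "TokenExpired"      # maps to 401 after exp
    return None
```

Applying the skew on both boundaries keeps clients with slightly fast or slow clocks working while rejecting anything more than a minute out.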
Scheduling-Aligned Window Suggestions and Expiry Reminders
- Given a client with visits scheduled 2025-10-03 09:00–11:00 and 15:00–16:00 in Europe/Berlin and a suggestion rule of start=2h before first visit, end=2h after last visit, When an owner creates an invite, Then the UI suggests 2025-10-03 07:00–18:00 Europe/Berlin as the window; accepting saves those times.
- If the episode period is selected instead, Then the suggestion spans the episode start/end; suggestions respect the client’s time zone.
- Expiry reminders are sent to the invitee and owner at T-24h and T-1h before expiry via configured channels (email/push), suppressed if state is Paused or Revoked; if expiry is extended, reminders reschedule accordingly.
- Each suggestion acceptance/override and reminder sent is recorded in the audit log with timestamps and recipients.
Verified Invitations & Identity Proofing
"As an operations manager, I want recipients to verify their identity before viewing shared information so that we meet compliance requirements and prevent unauthorized access."
Description

Support invitation delivery via email/SMS with secure, expirable, single‑use links. Require recipient verification using OTP and device binding; optionally collect light identity proofing (DOB check, relationship confirmation, or document scan where policy requires). Implement rate limiting, domain allow/deny lists, and signed tokens with short TTL and rotation. Provide fallback manual verification with reason capture and supervisor approval. Store verification outcomes as evidence and restrict access until verification passes. Integrate with CarePulse user directory for existing contacts and handle NPI/proxy identifiers where applicable.

Acceptance Criteria
Secure Single-Use Invite Delivery
- Given an agency staff member sends an invite via email or SMS, when the invite is generated, then a signed token with a TTL of 15 minutes is embedded in a single-use link.
- Given the invite link is opened within its TTL, when the token is validated, then the token is marked consumed and any subsequent redemption attempts return HTTP 410 Gone with no data exposure.
- Given the invite link is opened after its TTL, when validation occurs, then access is denied and the user is presented an option to request a new link (no sensitive details shown).
- Given an invite is resent by staff or auto-resend is triggered, when the new link is issued, then all prior tokens for that invite are immediately invalidated and cannot be redeemed.
- Given the message is delivered, when inspected, then the body contains no PHI/PII beyond invitee first name and agency name, and the URL contains no PII in query parameters.
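One plausible shape for single-use, TTL-bound tokens with invalidate-on-resend is sketched below. This is an in-memory illustration only: a real deployment would persist tokens in a shared store, and the class and status-code mapping are assumptions (the spec returns a denial with a re-send prompt for expired links, modeled here as 410):

```python
import secrets
import time

INVITE_TTL = 15 * 60  # seconds, per the criteria above

class InviteLinkStore:
    """In-memory sketch; production would use a shared datastore."""
    def __init__(self):
        self._tokens = {}  # token -> {"invite_id", "issued_at", "consumed"}

    def issue(self, invite_id: str, now: float = None) -> str:
        # Re-sending invalidates every prior token for the same invite.
        for tok in [t for t, e in self._tokens.items() if e["invite_id"] == invite_id]:
            del self._tokens[tok]
        token = secrets.token_urlsafe(32)
        self._tokens[token] = {"invite_id": invite_id,
                               "issued_at": now if now is not None else time.time(),
                               "consumed": False}
        return token

    def redeem(self, token: str, now: float = None) -> int:
        now = now if now is not None else time.time()
        entry = self._tokens.get(token)
        if entry is None or entry["consumed"]:
            return 410  # unknown, superseded, or already consumed
        if now - entry["issued_at"] > INVITE_TTL:
            return 410  # expired; the UI would offer a re-send, never content
        entry["consumed"] = True
        return 200
```

Note the order of checks: unknown and consumed tokens are indistinguishable to the caller, which avoids leaking whether a link ever existed.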
OTP Verification and Device Binding
- Given an invite link is redeemed, when the verification step begins, then a 6-digit OTP is sent via the same channel as the invite (SMS for phone, email for email).
- Given the OTP is issued, when the recipient submits the code within 10 minutes and within 5 attempts, then verification succeeds and a device-bound session token is created for that browser/device.
- Given verification succeeds, when the same device accesses within 30 days, then no additional OTP is required; when a different device accesses, then OTP verification is required again.
- Given 5 incorrect OTP attempts or a 10-minute expiry, when further attempts occur, then the flow is locked for 15 minutes and an informational message is shown without revealing which field was incorrect.
- Given logout or admin revocation, when applied, then the device binding is invalidated immediately and further access requires re-verification.
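The attempt, expiry, and lockout rules above fit a small state object; a hedged sketch with the constants taken from the criteria (class and method names are illustrative):

```python
import secrets
import time

OTP_TTL = 600        # 10-minute code lifetime
MAX_ATTEMPTS = 5
LOCKOUT = 15 * 60    # 15-minute lock after exhaustion or expiry

class OtpChallenge:
    def __init__(self, now: float = None):
        self.code = f"{secrets.randbelow(10**6):06d}"  # 6-digit, zero-padded
        self.issued_at = now if now is not None else time.time()
        self.attempts = 0
        self.locked_until = 0.0

    def submit(self, code: str, now: float = None) -> str:
        now = now if now is not None else time.time()
        if now < self.locked_until:
            return "locked"
        if now - self.issued_at > OTP_TTL:
            self.locked_until = now + LOCKOUT
            return "locked"
        self.attempts += 1
        if secrets.compare_digest(code, self.code):
            return "verified"  # caller creates the device-bound session here
        if self.attempts >= MAX_ATTEMPTS:
            self.locked_until = now + LOCKOUT
        return "invalid"  # deliberately does not say *what* was wrong
```

`secrets.compare_digest` keeps the comparison constant-time, and the single `"invalid"` response satisfies the no-field-disclosure rule.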
Policy-Driven Identity Proofing
- Given the agency policy requires DOB verification, when the recipient enters DOB, then the value must match the record on file exactly; after 3 failed attempts, the flow is locked for 15 minutes and manual verification is suggested.
- Given the policy requires relationship confirmation, when the recipient selects a relationship from a controlled list, then it must match the invited role or be recorded for review; mismatches require manual approval before access.
- Given the policy requires document scan, when the recipient submits an ID image, then basic quality checks (readable text, glare below threshold) pass and name/DOB match the record; on failure, the user may retry up to 2 times before requiring manual review.
- Given any policy-based check succeeds, when all required checks complete, then the overall identity proofing result is set to Passed; otherwise it remains Pending or Failed with reasons captured.
Abuse Controls and Allow/Deny Lists
- Given invite sends are attempted, when rate limits are evaluated, then no more than 3 invites per contact per 24 hours and no more than 30 invites per staff account per 1 hour are permitted; excess attempts return HTTP 429 and are logged.
- Given OTP submissions occur, when attempts are counted, then no more than 5 OTP attempts per 15 minutes per contact and per IP/device are allowed; excess attempts return HTTP 429 and require waiting 15 minutes.
- Given email addresses are entered, when an allowlist is enabled, then the domain must be on the allowlist to proceed; when a denylist is configured, then domains on the denylist are blocked with an explicit error.
- Given phone numbers are entered, when validation runs, then numbers must be in E.164 format and country restrictions (if configured) enforced; invalid entries cannot be sent.
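All three rate limits above share the same shape: a per-key sliding window. A minimal sketch (the class is hypothetical; the per-staff and OTP limits would reuse it with different `limit`/`window_seconds` parameters):

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Per-key sliding window, e.g. 3 invites per contact per 24 hours."""
    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self._events = defaultdict(deque)  # key -> timestamps of recent events

    def allow(self, key: str, now: float = None) -> bool:
        now = now if now is not None else time.time()
        q = self._events[key]
        while q and now - q[0] >= self.window:
            q.popleft()  # drop events that have aged out of the window
        if len(q) >= self.limit:
            return False  # caller responds with HTTP 429 and logs the attempt
        q.append(now)
        return True
```

Keying by contact, staff account, or IP/device is just a choice of `key`, so one limiter class covers every rule in this criterion.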
Fallback Manual Verification with Supervisor Approval
- Given electronic verification fails or is not possible, when a staff member selects Manual Verification, then they must select a reason from a predefined list and enter a note of at least 20 characters.
- Given a manual verification is submitted, when routing occurs, then a supervisor approval task is created and assigned per the agency’s approval matrix.
- Given the supervisor reviews the task, when they approve, then the invitee’s verification status is set to Verified (Manual) and access is granted; when they reject, status remains Unverified and access is blocked.
- Given manual verification is pending, when the invitee attempts access, then access is blocked with a non-disclosing message.
Audit Trail, Evidence Storage, and Access Gating
- Given any verification attempt occurs, when it completes, then an immutable audit record is stored including timestamp, method (OTP/DOB/Doc/Manual), result, channel, initiating staff (if applicable), device fingerprint, IP, and policy version.
- Given a contact is not verified, when they attempt to view any protected fields within a Consent Circle, then the system returns HTTP 403 and no field-level data is rendered.
- Given a contact becomes verified, when they access within the allowed time window and scope, then only fields permitted by the Consent Circle are visible.
- Given verification is revoked or expires, when an active session exists, then the session is terminated within 60 seconds and subsequent access attempts are blocked until re-verification.
Directory Integration and NPI/Proxy Handling
- Given an invite is created using an email or phone that exists in the CarePulse directory, when the invitation is sent, then it links to the existing contact record and avoids creating a duplicate.
- Given an NPI or proxy identifier is provided, when validation runs, then the identifier format is validated and, if a matching directory/registry record exists, it is linked; if not, the invite send is blocked with a clear error.
- Given an invitee accepts the invite, when verification passes, then the directory is updated with relationship role and identifiers; if the contact did not exist, a new directory record is created at that time.
- Given a proxy is linked, when access rules are evaluated, then the proxy relationship is reflected in Consent Circle permissions.
Consent Approval Workflow & Audit Trail
"As a compliance officer, I want a formal approval and audit trail for all consent actions so that we can demonstrate due diligence and meet regulatory audits without manual reconciliation."
Description

Introduce a configurable approval workflow where designated approvers (e.g., clinician, care proxy, supervisor) must approve new shares, extensions, or scope changes before activation. Capture who approved, timestamps, scope diffs, and rationale, with immutable, tamper‑evident logs. Provide an approval queue, notifications, SLAs, and escalation paths. Generate audit‑ready artifacts showing what was shared, with whom, which fields, and for how long. Integrate with CarePulse reporting for one‑click export to auditors and link each event to the client record and scheduler context.

Acceptance Criteria
Multi-Approver Approval for New Share
- Given agency policy requires approvals from a Clinician and a Supervisor for group "Immediate Family", And a new share request for client C includes fields [Medications, Visit Notes] and duration 14 days for recipient R, When the caregiver submits the request, Then the system creates two pending approval tasks (assignees: designated Clinician, designated Supervisor), And sends push and email notifications to both assignees within 60 seconds, And sets the share status to "Pending" and denies access to R until activation.
- When both assignees approve within 72 hours, Then the share activates immediately, exposing only [Medications, Visit Notes] to R for exactly 14 days from activation timestamp, And the approval record includes approver IDs, timestamps (UTC), rationale text, and the approved scope snapshot.
- When any assignee rejects, Then the share remains inactive, the requester is notified within 60 seconds with the rejection rationale, and the request status is "Rejected".
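The all-assignees-must-approve / any-reject-blocks behavior reduces to a small state machine. A sketch under the assumption that notifications and audit writes happen as side effects elsewhere (names are illustrative):

```python
class ShareApproval:
    """Tracks required approver roles; the share activates only when all approve."""
    def __init__(self, required_roles):
        self.required = set(required_roles)
        self.approvals = {}       # role -> rationale
        self.status = "Pending"   # Pending | Active | Rejected

    def decide(self, role: str, approve: bool, rationale: str) -> str:
        if self.status != "Pending" or role not in self.required:
            return self.status    # terminal states never change
        if not approve:
            self.status = "Rejected"  # requester is notified with the rationale
            return self.status
        self.approvals[role] = rationale
        if set(self.approvals) == self.required:
            self.status = "Active"    # recipient access opens only now
        return self.status
```

Making `Rejected` terminal means a later approval from the other role cannot resurrect the request, matching the criterion.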
Approval Queue With Diff and Rationale
- Given an approver opens the Approval Queue, When they view a pending item, Then the item displays requester, client, share group, recipients, proposed field list, proposed duration, and a before/after diff vs current access.
- And Approve, Reject, and Request Changes actions are available.
- And Approve and Reject require a rationale of at least 10 characters; otherwise save is disabled.
- And taking any action writes an audit log entry and updates item status accordingly within 2 seconds (p95).
- And Request Changes returns the item to the requester with the comment and status "Changes Requested".
SLA Timers and Escalations for Pending Approvals
- Given an agency SLA of first escalation at 24 hours to role "Supervisor" and auto-cancel at 72 hours, And an approval request remains Pending, When 24 hours elapse without a decision, Then the item is marked "Escalated", a notification is sent to all users with role "Supervisor" within 5 minutes, and an audit entry records the escalation.
- When 72 hours elapse without a decision, Then the request auto-cancels, the requester is notified within 5 minutes, and an audit entry records the SLA breach and auto-cancel action.
Tamper-Evident Audit Log Capture
- Given any approval, rejection, change request, activation, extension, or revocation event occurs, When the event is committed, Then an immutable log entry is created containing event type, actor ID, client ID, consent circle/group, recipients, before/after field scope, before/after duration, rationale, timestamps (created_at, effective_at), IP/device, and scheduler context IDs (visit ID, route ID if applicable).
- And the entry includes a SHA-256 content hash and previous entry hash to form a verifiable chain.
- And calling GET /audit/verify?client_id={id}&range={start,end} returns {"valid": true} for an unaltered chain.
- And attempts to modify or delete log entries via API return 405 and create a "LogMutationBlocked" audit event.
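The content-hash-plus-previous-hash rule is a classic hash chain. A minimal stdlib sketch (field names are illustrative; a real store would also handle concurrency and persistence):

```python
import hashlib
import json

GENESIS = "0" * 64

def _digest(body: dict) -> str:
    # Canonical JSON so the hash is reproducible across writers.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_entry(log: list, event: dict) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    body = dict(event, prev_hash=prev)
    log.append(dict(body, hash=_digest(body)))

def verify_chain(log: list) -> bool:
    prev = GENESIS
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body.get("prev_hash") != prev or _digest(body) != entry["hash"]:
            return False  # content or linkage was altered
        prev = entry["hash"]
    return True
```

Editing any earlier entry changes its digest, which breaks every later `prev_hash` link, so the `/audit/verify` endpoint can detect tampering with a single pass.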
One-Click Audit Export Integrated With Reporting
- Given a user with Reporting permission selects a client or agency and a date range, When they click "Export Audit", Then a PDF and CSV are generated containing who approved/denied, recipients, fields shared, duration, rationale, timestamps, and log hashes.
- And the export is available for download within 30 seconds for up to 10,000 events, with a signed URL that expires in 7 days.
- And each row links to client and scheduler context IDs.
- And field values are masked unless the user also has "Export PHI" permission.
- And the export metadata includes a chain verification summary (valid true/false, first/last hash).
Extension and Scope Change Re-Approval
- Given an active share exists for client C and recipient R, When a user proposes to extend the end date or add/remove fields, Then a new approval request is created referencing the original share, showing a diff of proposed vs current scope and duration.
- And R’s access remains unchanged until approval.
- And if approved, the share updates immediately to the approved duration/fields; if rejected, no changes are applied.
- And proposals exceeding agency policy limits (e.g., >30 days or disallowed fields) are blocked client-side and server-side with a clear validation error.
Revocation and Access Termination Logging
- Given a user with revoke permission selects an active share for recipient R, When they confirm revocation with a rationale, Then all active tokens/sessions for R related to that share are invalidated within 60 seconds.
- And subsequent API calls by R to protected fields return 403 within 60 seconds of revocation.
- And the share moves from Active to Revoked in the UI immediately.
- And notifications are sent to R and the requester within 60 seconds.
- And a revocation audit entry is written capturing who revoked, when, rationale, and affected scope.
Revocation & Real‑Time Access Propagation
"As a care proxy, I want the ability to revoke someone’s access instantly so that sensitive information is no longer visible when circumstances change."
Description

Allow immediate revocation of group membership, individual invites, or entire groups with instant token invalidation and cache purge across web, mobile, and API. Propagate changes to open sessions, revoke signed URLs, and trigger redaction on refresh. Support reason codes, notifications to affected parties, and optional silent revocation. Provide undo within a grace period where policy allows, with full logging. Ensure resilience with eventual consistency safeguards and background reconciliation jobs to catch stragglers. Reflect revocation in all exports and activity logs.

Acceptance Criteria
Immediate Individual Revocation Across Channels
- Given an active member or invitee has valid access/refresh tokens, at least one open session (web or mobile), and at least one active signed URL, When an admin revokes that individual from a Consent Circle, Then all access and refresh tokens for that identity are invalidated within 5 seconds p95 and 30 seconds p100.
- And all signed URLs for that identity return 403 within 5 seconds p95.
- And in-memory, edge, and CDN caches holding that identity’s scoped resources are purged within 10 seconds p95.
- And subsequent API calls using revoked tokens return 401/403 with error_code=ACCESS_REVOKED.
- And open sessions display a revocation banner and redact protected fields on the next UI state refresh within 5 seconds p95.
Full Group Revocation and Propagation
- Given a share group has multiple active members and pending invites across web, mobile, and API clients, When an admin revokes the entire group, Then all memberships and pending invites in that group are invalidated within 5 seconds p95 and 30 seconds p100.
- And all group-scoped signed URLs return 403 within 5 seconds p95.
- And open sessions for affected identities downgrade visibility to exclude all fields governed by the group within 10 seconds p95.
- And an audit trail entry records actor, timestamp, group_id, member_ids, invite_ids, and propagation metrics.
- And the group state changes to revoked and cannot be used to grant new access.
Reason Codes, Notifications, and Silent Revocation
- Given revocation requires a reason code from an allowed set and an optional notification preference, When an admin confirms revocation with notify=true, Then affected identities receive an in-app notification immediately and email/SMS within 2 minutes p95 without exposing PHI, including revocation_id and support contact.
- And delivery outcomes (sent, bounced) are logged per channel.
- When an admin confirms revocation with notify=false (silent), Then no external notifications are sent.
- And all actions are captured in the audit log with fields: actor_id, reason_code, notify_flag, affected_scope, timestamp, revocation_id.
Undo Within Grace Period
- Given the organization policy defines a reversible grace period (e.g., 10 minutes) and the revocation falls within that window, When an admin selects Undo for the revocation, Then the system verifies that no superseding policy or explicit block prevents restoration.
- And restores access by issuing new tokens and re-enabling group or membership references within 10 seconds p95.
- And previously issued signed URLs remain revoked; clients must request new URLs post-undo.
- And an audit entry links undo_id to the original revocation_id with actor, timestamp, and result (restored|blocked).
- And affected parties are notified of restoration only if notify_on_undo=true per policy.
Open Session Handling and Redaction on Refresh
- Given affected users have active sessions on web and mobile, including foreground and background states, When revocation occurs, Then clients receive a real-time invalidation event (WebSocket/push) and apply field-level redaction immediately on UI refresh within 5 seconds p95.
- And any subsequent protected API calls return 401/403 with correlation_id tied to revocation_id.
- And offline clients enforce revocation at next sync or within a maximum TTL of 15 minutes, whichever comes first.
- And attempts to use refresh tokens after revocation are rejected server-side and device-stored tokens are cleared on next app resume.
Eventual Consistency Safeguards and Reconciliation
- Given revocation messages may fail or be delayed in distributed components (edge caches, queues, mobile push), When propagation anomalies are detected (e.g., access post-revocation), Then a reconciliation job runs at least every 60 seconds to revalidate revocation state across caches, signed URLs, and session stores.
- And the job is idempotent and retries with exponential backoff on transient errors.
- And metrics report p95 propagation time, straggler_rate < 0.1%, and max_exposure_window <= 30 seconds for online clients.
- And alerts trigger within 2 minutes if straggler_rate exceeds threshold for 5 consecutive intervals.
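The idempotent-retry-with-exponential-backoff requirement might be sketched as follows (the function, the `TransientError` type, and the injectable `sleep` are all assumptions for illustration):

```python
import time

class TransientError(Exception):
    """Raised by downstream stores on temporary failures."""

def reconcile(check_stragglers, revoke, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Sweep for identities that still have access post-revocation and
    re-revoke them; safe to retry because revoke() is idempotent."""
    for attempt in range(max_attempts):
        try:
            for identity in check_stragglers():
                revoke(identity)
            return True
        except TransientError:
            sleep(base_delay * 2 ** attempt)  # exponential backoff: 1s, 2s, 4s, ...
    return False  # surfaces as an alert if stragglers persist
```

Injecting `sleep` keeps the job testable, and because `revoke()` is idempotent, a retry that re-processes an already-fixed identity does no harm.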
Exports and Activity Logs After Revocation
- Given exports (on-demand and scheduled) and their signed URLs may exist before revocation, When revocation is executed, Then previously issued export signed URLs are revoked and return 403 within 5 seconds p95.
- And any new exports generated after revocation exclude fields no longer permitted by the updated policies.
- And attempting to access pre-revocation exports requires re-authorization with a fresh policy check.
- And the activity log records revocation, URL invalidations, affected exports, and any subsequent denied access attempts with correlation to revocation_id.
Consent Versioning & E‑Signature Capture
"As a clinician, I want signed, versioned records of what was consented to and when so that I can confidently share updates and retrieve proof during reviews or audits."
Description

Maintain versioned consent records that capture consent scope, field mappings, recipients, durations, and legal language at the time of activation. Support e‑signature from clients or authorized proxies, multi‑language templates, and accessible presentation. Generate immutable PDFs and machine‑readable JSON, linked to the client and share group. Surface historical versions with effective and expiry dates and provide “what changed” diffs. Ensure signatures and versions are available in one‑click compliance reports and can be re‑presented to recipients on demand.

Acceptance Criteria
Versioned Consent Capture at Activation
- Given a caregiver activates a consent for a client and share group, When they submit the consent form with scope, field mappings, recipients, duration, and selected legal language template version, Then the system creates a new immutable consent version with a unique version ID and activation timestamp.
- And stores the exact legal language text as of activation.
- And stores the explicit field mapping list and recipient list snapshot.
- And stores effective start and calculated expiry date per duration.
- And links the consent version to the client and share group.
E-Signature by Client or Authorized Proxy
- Given a consent version is ready for signature and a recipient is identified as client or authorized proxy, When the recipient completes identity verification and signs on mobile or desktop, Then the system captures signature image, full name, verification method, signer role, locale, IP, device, and timestamp.
- And binds the signature to the consent version and page coordinates.
- And records the consent acceptance checkbox and disclosure acknowledgment.
- And prevents submission if required fields are missing or verification fails.
- And marks the consent version status as Signed.
Accessible, Multi-Language Presentation
- Given a consent is presented for signing, When the recipient selects a language or one is auto-detected, Then the consent text, labels, and disclosures render using the selected template language.
- And all interactive elements meet WCAG 2.1 AA for contrast, keyboard navigation, focus order, and screen-reader labels.
- And the font size can be increased to 200% without loss of content or functionality.
- And language and accessibility settings are captured in the audit log.
Immutable PDF Generation
- Given a consent version is Signed, When the system generates artifacts, Then it produces a paginated PDF containing version ID, effective and expiry dates, signer details, signature, and legal text.
- And applies a non-editable flag and embeds a SHA-256 hash of the PDF contents in metadata.
- And stores the PDF in write-once storage with the computed hash and storage URI.
- And subsequent regeneration yields an identical hash.
Machine-Readable JSON Artifact
- Given a consent version is Signed, When the system generates artifacts, Then it produces a JSON document containing version ID, client ID, share group ID, scope, field mappings, recipients, template language and ID, effective and expiry dates, signature metadata, and artifact hashes.
- And the JSON validates against the published schema version.
- And the JSON is stored and retrievable via API with authorization.
- And the JSON and PDF hashes cross-reference each other.
Historical Versions & What-Changed Diff
- Given multiple signed versions exist for the same client and share group, When a user views consent history, Then the system lists versions with effective and expiry dates, status, and language.
- And selecting two versions shows a field-level diff highlighting added/removed/changed scopes, mappings, recipients, durations, and legal language.
- And the diff excludes unchanged fields and is exportable as PDF.
- And the history view respects user permissions.
One-Click Compliance Report Inclusion & Re-Presentation
- Given a compliance auditor requests artifacts for a client, When a user clicks Generate Compliance Pack, Then the system bundles the latest signed consent PDF, JSON, history index, and diff summaries into a single downloadable package.
- And includes a verification sheet with artifact hashes and a generated-on timestamp.
- And recipients can be re-presented the latest consent via a Resend action that issues a new expirable, verifiable invite linked to the current version.
- And a re-presentation event is logged with who, when, and delivery channel.

SafeLink OTP

Delivers read‑only daily SMS summary links that are one‑tap, one‑time, and time‑bound, protected by a simple PIN/OTP. Links adapt to the recipient’s device, auto‑expire, and log view receipts so staff know when updates were seen. Families get effortless, secure access without installing an app; agencies keep tight control and auditability.

Requirements

Time‑Bound Single‑Use Link Generation
"As a family member, I want a secure one-tap link that expires after I view it so that I can quickly check today’s update without risking ongoing access or data exposure."
Description

Generate signed, single-use URLs for daily read-only care summaries with configurable time-to-live (e.g., 24 hours by default) and immediate revocation on consumption or when a new link is issued. Tokens embed least-privilege claims (patient, date scope, content flags) and are resistant to tampering via rotating signing keys. Handle clock skew, regenerate flows, and edge cases (duplicate taps, offline opens) while enforcing strict cache-control headers and noindex to prevent persistence or discovery. Integrates with CarePulse scheduling and notes to assemble the minimal PHI snapshot at link creation time, not on open, to ensure consistency and auditability.

Acceptance Criteria
Configurable TTL with Default 24 Hours
- Given a daily read-only care summary link is created without a custom TTL, When the recipient opens the link within 24 hours of creation (server time), Then the link returns HTTP 200 with the summary content and no PHI is exposed beyond the defined snapshot, And the response includes an absolute expiry timestamp in headers.
- Given a link is created with a custom TTL of 2 hours, When the recipient opens the link after 2 hours + 1 minute (server time), Then the link returns HTTP 410 Gone and reveals no PHI.
- Given client and server clocks differ by up to ±5 minutes, When the recipient opens the link near the expiry boundary, Then expiry is evaluated using server time and the link remains valid until the server-side TTL elapses.
- Rule: Expired links must return HTTP 410 Gone and display a generic expired message without PHI.
Single-Use Consumption and Immediate Revocation
- Given a single-use link is created and not yet opened, When the recipient successfully opens the link, Then the link is marked as consumed immediately and a view receipt (timestamp, IP, user-agent) is recorded, And the response is HTTP 200 with the snapshot content.
- Given the same link has been consumed, When it is opened again from any device or browser, Then the response is HTTP 410 Gone with no PHI in body or headers, And the audit log records a blocked re-open attempt.
Auto-Revocation on New Link Issuance for Same Scope
- Given Link A exists for recipient R, patient P, date D, When Link B is issued for the same recipient R, patient P, date D, Then Link A is immediately revoked and cannot be used.
- And opening Link A returns HTTP 410 Gone with reason 'superseded'.
- And opening Link B within its TTL returns HTTP 200.
- And an audit event records revocation of Link A with reference to Link B.
Least-Privilege Claims and Scope Enforcement
- Given a token contains claims patient_id=P, date_scope=D (ISO date), and content_flags=F, When the link is opened, Then only snapshot content for patient P on date D and fields allowed by F are returned, And no endpoints other than the read-only summary are accessible with this token.
- Given a request attempts to access a different patient or date than in the token, When the server evaluates the request, Then the response is HTTP 403 Forbidden with no PHI in the body.
- Rule: Tokens missing required claims (patient_id, date_scope, content_flags) result in HTTP 400 and no PHI.
Tamper Resistance and Signing Key Rotation
- Given a link token signed with the current active private key, When the token is unmodified, Then signature verification passes and the request may proceed subject to other checks.
- Given the token payload or signature is tampered with, When the link is opened, Then signature verification fails and the response is HTTP 401 Unauthorized with no PHI, And the audit log records a signature failure.
- Given signing keys are rotated and a new key becomes active, When an existing unexpired token signed with the previous key is opened, Then verification succeeds using the previous public key and the link remains valid until its TTL.
- Rule: Tokens must include a key identifier (kid) and algorithm; tokens signed with an unknown kid or a disabled algorithm must return HTTP 401.
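Keeping retired keys in a key ring (looked up by `kid`) is what lets unexpired tokens survive rotation. A toy HMAC sketch of that lookup — a production system would use a vetted JWT/JWS library and asymmetric keys rather than this hand-rolled scheme:

```python
import hashlib
import hmac

# Key ring: rotated-out keys stay verifiable until their tokens expire.
KEYS = {"k1": b"previous-secret", "k2": b"current-secret"}
ALLOWED_ALGS = {"HS256"}

def sign(payload: bytes, kid: str) -> bytes:
    return hmac.new(KEYS[kid], payload, hashlib.sha256).digest()

def verify(payload: bytes, signature: bytes, kid: str, alg: str) -> int:
    """200 on a valid signature; 401 for unknown kid, disabled algorithm,
    or a tampered payload/signature."""
    if alg not in ALLOWED_ALGS or kid not in KEYS:
        return 401
    expected = hmac.new(KEYS[kid], payload, hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    return 200 if hmac.compare_digest(expected, signature) else 401

body = b'{"patient_id":"P1","date_scope":"2024-05-01"}'
sig = sign(body, "k1")                                 # signed before rotation
assert verify(body, sig, "k1", "HS256") == 200         # old key still verifies
assert verify(body + b"x", sig, "k1", "HS256") == 401  # tampered payload
assert verify(body, sig, "k9", "HS256") == 401         # unknown kid
assert verify(body, sig, "k1", "none") == 401          # disabled algorithm
```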
No Caching and No Indexing Controls
- Given the read-only summary is served, When the response is returned, Then headers include: Cache-Control: no-store, no-cache, max-age=0, must-revalidate, private; Pragma: no-cache; Expires: 0; X-Robots-Tag: noindex, nofollow, And the HTML includes <meta name="robots" content="noindex,nofollow">.
- Given the link has been consumed or expired, When the user navigates Back/Forward or refreshes, Then no PHI is displayed and the response is HTTP 410 Gone.
- Given the device is offline, When the link is opened, Then no PHI is rendered and a generic offline/expired message is shown, And no Service Worker is registered for the path and no PHI is stored in localStorage/sessionStorage.
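The header set above is fixed, so it can live as one constant applied to every summary response. A sketch (the `apply_no_cache` helper is an assumption, not a CarePulse API):

```python
NO_CACHE_HEADERS = {
    "Cache-Control": "no-store, no-cache, max-age=0, must-revalidate, private",
    "Pragma": "no-cache",
    "Expires": "0",
    "X-Robots-Tag": "noindex, nofollow",
}

ROBOTS_META = '<meta name="robots" content="noindex,nofollow">'

def apply_no_cache(headers: dict) -> dict:
    """Merge the anti-caching / anti-indexing headers onto a response,
    overriding any permissive values already present."""
    out = dict(headers)
    out.update(NO_CACHE_HEADERS)
    return out

resp = apply_no_cache({"Content-Type": "text/html", "Cache-Control": "max-age=3600"})
assert resp["Cache-Control"].startswith("no-store")   # permissive value overridden
assert resp["X-Robots-Tag"] == "noindex, nofollow"
```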
Snapshot Consistency at Creation Time
- Given a link is created at T0, When underlying scheduling or notes data changes after T0, Then opening the link returns the immutable snapshot assembled at T0 and not the updated data, And only the minimal fields allowed by content_flags are present; no additional PHI fields are returned, And an audit record persists snapshot_id and created_at=T0.
- Rule: The snapshot is assembled at link creation, not at open; identical inputs at T0 must produce an identical snapshot hash.
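The "identical inputs must produce an identical hash" rule requires canonical serialization before hashing, since dict key order is otherwise unstable. A minimal sketch of one way to do that:

```python
import hashlib
import json

def snapshot_hash(snapshot: dict) -> str:
    """Canonical SHA-256 of a snapshot: sorted keys and fixed separators so
    that identical inputs at T0 always yield an identical hash."""
    canonical = json.dumps(snapshot, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

a = {"patient_id": "P1", "vitals": {"bp": "120/80"}, "created_at": "2024-05-01T08:00Z"}
b = {"vitals": {"bp": "120/80"}, "created_at": "2024-05-01T08:00Z", "patient_id": "P1"}
assert snapshot_hash(a) == snapshot_hash(b)  # key order does not affect the hash
assert snapshot_hash(a) != snapshot_hash({**a, "vitals": {"bp": "130/80"}})
```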
SMS Delivery and Device-Adaptive Deep Link
"As an operations manager, I want daily summary links reliably delivered by SMS and to adapt to any device so that families can access updates without installing an app."
Description

Send daily summary links via compliant SMS using a vetted gateway with delivery status callbacks, per-country sender ID rules, STOP/HELP handling, and rate limiting. Format messages with short branded URLs, preview text, and fallback long URLs when link wrapping is blocked. On open, detect device and render the optimal view (mobile-first, desktop-capable) without requiring app install, supporting default browsers and in-app viewers. Queue and retry undelivered messages, surface delivery outcomes to staff, and localize content where agency templates require.

Acceptance Criteria
Daily Summary SMS Sent with Delivery Callbacks and Localization
- Given a scheduled daily summary is due and the recipient has not opted out, When the send job runs, Then the system transmits an SMS via the configured, compliant gateway using the agency’s approved sender profile.
- Given localized templates exist for the recipient’s locale, When the message is generated, Then the SMS content and dates/numbers are localized per template and locale, with all merge fields populated.
- Given the gateway returns status webhooks, When callbacks are received, Then the message record updates to Delivered, Failed, or Undeliverable with the gateway code and timestamp within 60 seconds of callback receipt.
- Given duplicate prevention, When a message with the same recipient, template, and schedule window is already Sent or Delivered, Then no additional SMS is sent.
Per-Country Sender ID and STOP/HELP Compliance
- Given a recipient country, When sending an SMS, Then the system uses the country-appropriate pre-registered sender type and ID; if no compliant sender is available, the message is not sent and an error is logged and surfaced to staff.
- Given the recipient replies with STOP or a recognized opt-out keyword, When the inbound message is processed, Then the number is immediately marked opted-out, a confirmation SMS is sent, future sends are suppressed, and an audit entry is recorded.
- Given the recipient replies with HELP, When the inbound message is processed, Then an auto-reply is sent containing the agency name, support contact, and STOP/HELP instructions, and the interaction is logged.
- Given the recipient replies START/UNSTOP in a market where it is supported, When processed, Then the number is re-subscribed and a confirmation is sent.
- Given carrier-level opt-outs detected by the gateway, When a provider-side STOP event occurs, Then the system synchronizes the opt-out status within 60 seconds.
Branded Short URL with Preview and Fallback
- Given link generation, When composing the SMS, Then the message contains a branded short URL using the agency’s domain and a human-readable preview line.
- Given environments that block or rewrite short links, When composing the SMS, Then a fallback long URL to the same resource is included; at least one of the URLs is clickable in default SMS apps.
- Given SMS segmentation, When the message exceeds the configured segment limit (e.g., 2 segments), Then preview text is truncated without removing either URL.
- Given shortener service degradation, When short URL creation fails, Then the system sends the SMS with only the long URL and flags the message record accordingly.
- Given URL resolution, When the short URL is opened, Then it redirects to the deep link with all required parameters intact.
Queue, Retry, and Rate Limiting for SMS Delivery
- Given agency rate limits, When a batch send exceeds the configured threshold, Then additional messages are enqueued FIFO and dispatched without breaching the limit.
- Given a temporary gateway failure or throttling response, When sending fails with a retryable code, Then the system retries with exponential backoff up to 3 attempts; on non-retryable codes, it stops retrying.
- Given retries, When a message send is re-attempted, Then idempotency ensures the recipient does not receive duplicates.
- Given queued or failed sends, When staff view the message list, Then each message shows its current state (Queued, Retrying, Delivered, Failed) and last error/retry ETA.
- Given final failure after all retries, When no delivery is achieved, Then an alert is surfaced to staff with the failure reason code.
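The retry rules above reduce to two small decisions: how long to wait, and whether to try again at all. A sketch, assuming an illustrative set of retryable gateway codes and a base delay of 2 seconds (both would be configuration in practice); duplicate suppression would additionally attach an idempotency key to each send:

```python
RETRYABLE = {"throttled", "gateway_timeout"}  # assumed retryable gateway codes

def backoff_schedule(base_seconds: float = 2.0, attempts: int = 3):
    """Exponential backoff delays for up to 3 retries: 2s, 4s, 8s."""
    return [base_seconds * (2 ** i) for i in range(attempts)]

def should_retry(error_code: str, attempt: int, max_attempts: int = 3) -> bool:
    """Retry only retryable gateway codes, and only up to the attempt cap."""
    return error_code in RETRYABLE and attempt < max_attempts

assert backoff_schedule() == [2.0, 4.0, 8.0]
assert should_retry("throttled", 1) is True
assert should_retry("invalid_number", 1) is False  # non-retryable: stop immediately
assert should_retry("throttled", 3) is False       # attempt cap reached
```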
Device-Adaptive Deep Link Rendering Without App Install
- Given a valid deep link is opened, When accessed from a phone or tablet, Then the mobile-first read-only summary renders correctly within in-app viewers and default browsers without prompting for app install.
- Given a valid deep link is opened, When accessed from a desktop browser, Then a desktop-capable view renders with equivalent content and functionality.
- Given restrictive webviews (e.g., blocked cookies/localStorage), When the link is opened, Then the page renders the read-only summary using token-based context without requiring login.
- Given an unsupported or legacy browser is detected, When the link is opened, Then a minimal responsive fallback view is served with the essential summary and a notice to upgrade.
Delivery Outcomes and View Receipts Visible to Staff
- Given tracking is enabled, When the recipient successfully loads the summary, Then a view receipt is recorded with timestamp, device type, and IP region.
- Given multiple opens, When the link is accessed again, Then the system increments the view count and updates the last viewed timestamp without duplicating the send record.
- Given staff review communications, When viewing the recipient’s message log, Then statuses Sent, Delivered, Failed, and Viewed are displayed with times and gateway codes.
- Given a message remains unviewed for 24 hours or has Failed status, When staff check the dashboard, Then the message is flagged for follow-up.
PIN/OTP Verification Gate
"As a privacy-conscious family caregiver, I want to enter a quick code before viewing the link so that I know the information is protected even if the text is forwarded."
Description

Require a simple 4–6 digit code before the summary loads, supporting both on-demand OTP via SMS and optional pre-shared PIN per contact. Provide frictionless entry (numeric keypad, accessible focus) with configurable attempt limits, exponential lockouts, IP/device throttling, and secure resend flows that do not disclose account existence. Store only salted PIN hashes, never raw codes, and prevent code reuse within TTL. Maintain user-centric error messaging and support fallback verification for shared phones where allowed by agency policy.

Acceptance Criteria
Successful Verification via OTP or Pre‑Shared PIN
- Given a valid SafeLink summary URL within its active TTL, When the recipient taps the link on a mobile device, Then the verification screen auto-focuses the first input and invokes a numeric keypad.
- Given the recipient enters a 4–6 digit code, When the code matches the current valid OTP or the configured pre‑shared PIN, Then access is granted to the read-only summary and a view receipt is logged within 1s of the server decision.
- Given access is granted, Then a session token is issued, bound to the device/browser, and it expires no later than the link TTL.
Brute-Force Protection and Rate Limiting
- Given incorrect code entries, When the user reaches the configurable attempt limit (default 5 attempts within 10 minutes), Then the system enforces an exponential lockout starting at 30s and doubling per violation up to 15 minutes.
- Given verification attempts from the same IP/device exceed the configurable rate (default 10 attempts per minute), When the threshold is crossed, Then further attempts are throttled with a 429/too-many-attempts response for at least 60s.
- Then all lockout and throttle responses use generic, non-disclosing messaging and do not indicate whether a PIN/OTP exists.
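The lockout curve described above (30s base, doubling per violation, 15-minute cap) is a one-line formula. A sketch with the defaults from the criteria; the function name is illustrative:

```python
def lockout_seconds(violation: int, base: int = 30, cap: int = 15 * 60) -> int:
    """Exponential lockout once the attempt limit is hit: 30s on the first
    violation, doubling each time, capped at 15 minutes."""
    return min(base * (2 ** (violation - 1)), cap)

# 30s, 60s, 120s, 240s, 480s, then pinned at the 900s (15 min) cap.
assert [lockout_seconds(v) for v in (1, 2, 3, 4, 5, 6)] == [30, 60, 120, 240, 480, 900]
assert lockout_seconds(10) == 900  # never exceeds the cap
```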
Secure Resend Flow with Non-Disclosure
- Given a user is on the verification screen, When they request a new OTP, Then the system rate-limits resends (default max 1 every 30s, 3 per hour) and returns a generic confirmation regardless of account existence.
- Then a new OTP is generated and the previously issued OTPs for that link/contact are invalidated immediately.
- Then the SMS contains no summary content or PII beyond the masked recipient and instructions, and all resend events are audit-logged (time, channel, masked recipient, IP/device).
One-Time Code and TTL Enforcement
- Given an OTP is issued, When it is used successfully once, Then any subsequent use of the same OTP is rejected even if within TTL, with a generic invalid-code response.
- Given an OTP or link exceeds its TTL (configurable, default 24 hours for daily summaries), When any verification is attempted, Then access is denied with a generic expired-link response and a path to request a new code.
- Then only numeric codes of length 4–6 are accepted; any other input is rejected client- and server-side.
Secure PIN Storage and Sensitive Data Handling
- Given a pre‑shared PIN is created or updated, Then only a salted, iterated hash is stored with a per-record salt; no raw PIN values are persisted or logged.
- Then OTP values are never persisted in logs and are stored only ephemerally for validation; any transient storage is encrypted in transit and at rest and expires at or before TTL.
- Then application, analytics, and crash logs redact all codes and mask phone numbers (e.g., last 2–4 digits only).
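The "salted, iterated hash" requirement maps directly onto PBKDF2 from the Python standard library. A sketch — the iteration count shown is illustrative, and a production system would pick the work factor from current guidance:

```python
import hashlib
import hmac
import os

ITERATIONS = 210_000  # illustrative PBKDF2 work factor

def hash_pin(pin: str, salt: bytes = None):
    """Derive a per-record salt plus an iterated PBKDF2-HMAC-SHA256 hash.
    Only (salt, digest) is stored; the raw PIN is never persisted or logged."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, ITERATIONS)
    return salt, digest

def verify_pin(pin: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, stored = hash_pin("4921")
assert verify_pin("4921", salt, stored)
assert not verify_pin("0000", salt, stored)
```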
Accessible, User-Centric Error Messaging
- Given an invalid, expired, throttled, or locked-out state, When the error is shown, Then the message is clear, human-friendly, and non-disclosing (does not confirm account or method) and suggests next steps.
- Then the error is announced via an ARIA live region within 500ms, focus moves to the message container, and the flow is fully operable by keyboard and screen readers.
- Then the code input enforces digit-only entry, supports paste, triggers the numeric keypad on mobile, and meets WCAG 2.2 AA color contrast and focus visibility.
Policy-Gated Fallback for Shared Phones
- Given the agency policy enables Shared Phone Fallback, When the user selects "Using a shared phone" or fails OTP within the first attempt window, Then a configured secondary challenge is presented (e.g., agency-configured knowledge factor) without revealing account existence.
- When the secondary challenge is answered correctly within a configurable max of 3 attempts, Then access is granted and the method is recorded as "fallback" in audit logs; otherwise, lockout and non-disclosing errors apply.
- When the policy is disabled, Then no fallback UI or endpoints are exposed.
Read‑Only Care Summary View
"As a family member on a slow connection, I want a lightweight, read-only summary so that I can see today’s status instantly without installing anything."
Description

Render a responsive, accessible, read-only summary showing today’s visit status, arrival window, key vitals/observations, and next steps with minimal PHI. Disable editing, form inputs, and file download endpoints; set strict no-store caching and content-security policies; obfuscate patient identifiers beyond what’s essential. Support large text, screen readers, high-contrast themes, and multilingual copy where configured. Ensure the view degrades gracefully on low bandwidth and avoids heavy assets so one tap loads quickly on budget devices.

Acceptance Criteria
Read-Only UI and Endpoint Lockdown
- Given a valid SafeLink OTP read-only session, when the care summary loads, then the DOM contains no input, textarea, select, or elements with contenteditable="true".
- Given the OTP token, when any POST, PUT, PATCH, or DELETE request is attempted against care records, then the API responds 401 or 403 and no data is modified (verified by a subsequent GET returning unchanged data).
- Given the OTP token, when any file download endpoint (e.g., /attachments/*) is requested, then the server responds 403 and returns no file content.
- Given the page is open, when a user pastes or drags a file into the view, then no upload request is sent and no file chooser is opened.
Security Headers: No-Store and CSP
- Given an HTTP 200 response for the read-only summary, then response headers include Cache-Control: no-store and Pragma: no-cache.
- Given the response, then a Content-Security-Policy header is present with at least default-src 'self'; object-src 'none'; frame-ancestors 'none'; script-src 'self' with a nonce; and it does not include 'unsafe-inline' or 'unsafe-eval'.
- Given CSP is enforced, when an inline script is injected, then the browser blocks it (CSP violation) and the page remains functional.
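A nonce-based CSP requires generating a fresh nonce per response and echoing it in both the header and the page's script tags. A minimal sketch of the header construction (the helper name is an assumption):

```python
import base64
import os

def csp_header():
    """Build a per-response CSP with a fresh script nonce and no
    unsafe-inline / unsafe-eval. Returns (nonce, policy); the nonce must
    also be placed on every legitimate <script> tag in the page."""
    nonce = base64.b64encode(os.urandom(16)).decode()
    policy = (
        "default-src 'self'; object-src 'none'; frame-ancestors 'none'; "
        f"script-src 'self' 'nonce-{nonce}'"
    )
    return nonce, policy

nonce, policy = csp_header()
assert f"'nonce-{nonce}'" in policy
assert "unsafe-inline" not in policy and "unsafe-eval" not in policy
assert "frame-ancestors 'none'" in policy  # blocks embedding/clickjacking
```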
Minimal PHI and Identifier Obfuscation
- Given a read-only summary, then the patient display name shows initials only (e.g., J. D.) and any internal IDs are masked to the last 4 characters (e.g., ••••1234).
- Then the view never renders full date of birth, full street address, SSN, full MRN, or caregiver last names.
- Given the backing API payload for the OTP view, then it excludes the above disallowed fields.
- Given an automated content scan of the rendered page, when checking text nodes, then no patterns matching full DOB (YYYY-MM-DD or MM/DD/YYYY), SSN (XXX-XX-XXXX), or 9+ consecutive digits are present.
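The automated content scan above is essentially a small set of regexes run over rendered text nodes. A sketch of the forbidden patterns as listed in the criteria (this is a test-side check, not a substitute for excluding the fields from the payload):

```python
import re

# Patterns the acceptance criteria forbid in rendered text nodes.
PHI_PATTERNS = [
    re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),  # full DOB, YYYY-MM-DD
    re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),  # full DOB, MM/DD/YYYY
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN, XXX-XX-XXXX
    re.compile(r"\d{9,}"),                 # 9+ consecutive digits (MRN-like)
]

def contains_phi(text: str) -> bool:
    return any(p.search(text) for p in PHI_PATTERNS)

assert contains_phi("DOB 1950-03-14")               # full date of birth
assert contains_phi("SSN 123-45-6789")
assert contains_phi("MRN 123456789")                # 9 consecutive digits
assert not contains_phi("J. D., ID ••••1234")       # masked identifiers pass
```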
Required Content, Layout, and Empty States
- Given today’s schedule exists, when the summary loads, then it displays visit status (Scheduled|En Route|Arrived|Completed|Missed), arrival window (start–end) in the recipient’s local time zone, last sync timestamp, key vitals with units and capture time, observations, and next steps.
- Given any of those data points are unavailable, then the view shows explicit “Not available” placeholders without exposing null/undefined and without visual layout breakage.
- Given times are shown, then they are localized to the link’s configured locale and time zone.
- Given the local day boundary is crossed, when the view is refreshed, then it switches to the new day’s summary.
Accessibility, Localization, and Theming
- Given the view loads, then it meets WCAG 2.1 AA for keyboard navigation (no keyboard traps; visible focus), semantics (landmarks and headings), and color contrast (>= 4.5:1 for text).
- Given a screen reader (NVDA/JAWS/VoiceOver/TalkBack), when navigating the page, then section headings, labels, and visit status are announced with correct roles and reading order.
- Given user text size is increased to 200%, then all content remains readable without overlap or horizontal scroll at 320px width.
- Given prefers-contrast or a manual high-contrast toggle, then a high-contrast theme is applied with AA contrast and preserved affordances.
- Given a configured locale (e.g., es, ar), then all UI strings, dates, numbers, and plurals are localized; missing translations fall back to English; right-to-left locales render with RTL direction and correct reading order.
Low-Bandwidth Performance and Graceful Degradation
- Given a throttled 400 kbps/150 ms RTT network and a budget Android device, when opening the link, then First Contentful Paint <= 2.5 s and Time to Interactive <= 4.0 s.
- Then total transfer size for the initial render <= 150 KB, of which JavaScript <= 50 KB; no single image > 50 KB; no blocking web fonts are requested.
- Given JavaScript fails to load, then server-rendered HTML shows the core summary (status, arrival window, vitals, next steps) without blank screens.
- Given a dependent asset fails (e.g., sensor chart), then a textual fallback and retry control are shown without console errors blocking interaction.
Responsive Rendering on Budget Devices
- Given common viewport widths (320, 375, 414, 768, 1024 px) in portrait and landscape, then the layout has no horizontal scrolling and touch targets are at least 44x44 px.
- Given orientation changes, then content reflows without truncation, overlap, or loss of context/state.
- Given high-DPI screens (device pixel ratio >= 3), then text and icons render crisply without blurry scaling.
- Given system dark mode, then colors meet contrast guidelines and preserve readability, or a neutral theme is used if dark mode is not supported.
View Receipts and Audit Trail
"As a care coordinator, I want to see when a recipient actually viewed the update so that I can follow up promptly if the message wasn’t seen."
Description

Capture a full event timeline per link: SMS sent/queued/failed, link opened, OTP verified, view rendered, expiration or revocation reason, and subsequent access attempts. Record sanitized metadata (timestamps, delivery provider IDs, coarse geolocation, device type) without logging PHI in transport logs. Expose receipts in CarePulse for staff, enable exportable audit reports, and surface alerts when links aren’t viewed within a configurable window. Respect retention policies and provide search/filter across patients, recipients, and campaigns.

Acceptance Criteria
Log SMS Delivery Lifecycle Events Without PHI
- Given a SafeLink is requested for a recipient, when the SMS is queued by the delivery provider, then an event sms_queued is recorded with ISO-8601 UTC timestamp and provider_message_id and without storing the SMS body or PHI.
- Given the provider indicates the message was sent, when the webhook is received, then an event sms_sent is recorded within 5 seconds of receipt with timestamp and provider_message_id and no PHI.
- Given the provider indicates delivery, when the webhook is received, then an event sms_delivered is recorded with timestamp and provider_message_id, and the transport log contains no patient identifiers or visit details.
- Given the provider indicates failure, when the webhook is received, then an event sms_failed is recorded with timestamp, provider_message_id, and sanitized error_code/error_category (no raw message content), and the event is visible in the per-link timeline.
- Given any SMS lifecycle event occurs, when it is stored, then the record includes link_id and recipient_id references only (no names), is immutable, and is queryable in the audit timeline.
Track Link Opens and OTP Verification
- Given a recipient taps the SafeLink URL, when the request reaches the server, then an event link_opened is recorded with timestamp, device_type (mobile/desktop/tablet), os/browser major versions, and coarse_geolocation (city/region/country), without storing the raw IP address or GPS coordinates.
- Given a recipient submits an OTP/PIN, when verification succeeds, then an event otp_verified is recorded with timestamp and attempt_count, and no OTP value is persisted.
- Given a recipient submits an OTP/PIN, when verification fails, then an event otp_failed is recorded with timestamp and attempt_count, and rate-limit outcomes (if any) are recorded without storing the entered value.
- Given a link is opened multiple times, when events are recorded, then open counts and distinct device types are queryable for the link timeline and export.
Record View Render, Expiration/Revocation, and Subsequent Attempts
- Given OTP verification succeeds, when the summary view is rendered, then an event view_rendered is recorded with timestamp and device_type and no PHI about the rendered content.
- Given a link reaches its TTL, when the expiration job runs, then an event link_expired is recorded with timestamp and reason ttl_elapsed, and the link becomes inaccessible.
- Given an authorized staff member revokes a link, when the revocation is confirmed, then an event link_revoked is recorded with timestamp and reason manual_revocation including revoker_user_id, and the link becomes inaccessible.
- Given a user attempts access after expiration or revocation, when the request arrives, then an event access_denied is recorded with timestamp, denial_reason (expired or revoked), and sanitized metadata (device_type, coarse_geolocation).
- Given any of these events occur, when displayed in the timeline, then events are ordered by timestamp descending and show event type, reason (if any), and sanitized metadata.
Staff UI: Per-Link Timeline with Search and Filters
- Given a staff user with proper permissions opens a link detail page in CarePulse, when the page loads, then a chronological event timeline is displayed containing sms_*, link_opened, otp_*, view_rendered, link_expired/link_revoked, access_denied, and alert_* events with their timestamps and sanitized metadata.
- Given the event timeline exceeds 100 items, when the page loads, then pagination or infinite scroll is available and returns additional pages within 2 seconds for the next 100 events.
- Given staff enter filters, when filtering by patient, recipient, or campaign, then results include only links matching the filters and the filter pills reflect active filters.
- Given staff search by link_id, recipient phone/email, or campaign name, when the query is submitted, then results return within 2 seconds for datasets up to 10,000 links and are sortable by newest/oldest.
- Given staff lack permission, when attempting to access the receipts, then the UI denies access and no event data is exposed, while logging an unauthorized_access_denied event in the platform audit log (not PHI).
Exportable Audit Report (CSV/PDF) Without PHI
- Given staff select one or more links or a date range, when Export Audit Report is requested, then a CSV and a PDF can be generated that include per-link timelines with event_type, timestamp (UTC and selected timezone), provider_message_id, device_type, os/browser major version, coarse_geolocation, and reason, and do not include PHI or content payloads.
- Given the export is generated, when the file is downloaded, then the number of rows per link equals the number of events visible in the UI timeline for the same filters.
- Given a timezone is selected, when the export is generated, then timestamps are shown in both UTC and the selected timezone with the offset indicated and consistent formatting.
- Given an export completes, when it is logged, then an export_generated event is recorded with requester_user_id, timestamp, filter summary, and file format.
- Given exports are requested for data outside retention, when generation runs, then the file excludes purged events and the report header indicates that retention limits applied.
Unviewed Link Alerts Within Configurable Window
- Given a global or campaign-level alert window (e.g., 24 hours) is configured, when a link has not logged view_rendered within the window after sms_sent or sms_delivered, then an alert_triggered event is recorded and a staff notification is queued.
- Given an alert is queued, when delivery occurs, then an alert_sent event is recorded with timestamp and channel (in-app/email) and no PHI in the alert payload.
- Given multiple alerts could be sent, when the window has already triggered for a link, then no duplicate alert is sent unless the link is reissued or the window is reset.
- Given a link is later viewed, when the view_rendered event arrives, then the UI shows the alert as resolved and an alert_resolved event is logged with timestamp and resolution reason viewed.
- Given alerts are configured off, when evaluation runs, then no alerts are sent and no alert events are produced.
Retention Policy Enforcement and Purge Evidence
- Given a retention period (e.g., 90 days) is configured, when events exceed the period, then they are purged by a scheduled job and become non-queryable in the UI and exports.
- Given a purge runs, when it completes, then a purge_summary event is recorded with timestamp, link_count_affected, event_count_deleted, and retention_window_days, without listing specific recipient identifiers.
- Given staff query a date range beyond retention, when results are returned, then only events within retention are shown and the UI displays a retention notice.
- Given legal hold is enabled for a campaign, when purge runs, then events for that campaign are excluded from deletion and an entry is added to purge_summary noting the exception.
- Given backups exist, when a purge completes, then restored data remains compliant such that restored datasets do not reintroduce purged events into queryable stores.
Admin Controls and Message Templates
"As an agency administrator, I want to configure link behavior and templates so that SafeLink OTP matches our privacy policy and communication style."
Description

Provide an admin console for agencies to configure expiry duration, OTP length and mode (PIN vs OTP), delivery windows, resend rules, and per-recipient access scope. Support branded SMS templates with variable placeholders (patient first name initial, date, agency name) and preview/testing tools. Allow bulk sending for daily batches, ad-hoc resend with reason capture, and one-click revoke/regen. Enforce role-based permissions and change logs for all configuration edits.

Acceptance Criteria
Configure Expiry Duration and OTP Settings
- Given I am an Agency Admin in the SafeLink settings console, When I set link expiry to a value between 15 minutes and 48 hours and save, Then the value persists and is applied to newly generated links.
- Given I enter an expiry outside the allowed range, When I attempt to save, Then I see a validation error and the change is not saved.
- Given I select the challenge mode (PIN or OTP) and set the length between 4 and 8 digits, When I save, Then newly generated links require the selected mode and length.
- Given I generate a test link after changing mode/length, When I open the test link, Then the challenge presented matches the configured mode and digit length.
- Given settings are saved successfully, When I refresh or re-open the console, Then the last saved values are displayed as the effective policy.
Set Delivery Windows and Automated Resend Rules
- Given I configure a daily delivery window (start and end time in the agency’s timezone), When a message is scheduled outside the window, Then it is queued for the next window start within the same ruleset.
- Given I set max resend attempts (0–3) and an interval (e.g., every 4 hours), When no view receipt is recorded within the interval, Then the system resends up to the configured max attempts within the delivery window.
- Given a message receives a view receipt before the next resend time, When the interval elapses, Then no further resends are sent for that message.
- Given I update the delivery window or resend rules and save, When pending resends are recalculated, Then only future events are updated and a summary of changes to the schedule is shown.
- Given I disable automated resends (max attempts = 0), When messages are sent, Then no automated resends are scheduled.
Define Per-Recipient Access Scope
- Given I configure an access scope for a recipient or recipient group, When I save, Then the scope is stored and linked to that recipient or group.
- Given a recipient opens a SafeLink, When content is rendered, Then only fields permitted by that recipient’s scope are displayed and disallowed fields are redacted.
- Given I use the scope preview tool with sample data, When I preview, Then I see exactly which fields will be visible for that scope.
- Given I change a recipient’s access scope, When I regenerate a link for that recipient, Then the regenerated link enforces the updated scope and the prior link remains unchanged unless revoked.
Create, Preview, and Test Branded SMS Templates
- Given I create or edit an SMS template using the supported placeholders {patient_initial}, {date}, {agency_name}, and {link}, When I save, Then the template is validated and saved successfully.
- Given I include any unsupported placeholder, When I attempt to save, Then a clear validation error lists the unsupported tokens and the template is not saved.
- Given I click Preview on a template, When sample data is applied, Then I see the fully resolved message text including the branded elements and the {link} placeholder rendered as a sample URL.
- Given I choose Test Send and enter a verified test number, When I confirm, Then the test message is delivered using the template and is marked as a Test event in logs, not counted toward production reporting.
- Given a default template is set for daily batches, When a bulk send is initiated without an override, Then the default template is used.
Execute Bulk Daily Batch Sending
- Given I select a date and the daily recipient cohort in the bulk sending tool, When I click Send Batch and confirm, Then SafeLinks are generated and sent for all eligible recipients using the selected template and each recipient’s access scope.
- Given some recipients are ineligible (e.g., missing phone number), When the batch completes, Then the summary report lists sent, queued (outside delivery window), failed with reasons, and skipped with reasons.
- Given automated resend rules are configured, When the batch is sent, Then resends are scheduled only for messages that do not receive a view receipt within the interval and remain within the delivery window.
- Given rate limits are configured by the SMS provider, When the batch is processed, Then sending respects provider limits without dropping messages and queues the excess within the delivery window.
Ad-Hoc Resend, Revoke, and Regenerate SafeLinks
- Given I locate a prior SafeLink event in Message History, When I click Resend and provide a mandatory Reason (minimum 5 characters), Then a resend is queued subject to resend rules and the reason is stored with the event.
- Given a link has reached its configured max resend attempts, When I try to Resend, Then the action is blocked with a clear explanation and no resend is queued.
- Given I click Revoke on a SafeLink, When I confirm the action, Then the link becomes immediately invalid and any subsequent open attempts show a Revoked/Expired message.
- Given I click Regenerate on a SafeLink, When I confirm, Then a new link is created using the current policy (expiry, OTP mode/length, scope) and the prior link is invalidated.
- Given I perform Resend, Revoke, or Regenerate, When I view the event details, Then the actor, timestamp, action type, reason (if provided), and affected link IDs are recorded.
Enforce Role-Based Permissions with Change Logging
- Given role-based permissions are configured (e.g., Admin, Manager, Staff), When a user without permission attempts to access Settings or perform Batch/Resend/Revoke/Regenerate, Then the action is denied with an explanatory message and no change is made.
- Given a permitted user edits any SafeLink configuration (expiry, OTP, delivery window, resend rules, access scopes, templates), When they save, Then an immutable change log entry is created capturing user ID, timestamp, field(s) changed, previous value, and new value.
- Given I open the Change Log, When I filter by date range, user, or configuration area, Then only matching entries are displayed and entries are read-only.
- Given I export the Change Log, When I choose CSV and a date range, Then a CSV containing the filtered entries and column headers downloads successfully.
Security and Compliance Hardening
"As a compliance officer, I want SafeLink OTP to meet our security and HIPAA obligations so that we can share updates without increasing regulatory risk."
Description

Enforce TLS 1.2+ with HSTS, signed/expiring tokens (JWT or equivalent) with rolling keys, at-rest encryption for link metadata, and secrets management. Implement brute-force protections, WAF rules, and anomaly detection on OTP attempts and link opens. Ensure HIPAA-aligned practices: minimum necessary data in messages and views, BAAs with providers, documented retention and access controls, and regular vulnerability scans and penetration tests. Provide incident logging with tamper-evident storage and playbooks for key rotation and revocation.

Acceptance Criteria
TLS/HSTS Enforcement for SafeLink OTP Links
- Given any request to a SafeLink OTP URL or API endpoint over HTTP (port 80), When the request is received, Then it is redirected with 301 to HTTPS and no response body contains sensitive data.
- Given any HTTPS connection to SafeLink OTP domains, When the TLS handshake occurs, Then only TLS 1.2 or 1.3 is negotiated and connections attempting <1.2 are refused.
- Given a successful HTTPS response from SafeLink OTP domains, When headers are inspected, Then Strict-Transport-Security is present with max-age >= 15552000, includeSubDomains, preload.
- Rule: The public endpoint configuration earns “A” or better on Qualys SSL Labs with no weak ciphers, no TLS compression, and OCSP stapling enabled.
- Rule: Certificates are valid, not expired, and use SHA-256 or stronger.
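The HSTS header check lends itself to an automated test. A possible sketch, assuming only the header string is available (no network access), parses the directives and applies the thresholds from the criteria:

```python
# Hedged sketch of a CI-style check that a Strict-Transport-Security header
# satisfies the criteria above: max-age >= 15552000 (180 days),
# includeSubDomains, and preload must all be present.
def hsts_compliant(header_value: str) -> bool:
    directives = [d.strip().lower() for d in header_value.split(";")]
    max_age = next((d for d in directives if d.startswith("max-age=")), None)
    if max_age is None or int(max_age.split("=", 1)[1]) < 15552000:
        return False
    return "includesubdomains" in directives and "preload" in directives
```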
Expiring Signed Tokens with Rolling Keys
- Given a generated SafeLink URL, When decoded, Then the token contains jti, aud, sub/linkId, iat, nbf, exp <= 24h and is signed with RS256 or ES256 (alg=none rejected).
- Given system time within ±5 minutes of the signer, When a token is presented before nbf or after exp, Then access is denied with 401 and an error code indicating not_yet_valid or token_expired, and no link content is rendered.
- Given a token that has been successfully used once (post-OTP), When the same token is presented again, Then the request is denied with 410 Gone and no content is rendered; a duplicate-use event is logged.
- Given a scheduled key rotation event, When rotation occurs, Then verifiers accept tokens from the new key and the immediately previous key for a configurable grace window (default 24h) without downtime.
- Given an administrator revokes a link or jti, When the revoked token is presented, Then verification fails within 5 minutes across all nodes and is recorded in the audit log.
At-Rest Encryption and Secrets Management
- Rule: All SafeLink metadata (phone numbers, message templates, link state, view receipts) is encrypted at rest using AES-256-GCM via cloud KMS-managed keys.
- Rule: Database files, snapshots, and backups for these datasets are encrypted with the same or stronger keys; restore tests verify decryption monthly.
- Rule: KMS keys are rotated at least annually or upon incident; key usage and access are logged and restricted to least-privileged service roles.
- Rule: Application secrets (SMS provider creds, WAF tokens, signing keys) are stored in a managed secrets vault, not in code or SCM; access via IAM policies; retrieval is over TLS and audited.
- Rule: High-value secrets are rotated at least every 90 days; stale or disabled secrets cannot be used to send messages or sign tokens.
- Rule: Direct storage access without KMS context cannot yield plaintext (verified by attempting reads from snapshots outside the service role).
Brute-Force, WAF, and Anomaly Detection on OTP Attempts
- Rule: OTP entry is rate-limited per link and per IP/device: default max 5 failed attempts per 15 minutes; exceeding threshold locks the link for 30 minutes and requires issuer re-send to unlock.
- Rule: A per-origin rate limit of 60 requests/minute (burst 120) is enforced on SafeLink view endpoints; overages receive 429 with Retry-After.
- Rule: WAF blocks known malicious patterns (OWASP Top 10 payloads, path traversal, user-agent anomalies); blocked requests return 403 and are logged.
- Rule: Anomaly detection flags distributed guessing (≥10 links targeted from one ASN in 10 minutes) or geo-velocity anomalies; alerts page on-call within 5 minutes and auto-enables stricter limits.
- Rule: All OTP attempts and link opens include a correlation ID in tamper-evident logs with timestamp, IP (geo-approx), user-agent fingerprint, and disposition.
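The per-link lockout rule (5 failures per 15 minutes, then a 30-minute lock) can be sketched with a sliding window of failure timestamps. Class and method names are illustrative; a real deployment would back this with shared storage so all nodes see the same counts.

```python
from collections import defaultdict

# Minimal sketch of the per-link OTP lockout defaults described above.
class OtpLimiter:
    MAX_FAILURES, WINDOW, LOCKOUT = 5, 15 * 60, 30 * 60  # seconds

    def __init__(self):
        self.failures = defaultdict(list)   # link_id -> failure timestamps
        self.locked_until = {}              # link_id -> unlock time

    def allow(self, link_id: str, now: float) -> bool:
        """True if an OTP attempt on this link is currently permitted."""
        return now >= self.locked_until.get(link_id, 0)

    def record_failure(self, link_id: str, now: float) -> None:
        # Keep only failures inside the sliding 15-minute window.
        window = [t for t in self.failures[link_id] if now - t < self.WINDOW]
        window.append(now)
        self.failures[link_id] = window
        if len(window) >= self.MAX_FAILURES:
            self.locked_until[link_id] = now + self.LOCKOUT  # 30-minute lock
```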
Minimum Necessary Data in Messages and Views (HIPAA)
- Rule: SMS templates for SafeLink exclude PHI/PII beyond agency name and recipient first name; no diagnosis, full addresses, DOB, MRN, or visit details are present in the SMS body.
- Rule: The read-only SafeLink view displays only minimum necessary fields: visit date/time window, caregiver first initial and role, high-level status, and non-sensitive summary; fields labeled PHI (diagnosis codes, DOB, full address) are redacted by default.
- Rule: Template engine rejects disallowed placeholders in SafeLink contexts at build time and logs violations; messages with disallowed fields are not sent.
- Rule: SafeLink viewers cannot navigate or query outside the specific link scope; direct object reference attempts return 404 without leaking identifiers.
- Rule: Outbound vendors used for SafeLink (SMS, cloud hosting, monitoring) must have active BAAs on file; enabling a vendor integration without BAA is blocked with an admin error.
Tamper-Evident Incident Logging and Key Rotation Playbooks
- Rule: Security-relevant events (token issuance/verification, OTP attempts, WAF blocks, revocations, key rotations) are written to append-only storage with object lock/WORM or hash-chained logs; integrity verification can detect any modification.
- Rule: Logs are time-synchronized (NTP) with accuracy ±1s and retained per policy (>= 6 years) with access controlled and audited.
- Rule: Documented playbooks exist for token/key compromise, vendor breach, and mass revocation; tabletop exercises are conducted at least every 6 months with actionable follow-ups tracked to closure.
- Rule: A key rotation can be executed without downtime; cache invalidation ensures revoked keys propagate within 5 minutes; success/failure is logged with change request ID.
- Rule: Forensics export of relevant logs for a given linkId/jti is producible in under 1 hour on demand.
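The hash-chained option can be illustrated compactly: each entry stores the hash of its predecessor, so modifying any earlier event breaks verification of everything after it. The entry shape below is an assumption for the sketch.

```python
import hashlib
import json

# Sketch of a hash-chained, tamper-evident log as described above.
def append_entry(chain: list[dict], event: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"event": event, "prev": prev, "hash": digest})

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any edit to an earlier entry fails verification."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```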
Continuous Vulnerability Scanning and Pen Testing
- Rule: SAST, dependency, and container scans run on every PR; builds fail on Critical/High findings unless a time-bound waiver (<14 days) is approved.
- Rule: Weekly DAST runs against staging and monthly against production replicas; no exploitable Critical/High findings remain open beyond 7 business days (Medium 30, Low 90).
- Rule: An independent penetration test is conducted at least annually and after major auth changes; Critical/High findings are remediated within 30 days with retest confirmation.
- Rule: Security headers (CSP, X-Content-Type-Options, Referrer-Policy) are present on SafeLink views and validated in CI.
- Rule: Evidence (reports, SBOMs, waivers) is stored and linked to the release record for audit.

Language Lens

Automatically translates updates into each recipient’s preferred language and reading level, with nurse‑approved phrasing and optional audio read‑outs. Includes a mini glossary for common terms (e.g., edema → swelling) to avoid alarming or confusing language. More relatives can understand and align on care without relying on a single translator.

Requirements

Recipient Communication Preferences
"As an operations manager, I want to set each recipient’s preferred language and reading level so that updates are automatically tailored and easily understood."
Description

Per-recipient profile storing preferred language(s), reading level (e.g., CEFR or grade level), dialect, tone sensitivity, audio read-out preference, and delivery channel (in-app, SMS, email). Integrated with CarePulse’s contact directory and care teams, this profile is applied at send time to select the correct translation, simplify phrasing to the appropriate reading level, apply nurse-approved glossary substitutions, and attach audio read-outs when requested. Supports fallbacks (e.g., if a dialect is unavailable, use the base language), per-message overrides, and consent tracking. Syncs across mobile and web, enforces role-based access controls, and records an audit trail for profile edits.

Acceptance Criteria
Create and Edit Recipient Communication Preferences
Given an existing recipient in the CarePulse contact directory and the user has edit permission When the user opens the recipient’s Communication Preferences, sets primary/secondary language(s), reading level (CEFR A1–C2 or grade 1–12), dialect (or None), tone sensitivity (Low|Medium|High), audio read-out (On|Off), and delivery channel (In-app|SMS|Email), and saves Then required fields are enforced (primary language and delivery channel are mandatory), values must be within allowed enumerations/ranges, and invalid entries show inline errors without saving And the saved values persist and are retrievable via UI and API for that recipient And the preferences are visible to members of the recipient’s assigned care team with view permission
Apply Preferences at Send Time
Given a composed update is being sent to a recipient with stored Communication Preferences When the system generates and dispatches the message Then it selects the translation in the preferred dialect; if unavailable, it does not send yet and defers to fallback logic And it simplifies phrasing to the stored reading level, achieving the target readability within ±1 grade or ±1 CEFR band And it applies nurse-approved glossary substitutions for clinical terms And it adjusts tone according to tone sensitivity (e.g., softens alarming phrases when High) And it attaches an audio read-out when Audio = On And it uses the selected delivery channel (In-app|SMS|Email) And the final rendered preview and delivered artifact match these selections
Dialect Fallback Behavior
Given a recipient’s preferred dialect lacks a translation for the current message When the system prepares the message Then it uses the base language translation for that language (not another dialect or different language) And it records a fallback event including message ID, recipient ID, requested dialect, used base language, and UTC timestamp And it preserves the recipient’s reading level, glossary substitutions, tone sensitivity, and audio settings
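The fallback rule above is strict: base language of the same dialect, never another dialect or a different language. A minimal sketch, assuming translations are keyed by locale code:

```python
# Illustrative sketch of the dialect fallback rule. Data shape (a dict of
# locale -> translated text) and function name are assumptions.
def resolve_translation(translations: dict[str, str], dialect: str):
    """Return (text, used_locale, fell_back) for a dialect like 'es-MX'."""
    if dialect in translations:
        return translations[dialect], dialect, False
    base = dialect.split("-")[0]          # 'es-MX' -> 'es', never 'es-AR'
    if base in translations:
        return translations[base], base, True
    return None, None, True               # caller logs a fallback event
```

The `fell_back` flag is what drives the recorded fallback event (message ID, recipient ID, requested dialect, used base language, UTC timestamp).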
Per-Message Overrides
Given a user with send permission composes a message to a recipient with stored preferences When the user applies per-message overrides (language/dialect, reading level, tone, audio, delivery channel) and sends Then the overrides apply only to that message; stored recipient preferences remain unchanged And the message metadata records the overrides, actor, and UTC timestamp And overrides that violate consent or RBAC are blocked with a clear, actionable error and are not sent
Consent Tracking and Enforcement
Given a recipient has not provided consent for a delivery channel or has withdrawn consent When a user attempts to send a message via that channel Then the send is blocked, an explanatory error is shown, and no content is transmitted And the user may initiate a consent request via permitted channels And when consent is captured, the system stores consent status, channel, source, actor, UTC timestamp, and version/hash of consent text And future sends respect the latest consent state without requiring user re-entry
Cross-Platform Sync (Mobile and Web)
Given a user updates a recipient’s Communication Preferences on mobile while online When the user saves Then the updated values are visible in web UI and API within 10 seconds, and vice versa for web-to-mobile And if offline, changes are queued locally and sync automatically within 10 seconds of reconnect, preserving field-level edits And concurrent edits resolve with deterministic last-write-wins at field level and notify the later saver of overwritten fields
RBAC and Audit Trail for Profile Edits
Given roles: Admin and Care Manager can edit; Caregiver and External Contact are read-only When users attempt to view or edit Communication Preferences Then only authorized roles can create/update/delete; unauthorized edit attempts return 403 in API and disabled controls in UI And every successful or failed edit attempt writes an immutable audit record with recipient ID, actor ID and role, fields changed with before/after values, UTC timestamp, reason (if provided), and source client (web/mobile/API) And authorized roles can query and export audit records by recipient and date range
Context-Aware Translation & Simplification
"As a caregiver, I want my update translated and simplified per recipient automatically so that I can communicate once and everyone understands accurately."
Description

Server-side service that translates outbound updates (visit summaries, schedule changes, medication reminders) into each recipient’s preferred language and simplifies text to the configured reading level without losing clinical accuracy. Applies nurse-approved phrase library and mini glossary to replace clinical terms with plain-language equivalents and avoid alarming wording. Preserves structured placeholders (names, dates, vitals) and units, detects negations, and handles time-sensitive content. Batch-processes multi-recipient messages, supports retry logic and rate limiting, and returns per-recipient confidence scores. Integrates with CarePulse messaging APIs and adheres to HIPAA-compliant data handling, encryption, and logging standards.

Acceptance Criteria
Per-Recipient Language and Reading Level Translation with Nurse-Approved Phrasing
Given an outbound update containing clinical terminology and recipients with languagePreference and readingLevel configurations When the service translates and simplifies the message Then each output is in the recipient's preferred language And the output readability score is at or below the configured reading level (e.g., Flesch-Kincaid grade <= recipient.readingLevel) And nurse-approved phrase library and glossary mappings are applied consistently And clinical intent is preserved with a semantic similarity score >= 0.90 compared to the source And no terms flagged as alarming in the nurse-approved list appear in the output
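The readability gate mentioned above (Flesch-Kincaid grade <= recipient.readingLevel) uses the standard formula 0.39 × words/sentence + 11.8 × syllables/word − 15.59. The sketch below uses a crude vowel-group heuristic for syllable counting, adequate only as an illustration; a production service would use a proper syllabifier.

```python
import re

# Hedged sketch of a Flesch-Kincaid reading-level check.
def syllables(word: str) -> int:
    # Approximate: count contiguous vowel groups, minimum one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()] or [text]
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syl / len(words) - 15.59

def within_reading_level(text: str, target_grade: int) -> bool:
    return fk_grade(text) <= target_grade
```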
Structured Placeholder and Unit Preservation
Given a message containing structured placeholders (e.g., {patient_name}, {visit_date}, {bp_sys}, {bp_dia}), numeric values, and units (e.g., mg, mL, bpm) When the service performs translation and simplification Then all placeholders remain intact and unaltered in the output And numeric values are unchanged And measurement units are preserved and not mistranslated or converted And placeholders are not reordered relative to adjacent text
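The placeholder criterion is essentially an invariant between source and output, which suggests a simple post-translation check: extract the placeholders from both texts and require the sequences to match exactly (same tokens, same order). Function names are illustrative.

```python
import re

# Sketch of a post-translation invariant check: every {placeholder} in the
# source must survive translation unaltered and in the same order.
def placeholders(text: str) -> list[str]:
    return re.findall(r"\{[a-z_]+\}", text)

def placeholders_preserved(source: str, translated: str) -> bool:
    return placeholders(source) == placeholders(translated)
```

A similar sequence comparison could be applied to numeric values and unit strings to enforce the remaining rules.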
Negation and Time-Sensitive Content Integrity
Given sentences that include negations (e.g., "no fever", "denies pain") and time-sensitive instructions with absolute timestamps (ISO 8601 with timezone) and relative time phrases (before/after/until) When the service translates the content Then negations remain negations with no polarity reversal And absolute date/time values and timezone offsets are preserved exactly And relative time relationships are preserved (before/after/until remain semantically correct)
Batch Processing with Rate Limiting, Retries, and Confidence Scores
Given a batch request containing up to 1,000 recipients with mixed language and reading level preferences and a unique requestId When the service processes the batch Then it respects the configured global rate limit and per-tenant quotas And retries transient failures up to the configured maximum attempts using exponential backoff And returns per-recipient status (success|failed|deferred) and a confidence score in the range [0.0, 1.0] And processing continues for remaining recipients when one fails (no whole-batch failure) And the batch completes within the configured SLA (e.g., p95 end-to-end latency <= 5 seconds for 1,000 recipients)
Messaging API Integration and Idempotency
Given a POST request to /v1/translate with a valid payload matching the published schema and a unique requestId When the request is processed Then the response conforms to the published schema (including outputs[], confidence, and errors[] fields) And the operation is idempotent for duplicate requests with the same requestId within the idempotency window, returning the same result without reprocessing And a correlationId is included in the response and propagated to logs and downstream events And invalid payloads result in HTTP 400 with machine-readable error codes; unauthorized or forbidden requests return 401/403 respectively
HIPAA-Compliant Data Handling and Audit Logging
Given PHI-bearing content handled by the service When data is transmitted and stored Then transport encryption uses TLS 1.2+ and data at rest is encrypted using AES-256 or stronger And access is restricted by least privilege with role-based controls; all accesses are logged with actor identity and purpose And application logs exclude PHI content; audit logs capture timestamp, actor, action, requestId, correlationId, and outcome And audit events are retained per policy (e.g., >= 6 years) and are exportable for compliance reviews And the service passes HIPAA-aligned security checks in static analysis, dependency scanning, and runtime vulnerability scans
Nurse-Approved Phrase Library & Mini Glossary
"As a clinical supervisor, I want to curate and control plain-language mappings for clinical terms so that communications stay accurate and non-alarming."
Description

Centralized, versioned repository of clinically reviewed phrases and term mappings (e.g., edema → swelling) per language and dialect, with context tags (symptoms, medications, risks) and contraindicated phrasing lists. Includes an editorial workflow for nurses to propose, review, approve, and publish updates, with rollback and change history. Integrates with the translation engine to enforce substitutions and tone guidance at render time. Supports agency-level customization, synonyms, examples, and inline tooltips for recipients to ensure consistent, reassuring wording across all messages.

Acceptance Criteria
Versioned Phrase Repository: Create, Retrieve, and Defaults
- Given no existing entry for 'edema' in es-MX, When a nurse editor submits a new phrase with required fields (clinicalTerm, layTerm, languageDialect, contextTags, toneGuidance, readingLevel, effectiveDate), Then the system stores it as version 1 with status Draft and records author and timestamp. - Given multiple Published versions of a phrase exist with different effectiveDates, When the translation engine requests the phrase without an explicit version, Then the latest Published version whose effectiveDate <= request time is returned. - Given a request includes an explicit versionId, When retrieving the phrase, Then exactly that version is returned even if newer versions exist, and the response includes versionId and status. - Given a save attempt omits any required field, When the editor tries to save, Then validation fails and a field-level error message is shown. - Given a search by clinicalTerm 'edema' or tag 'symptoms', When querying the repository, Then relevant entries are returned within 300 ms p95 with correct filtering by language/dialect.
Context-Tagged Substitution and Tone Enforcement at Render
- Given a draft message contains the term 'edema' tagged context=symptoms and the recipient preference is en-US at readingLevel=Grade6-8, When the message is rendered, Then 'edema' is replaced with the approved lay term 'swelling' from the phrase library at the selected reading level and dialect. - Given toneGuidance specifies 'reassuring' for the selected phrase, When rendering, Then the output phrase includes the guided tone (e.g., 'mild swelling' if severity=mild) and excludes any prohibited modifiers listed for that context. - Given the recipient dialect is es-MX and an es-MX variant exists, When rendering, Then the es-MX variant is used; When no dialect variant exists, Then the system falls back to language-wide (es) and logs a fallback event with phraseId and version. - Given one or more substitutions occur, When the message is sent, Then the message metadata includes a list of substitutions with phraseId, versionId, and whether a fallback occurred.
Contraindicated Phrasing Detection and Blocking
- Given a message includes a phrase listed as contraindicated for the recipient's language/dialect, When attempting to send, Then sending is blocked and the UI presents nurse-approved alternatives from the library. - Given a user with Approver+Override permission provides a mandatory justification, When confirming override, Then the message can be sent and the override is logged with userId, timestamp, contraindicatedPhraseId, replacement used (if any), and justification. - Given the contraindicated list is updated, When re-validating a draft, Then newly contraindicated phrases are flagged immediately. - Given the standard contraindication test suite of N cases, When executed, Then 100% of listed contraindicated phrases are detected and 0% of allowed phrases in the control set are falsely blocked.
Nurse Editorial Workflow: Propose, Review, Approve, Publish
- Given roles are configured (Contributor, Reviewer, Approver), When a Contributor creates or edits a phrase entry, Then its status is Draft and they cannot publish. - Given a Draft entry is submitted for review, When at least one Reviewer approves and at least one Approver approves, Then the status becomes Published; partial approvals do not change status. - Given an entry in Published status is edited, When changes are saved, Then a new version is created in Draft and the previous Published version remains immutable. - Given invalid transitions (e.g., Published -> Draft without versioning), When attempted, Then the system blocks the action and displays an error. - Given an entry moves between statuses, When viewing its history, Then the audit log shows actor, action, timestamp, diff summary, and rationale.
Rollback and Change History Auditability
- Given phrase X has versions v1 (Published) and v2 (Published), When an Approver with Manage Versions triggers rollback to v1 with a reason, Then v2 becomes Deprecated, v1 becomes Published, and all subsequent renders use v1 immediately. - Given a phrase's history is requested, When exporting audit data, Then a CSV and JSON file can be downloaded containing version numbers, status changes, diffs of fields, approvers/reviewers, and timestamps. - Given an unauthorized user attempts rollback, When the action is attempted, Then the system denies the action and logs the attempt.
Agency-Level Customizations and Guardrails
- Given a global base mapping 'edema'→'swelling' v1.2 and Agency A creates a localized synonym 'edema'→'hinchazón' for es-MX, When Agency A renders messages for es-MX recipients, Then the agency-specific term is used; other agencies continue to use the global mapping. - Given precedence rules (Agency > Global; Dialect-specific > Language-wide; EffectiveDate <= now), When conflicting entries exist, Then the system resolves using that precedence and records which layer was selected in metadata. - Given an agency attempts to remove a contraindicated flag or add a prohibited phrase, When saving the customization, Then the save is blocked with an error and guidance to request a global change. - Given Agency A updates a customization and submits for review, When it is approved within the agency workflow, Then it is Published for that agency without affecting other agencies. - Given the global entry updates to v1.3, When Agency A has an override, Then the system notifies Agency A of potential conflicts and allows rebase; until action is taken, the existing agency override remains effective.
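The precedence rules above (Agency > Global; dialect-specific > language-wide; effectiveDate <= now) amount to a ranked lookup. A minimal sketch, assuming mappings live in a flat list where global entries carry `agency=None`:

```python
# Sketch of layered mapping resolution per the stated precedence rules.
def resolve_mapping(entries: list[dict], term: str, agency: str,
                    dialect: str, now: int):
    base = dialect.split("-")[0]
    def rank(e):
        return (e["agency"] == agency,   # Agency beats Global
                e["locale"] == dialect)  # dialect beats language-wide
    eligible = [e for e in entries
                if e["term"] == term
                and e["effectiveDate"] <= now
                and e["agency"] in (agency, None)
                and e["locale"] in (dialect, base)]
    return max(eligible, key=rank, default=None)
```

Recording `rank(selected)` alongside the result would satisfy the "records which layer was selected in metadata" requirement.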
Synonyms, Examples, and Inline Tooltips
- Given a glossary entry includes synonyms and example sentences for en-US and es-MX, When viewing the entry, Then required fields are present and validations enforce at least one example per language/dialect. - Given a rendered message includes a term with a glossary entry and tooltips are enabled for the recipient, When the recipient taps or hovers the term, Then an inline tooltip appears showing the approved lay definition and one example, localized to the recipient's language/dialect, and disappears on tap outside; tooltip content is accessible with screen readers via aria attributes. - Given tooltips are disabled by the recipient or agency policy, When rendering, Then no tooltip markers are inserted. - Given a glossary entry is updated to change the definition, When rendering messages referencing that entry, Then the latest Published definition is displayed in the tooltip and the metadata records glossaryEntryId and version used.
Multilingual Audio Read-Outs
"As a family member with low vision or low literacy, I want an audio read-out of updates in my language so that I can follow care without needing to read."
Description

Text-to-speech generation for each translated message using high-quality, nurse-approved voices per language, with adjustable speed and optional SSML for correct pronunciation of names and medications. Provides tap-to-play in-app, audio attachments for email/SMS where supported, and a phone-call fallback for recipients without smartphones. Caches audio on-device with secure storage and automatic expiration, supports offline generation for common phrases, and logs playback events for audit and analytics. Respects accessibility settings and includes captions/transcripts for compliance.

Acceptance Criteria
In-App Tap-to-Play Multilingual TTS
Given a translated message exists in the recipient’s preferred language and nurse-approved phrasing When the recipient taps the in-app play button Then audio plays in that language using the nurse-approved voice for that language And initial playback begins within 1.5 seconds if cached or within 5 seconds if generated online under a good connection And play/pause and seek controls function with time display accurate within ±1 second And captions/transcript matching the spoken content are available and toggleable And playback respects device accessibility settings (e.g., audio ducking, reduced motion) and pauses on audio focus loss, resuming correctly when focus returns
Adjustable Speed and SSML Pronunciation
- Given the playback screen is open, When the recipient selects a playback speed of 0.75x, 1.0x, 1.25x, or 1.5x, Then the speed change takes effect within 200 ms without pitch distortion And the selected speed persists for subsequent playbacks for that user on that device.
- Given SSML tags are present for names or medications, When TTS audio is generated, Then SSML tags (e.g., phoneme, say-as) are applied and a capability flag is recorded in metadata And if SSML is unsupported for the selected language/voice, generation falls back to default pronunciation and logs the limitation.
Email/SMS Audio Attachment Delivery
Given a recipient has email or SMS/MMS enabled When a translated message with audio is sent Then, where supported, an MP3 (44.1 kHz mono) ≤ 2 MB is attached with a filename including the language code (e.g., _es-MX) And where attachments are unsupported, an expiring signed HTTPS link (TTL 7 days) to the audio is included instead And the message body includes a transcript snippet and link to the full transcript And delivery status (queued, sent, delivered, failed) with timestamps is recorded per recipient And the audio or link plays on mobile without requiring login while enforcing tokenized access and HTTPS
Phone-Call Fallback Delivery
Given a recipient is designated as non-smartphone or prefers phone-call delivery When a message is sent Then an automated call is initiated within 2 minutes using a localizable nurse-approved voice in the appropriate language And the call provides DTMF options: 1=Replay, 2=Slow down, 3=Repeat last sentence, 9=End And up to 3 attempts occur over 30 minutes if unanswered, respecting configured quiet hours And call outcomes (answered, duration, DTMF interactions, voicemail detected) are logged to the audit trail And no PHI is spoken on voicemail unless an explicit consent flag is present
Secure On-Device Audio Caching with Auto-Expiration
Given an audio file is generated When it is stored on-device for reuse Then it is encrypted at rest using platform-secure storage (Android EncryptedFile/Keystore, iOS File Protection) with AES-256 equivalent And cache entries auto-expire and are purged after 30 days by default (configurable per org) and on user logout or remote wipe And least-recently-used eviction runs when free space is low to maintain target thresholds And cached audio is playable offline only within the authenticated app context; direct file access by other apps is denied And filenames/paths contain no PHI and integrity is verified via checksum prior to playback
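The cache policy portion of this criterion (30-day expiry plus least-recently-used eviction) can be sketched separately from the encryption, which is delegated to platform storage. Class name and capacity model are assumptions for illustration.

```python
from collections import OrderedDict

# Sketch of on-device audio cache policy: 30-day TTL with LRU eviction.
class AudioCache:
    TTL = 30 * 24 * 3600  # default 30 days, configurable per org

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()   # key -> stored_at timestamp

    def put(self, key: str, now: float) -> None:
        self.entries[key] = now
        self.entries.move_to_end(key)
        while len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict least recently used

    def get(self, key: str, now: float) -> bool:
        stored = self.entries.get(key)
        if stored is None or now - stored > self.TTL:
            self.entries.pop(key, None)        # purge expired entry
            return False
        self.entries.move_to_end(key)          # mark as recently used
        return True
```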
Offline Generation for Common Phrases
Given the device is offline and the message includes phrases from the approved offline phrase pack for the target language When audio is requested Then local TTS generates audio within 2 seconds per 10 seconds of speech for supported phrases using the correct language voice And unsupported segments are queued for cloud generation and optionally replaced with a spoken placeholder if configured And upon connectivity restoration, queued audio is generated, replaces placeholders seamlessly, and the user is notified And offline phrase packs are updatable over-the-air and verified via cryptographic signature
Playback Event Logging and Analytics
Given a recipient interacts with audio via in-app player, email/SMS link, or phone call When playback starts, pauses, resumes, completes, fails, or the speed changes Then an event is recorded with UTC timestamp, message ID, pseudonymous recipient ID, language, channel, device type, playback duration, and result code And events are stored securely, queued locally if offline, and delivered within 5 minutes of connectivity And audit views surface events within 10 minutes with filters for date range, language, channel, and recipient And analytics collection respects org/recipient opt-outs and excludes opted-out entities
In-Context Translation Preview & Approval
"As a caregiver, I want to preview and, when needed, get a nurse to approve translations before sending so that I avoid errors or confusing language."
Description

Sender-facing preview that renders how a message will appear for each recipient, showing language, reading level grade, and highlighted glossary substitutions. Enables quick edits, per-recipient notes, and routing to a nurse for approval when confidence is low or content matches sensitive categories. Supports batch approval, scheduled send, and mobile-friendly layouts with side-by-side or carousel views. Integrates with CarePulse notification workflows and enforces approval gates for critical communications.

Acceptance Criteria
Per-Recipient Preview Shows Language, Grade, and Glossary Highlights
Given a sender composes a message and selects recipients with different preferred languages and reading levels When the sender opens the In-Context Translation Preview Then a panel renders per recipient showing the recipient’s language label/locale, computed reading grade, translated text, and highlighted glossary substitutions And selecting a highlight reveals the original term and plain-language definition And an audio read-out control is available per recipient when audio is enabled And the preview loads within 2 seconds for up to 10 recipients
Inline Edits Recalculate Grade and Preserve Overrides
Given the sender is viewing a recipient’s preview When the sender edits the message content or adds a per-recipient note Then the preview updates within 500 ms to reflect the changes And the reading grade recalculates and displays the new score And glossary highlights persist unless the sender explicitly overrides a substitution And an Edited badge appears for that recipient until changes are saved
Low-Confidence or Sensitive Content Requires Nurse Approval
Given at least one recipient’s translation confidence is below the configured threshold or the content matches a sensitive category When the sender attempts to schedule or send the message Then the action is blocked and a Requires Nurse Approval banner lists the affected recipients and reasons And the sender can route the message to a nurse approver with an optional note And the system records an audit entry with requester, approver, timestamps, confidence scores, and matched categories
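The gate described above can be sketched as a pure function over per-recipient confidence scores and matched categories. The 0.85 threshold and category names are assumptions standing in for org-configured values:

```python
# Illustrative defaults; in practice both would be loaded from org configuration.
CONFIDENCE_THRESHOLD = 0.85
SENSITIVE_CATEGORIES = {"medication_change", "diagnosis", "legal"}

def requires_nurse_approval(recipients, matched_categories):
    """Return (recipient_id, reasons) pairs that block sending.

    recipients: list of (recipient_id, translation_confidence) pairs.
    matched_categories: categories the message content matched.
    An empty result means send/schedule may proceed without approval.
    """
    blocked = []
    sensitive = SENSITIVE_CATEGORIES & set(matched_categories)
    for rid, confidence in recipients:
        reasons = []
        if confidence < CONFIDENCE_THRESHOLD:
            reasons.append(f"low confidence ({confidence:.2f})")
        if sensitive:
            reasons.append("sensitive: " + ", ".join(sorted(sensitive)))
        if reasons:
            blocked.append((rid, reasons))
    return blocked
```

The returned reasons map directly onto the Requires Nurse Approval banner and the audit entry's matched categories.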
Batch Approval with Partial Outcomes
Given a nurse approver opens the approval queue with multiple pending items When the approver selects multiple messages and chooses Approve or Request Changes Then all selected items are processed in one action with per-item success/failure feedback And partial approvals are allowed without blocking others And comments are required when Request Changes is chosen and are stored with the audit trail
Scheduled Send with Approval Dependency and Timezone Respect
Given the sender schedules a message for a future datetime And all recipients for that message have an Approved status When the scheduled time is reached Then the message is sent via CarePulse notifications in each recipient’s preferred language And if any recipient remains unapproved at send time, only approved recipients are sent and the sender is notified of exceptions And scheduling respects each recipient’s timezone and configured quiet hours And the sender can cancel or reschedule prior to send with changes reflected immediately
Mobile-Friendly Preview: Carousel and Side-by-Side with Accessibility
Given the sender opens the preview on a mobile device When the viewport width is 414 px or less Then a carousel shows one recipient per slide with swipe navigation and sticky action controls And when the viewport width is 768 px or greater, up to three recipient panels render side-by-side with equalized heights And all interactive elements are keyboard and screen-reader accessible with labels for language, reading grade, and glossary highlights And initial paint occurs within 1.5 seconds on a mid-tier device and interactions respond within 100 ms
Delivery Analytics & Compliance Audit Logging
"As an operations manager, I want a complete audit trail and analytics for translated communications so that I can prove compliance and improve clarity over time."
Description

End-to-end logging of original text, translation versions, glossary rules applied, approver identity, timestamps, recipient preferences used, delivery status, read receipts, and audio playback events. Generates audit-ready, one-click reports linked to specific visits or incidents, integrating with CarePulse’s compliance reporting. Implements fine-grained access controls, retention policies, and export to CSV/PDF with redaction where required. Provides dashboards to track comprehension proxies (e.g., replay count) and surface languages or terms that need additional glossary refinement.

Acceptance Criteria
End-to-End Translation and Delivery Event Logging
Given a Language Lens message is created for a specific visit When it is translated, approved, sent, delivered, read, and optionally played as audio Then a single correlation_id links original_text, all translated_text versions, glossary_terms_applied, approver_id, recipient_preferences (language, reading_level, audio_preference), event timestamps (created_at, translated_at, approved_at, sent_at, delivered_at, read_at), audio playback events (play_count, total_play_duration_ms), delivery_status, and error details if any And the log is append-only and immutable And each event becomes queryable within 30 seconds of occurrence And required fields are non-null; otherwise an error_event is written with correlation_id and validation details
One-Click Audit Report for a Visit or Incident
Given a user with Compliance Officer role selects a visit_id or incident_id When they click Generate Audit Report Then a report renders within 10 seconds containing all associated messages and events with actor identities, timestamps, delivery outcomes, read receipts, audio playback counts/durations, recipient preferences used, and glossary terms applied And the report includes a translation versions diff for each message and a timeline view And the report is tagged with the related correlation_ids and a generation timestamp, and is stored for future retrieval And the report is accessible from the CarePulse compliance reporting module via a deep link to the same visit or incident
Fine-Grained Access Controls for Logs and Reports
Given role-based access control policies are configured When any user attempts to view or export delivery logs or audit reports Then access is granted only if the user’s role has permission for the requested scope (tenant/patient/visit) and data class (PHI vs operational) And unauthorized requests return HTTP 403 and are logged with user_id, timestamp, resource, and reason And caregivers can view logs for their assigned visits only with PHI fields redacted by default And compliance officers can view unredacted data; external auditors see redacted-by-default unless explicitly granted unredacted permission
Retention Policy Enforcement and Legal Hold
Given a tenant retention policy of 2190 days and at least one incident under legal hold When the nightly retention job runs Then delivery logs and stored reports older than 2190 days without legal hold are permanently purged within 24 hours, while items under legal hold are retained And a tamper-evident purge_receipt is written with job_id, counts_purged, counts_retained_on_hold, and id ranges, and is visible to Compliance Officer role And purged items no longer appear in queries, dashboards, or exports, and are not included in future backups
CSV/PDF Export with Role-Based Redaction
Given a user selects a date range and filters (language, visit_id, recipient_id) When they export delivery logs to CSV or PDF Then the CSV export completes within 60 seconds for up to 100,000 rows and includes columns: correlation_id, visit_id, recipient_id, original_text, translated_text, glossary_terms_applied, approver_id, created_at, translated_at, approved_at, sent_at, delivered_at, read_at, delivery_status, audio_play_count, audio_play_duration_ms, error_code And the PDF export completes within 5 seconds for up to 200 pages and includes headers, footers, and section summaries And PHI fields are redacted per role; redacted values display as "[REDACTED]" in both CSV and PDF And when a password is provided, the PDF is encrypted (AES-256) and requires the password to open And the number of exported records matches the on-screen total for the applied filters
Comprehension Dashboard Metrics and Glossary Insights
Given the analytics dashboard timeframe is set to the last 30 days When the dashboard loads or filters (language, caregiver_team, reading_level, visit_type) are changed Then charts and KPIs update within 3 seconds to show delivery rate, read rate, average time-to-read, audio play rate, average replays per message, top 10 languages by volume, and top 20 glossary terms by usage And messages with replay_rate > 2.0 or median time-to-read > 12 hours are flagged and ranked, with links to underlying messages and glossary entries And dashboard counts match underlying event logs within 1% and refresh at least every 15 minutes And users can export the visible dashboard data to CSV/PDF with the same filters applied

Next‑Step Timeline

Shows a simple, forward‑looking timeline of upcoming visits, key care goals, and expected milestones, plus brief “How you can help” tips. Sets expectations about what happens next (e.g., therapy tomorrow, new medication pickup) and flags any dependencies. Families feel prepared and engaged, which reduces last‑minute questions and missed tasks.

Requirements

Unified Timeline Orchestration
"As an operations manager, I want all upcoming care activities consolidated into one timeline so that I can spot gaps and ensure on‑time, compliant visits."
Description

Aggregate and normalize upcoming visits, care goals, milestones, tasks, medication events, and route data into a single forward‑looking timeline. Provide a canonical event model with start/end times, owners, dependencies, and status, fed by existing scheduling and care plan modules. Expose a timeline service API for mobile and web clients, with pagination, filtering (by patient, caregiver, timeframe), and real‑time updates via lightweight subscriptions. Ensure data freshness with incremental sync and conflict handling, and map events to compliance metadata so the timeline can feed audit reporting.

Acceptance Criteria
Canonical Event Model Normalization and Ordering
Given the timeline service ingests upcoming items from scheduling, care plan, medication, tasks, milestones, and routing modules for the next 30 days When a client requests the timeline for a specific patient Then the response contains only future events (startTime >= now) conforming to the canonical schema with fields: id, type, title, description, startTime (ISO 8601 UTC), endTime (ISO 8601 UTC), ownerId, ownerRole, patientId, source, status ∈ {scheduled, confirmed, in_progress, completed, canceled, blocked}, dependencyIds[], location, complianceTags[], lastUpdatedAt (ISO 8601 UTC), version (monotonic) And events are ordered by startTime ASC, then priority DESC (if present), then id ASC with stable ordering across pages And items with the same externalRef and overlapping time window are merged into a single event with a deterministic id and aggregated source metadata And 100% of events validate against the published JSON Schema; otherwise the API returns an error response indicating schemaValidationError and no partial results
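The deterministic ordering rule above (startTime ASC, then priority DESC when present, then id ASC) can be expressed as a single sort key. This is a sketch assuming priority is numeric when present; events without a priority sort after any event that has one:

```python
def timeline_sort_key(event: dict):
    """Sort key implementing: startTime ASC, priority DESC (if present), id ASC."""
    priority = event.get("priority")
    return (
        event["startTime"],  # ISO 8601 UTC strings sort chronologically
        # Negate so higher priority sorts first; missing priority sorts last.
        -(priority if priority is not None else float("-inf")),
        event["id"],
    )

def order_events(events):
    """Stable, deterministic ordering suitable for paginated responses."""
    return sorted(events, key=timeline_sort_key)
```

Because the key is total over (startTime, priority, id), the same ordering basis can back the stable cursors required by the pagination criterion.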
Timeline API Pagination and Stable Cursors
Given more than 50 future events match a query When the client calls GET /timeline with pageSize=50 Then the response contains exactly 50 events, ordered deterministically, and includes nextCursor and hasMore=true When the client calls GET /timeline with cursor={nextCursor} Then the response contains the next non-overlapping 50 events in the same order basis And the final page returns hasMore=false and nextCursor=null Given pageSize > 200 When the client requests the page Then the API caps pageSize to 200 and returns pageSizeApplied=200 Given events are created/updated between pages When the client paginates using provided cursors Then no events are duplicated or skipped (cursor stability by version ordering)
Filtering by Patient, Caregiver, and Timeframe
Given the client supplies patientId, caregiverId, startTime >= X, endTime <= Y When GET /timeline is called with these filters Then only events matching all provided filters are returned (AND semantics) And the time window is inclusive of boundaries [X, Y] and evaluated in UTC Given invalid filter values (e.g., endTime < startTime, malformed IDs) When the request is made Then the API returns HTTP 400 with details per invalid parameter Given no events match the filters When the request is made Then the API returns an empty array with hasMore=false
Real-time Subscription Updates and Resilience
Given a client subscribes to timeline updates for patientId and timeframe via a lightweight subscription When an event within the subscription scope is created, updated, or deleted Then a change message (eventId, changeType ∈ {upsert, delete}, version, payload for upsert) is delivered within 5 seconds p95 of persistence And messages are ordered by version and are idempotent; duplicates carry the same version and can be deduplicated client-side Given a transient disconnect When the client reconnects providing lastReceivedVersion Then the service delivers all missed changes >= lastReceivedVersion in order before resuming live updates And the connection provides heartbeats every 30 seconds; absence of heartbeats for 60 seconds triggers a reconnect hint message
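The idempotency and ordering rules above imply a small amount of client-side state: apply changes in version order and drop anything at or below the last applied version. A minimal subscriber sketch (message shape assumed from the criterion):

```python
class TimelineSubscriber:
    """Applies subscription messages idempotently, in version order."""

    def __init__(self):
        self.state = {}        # eventId -> latest payload
        self.last_version = 0  # highest version applied so far

    def on_message(self, msg: dict) -> bool:
        """Apply one change message; return False for duplicates/stale replays."""
        if msg["version"] <= self.last_version:
            return False  # duplicate carries the same version; safe to drop
        if msg["changeType"] == "upsert":
            self.state[msg["eventId"]] = msg["payload"]
        elif msg["changeType"] == "delete":
            self.state.pop(msg["eventId"], None)
        self.last_version = msg["version"]
        return True
```

On reconnect, `last_version` is what the client would send as `lastReceivedVersion` so the service can replay missed changes in order.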
Incremental Sync and Conflict Resolution
Given a client holds a lastSyncedVersion When it calls GET /timeline/changes?sinceVersion={lastSyncedVersion} Then the service returns all changes (upserts and deletions) in ascending version order with a nextVersionCursor Given concurrent edits to the same event from mobile and back office When both changes are processed Then conflicts are resolved deterministically: if sources differ, apply precedence scheduling > care_plan > medication > tasks > routing > sensor > mobile_ad_hoc; if same source, the newer lastUpdatedAt wins; ties break by higher version then lexicographic id And the winning state is persisted with conflictResolved=true and prior state captured in an audit record Given a client write loses a conflict When the write is rejected Then the API returns HTTP 409 with resolution metadata including winningSource, winningLastUpdatedAt, and clientShouldRefresh=true
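The precedence chain above resolves to a short deterministic function. This sketch assumes each competing edit carries source, lastUpdatedAt, version, and id; the final lexicographic tie-break (smaller id wins) is an interpretation, since the criterion does not say which direction:

```python
# Lower index = higher precedence, per the stated chain.
SOURCE_PRECEDENCE = ["scheduling", "care_plan", "medication", "tasks",
                     "routing", "sensor", "mobile_ad_hoc"]
RANK = {s: i for i, s in enumerate(SOURCE_PRECEDENCE)}

def resolve(a: dict, b: dict) -> dict:
    """Pick the winning edit between two concurrent versions of one event."""
    if a["source"] != b["source"]:
        return a if RANK[a["source"]] < RANK[b["source"]] else b
    if a["lastUpdatedAt"] != b["lastUpdatedAt"]:  # same source: newer wins
        return a if a["lastUpdatedAt"] > b["lastUpdatedAt"] else b
    if a["version"] != b["version"]:              # tie: higher version wins
        return a if a["version"] > b["version"] else b
    return a if a["id"] < b["id"] else b          # assumed: smaller id wins
```

The losing edit's prior state is what would be captured in the audit record, and the winner's source/timestamp populate the HTTP 409 resolution metadata.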
Dependency Flagging and Status Propagation
Given an event B depends on event A (dependencyIds includes A) When A is canceled or not completed by its endTime Then B status becomes blocked with dependencyReason populated When A completes successfully Then B status automatically transitions from blocked to scheduled (or its prior non-blocked status) within 60 seconds and dependencyReason is cleared Given a proposed dependency creates a cycle When the dependency is saved Then the API rejects the write with HTTP 400 and errorCode=cyclicDependency
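The cyclicDependency rejection above requires a reachability check before persisting a new edge. A sketch, assuming dependencies are stored as a map from event id to its prerequisite ids: adding "dependent depends on prerequisite" is cyclic exactly when the prerequisite already depends, transitively, on the dependent:

```python
def creates_cycle(deps: dict, dependent: str, prerequisite: str) -> bool:
    """Return True if adding 'dependent depends on prerequisite' closes a cycle.

    deps maps event id -> set of its prerequisite ids (dependencyIds).
    """
    stack, seen = [prerequisite], set()
    while stack:
        node = stack.pop()
        if node == dependent:
            return True   # prerequisite reaches dependent: the edge would cycle
        if node in seen:
            continue
        seen.add(node)
        stack.extend(deps.get(node, ()))
    return False
```

A write that trips this check would be rejected with HTTP 400 and errorCode=cyclicDependency before any state changes.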
Compliance Metadata Mapping for Audit Readiness
Given timeline events are produced When events are returned via the timeline API Then each event includes compliance metadata: complianceCodes[], requiresSignatures (boolean), payerId (if applicable), documentationArtifacts[], complianceComplete (boolean) Given a completed event that requires compliance artifacts When the event is marked completed Then complianceComplete=true only if all required documentationArtifacts are attached and valid; otherwise complianceComplete=false and missingArtifacts enumerated When the audit feed endpoint is requested for a timeframe Then 100% of completed events in that window are present with eventId, complianceCodes, completionTimestamp, and artifact references sufficient for one-click audit report generation
Care Goal & Milestone Management
"As a clinician, I want to set and track care goals with expected milestones so that the team and family understand what success looks like and when to expect it."
Description

Enable clinicians to define, edit, and track patient‑specific care goals and expected milestones with target dates, progress indicators, and evidence links (e.g., last PT assessment). Display milestones inline on the timeline, automatically adjust their status based on incoming documentation and sensor signals, and surface slippage risk when targets are approaching without progress. Support templates for common conditions (e.g., post‑op, CHF) and allow agency‑level customization.

Acceptance Criteria
Define, Edit, and Track Care Goals and Milestones
Given a patient record and edit permissions, When a clinician creates a care goal with title, description, target date, progress (0–100%), and at least one milestone, Then the goal saves successfully, is timestamped with author, and appears in the patient plan within 2 seconds. Given an existing goal, When the clinician edits any field and saves, Then changes persist, a prior version is retained in the audit log, and updated_at reflects the save within 2 seconds. Given a goal with multiple milestones and assigned weights, When any milestone progress changes, Then the goal’s aggregate progress recalculates per weights within 500 ms and displays rounded to the nearest 1%. Given validation rules, When required fields are missing or invalid (e.g., target date more than 365 days in the past), Then the form blocks save and shows field-specific error messages.
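The weighted recalculation above reduces to a weighted average rounded to the nearest 1%. A sketch, assuming weights are arbitrary non-negative numbers (they need not sum to 1):

```python
def goal_progress(milestones) -> int:
    """Aggregate goal progress from (progress_percent, weight) pairs.

    Returns an integer 0-100, rounded to the nearest 1% per the criterion.
    A goal with no milestones (or all-zero weights) reports 0%.
    """
    total_weight = sum(w for _, w in milestones)
    if total_weight == 0:
        return 0
    weighted = sum(p * w for p, w in milestones) / total_weight
    return round(weighted)
```

This is cheap enough to run synchronously on every milestone change, well inside the 500 ms recalculation budget.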
Inline Milestone Display on Next‑Step Timeline
Given upcoming milestones exist, When viewing the Next‑Step Timeline, Then milestones render as entries sorted by target date ascending and show title, target date, and a progress bar. Given a milestone due within 7 days, When displayed on the timeline, Then a "Due in X days" label appears and color coding is green (≥75%), amber (25–74%), or red (<25%) as configured. Given a milestone with evidence links, When the user taps the entry, Then a detail sheet opens within 1 second showing the latest evidence with timestamp and source. Given no milestones exist for the patient, When viewing the timeline, Then an empty state appears with guidance text and a button to add from templates.
Automatic Status Updates from Documentation and Sensors
Given a new clinical note is tagged to a milestone’s evidence criteria, When the note syncs to the patient record, Then the milestone progress updates per mapping rules within 5 minutes and the note appears as evidence with provenance "Documentation". Given sensor data crosses a configured threshold linked to a milestone, When the ingestion job completes, Then the milestone progress auto-updates within 5 minutes with provenance "Sensor" and includes the data timestamp. Given conflicting inputs (sensor regression and documentation completion), When both are present, Then priority rules apply (Clinician manual completion > Documentation > Sensor) and the decision is captured in the audit trail. Given an auto-update occurred, When viewing milestone change history, Then each entry shows source, timestamp, previous value, new value, and actor/system ID.
Slippage Risk Detection and Surfacing
Given a milestone with a target date, When time to target is ≤72 hours and progress <50%, Then a "Slippage Risk" badge appears on the timeline entry and in the clinician dashboard. Given a slippage risk is active, When progress becomes ≥75% or the target date is extended beyond 7 days, Then the risk flag clears within 10 minutes and a resolution note is added to the audit trail. Given risk is flagged and notifications are enabled, When the flag is first created, Then a single actionable alert is sent to the assigned clinician and coordinator and duplicates are suppressed for 24 hours. Given family view permissions, When a risk is flagged, Then the family-facing timeline shows a brief "How you can help" tip configured for that milestone without exposing PHI beyond role permissions.
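The flag/clear rule above is hysteretic: the condition that raises the flag (≤72 hours, <50% progress) differs from the conditions that clear it (≥75% progress, or target pushed beyond 7 days). A minimal sketch with those thresholds taken directly from the criterion:

```python
from datetime import datetime, timedelta

def slippage_state(now: datetime, target: datetime, progress: float,
                   currently_flagged: bool) -> bool:
    """Return True if the Slippage Risk flag should be active."""
    hours_left = (target - now) / timedelta(hours=1)
    if not currently_flagged:
        # Raise: <= 72 hours to target and progress below 50%.
        return hours_left <= 72 and progress < 50
    # Clear only when progress >= 75% or the target moves beyond 7 days out.
    return not (progress >= 75 or hours_left > 7 * 24)
```

Keeping raise and clear conditions separate prevents the flag from flapping when progress hovers between 50% and 75%.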
Condition Templates and Agency‑Level Customization
Given an admin user, When creating or editing a condition template with predefined goals, milestones, default target offsets, and weights, Then the template saves with a version number and is available for assignment within 2 minutes. Given a patient, When a template is applied, Then goals and milestones instantiate with target dates calculated from the configured anchor (e.g., admission date) and agency-specific defaults. Given templates are versioned, When a new template version is published, Then existing patient plans remain on their current version unless an admin explicitly migrates them with a confirmation step. Given a clinician personalizes a templated plan for a patient, When edits are saved, Then patient-level overrides are recorded without altering the base template and are visible in the audit log.
Evidence Links Attachment and Access Control
Given a milestone, When a clinician attaches evidence (document, photo, audio clip, or URL), Then the file uploads, passes virus scan, and is associated with the milestone with checksum and upload timestamp. Given role-based access controls, When a user without permission attempts to open an evidence link, Then access is denied with an explanatory message and the attempt is logged with user ID and time. Given an evidence link becomes invalid, When a 404 or access error is detected, Then the milestone shows a warning badge and the clinician is prompted to replace or remove the link. Given typical 4G connectivity, When opening an evidence link, Then the viewer loads or the download begins within 3 seconds 95% of the time.
Dependency Detection & Flags
"As a care coordinator, I want the timeline to flag unmet prerequisites so that I can resolve blockers before they cause missed or non‑compliant visits."
Description

Model and detect prerequisite relationships (e.g., prior auth, lab result, medication pickup) for timeline events. When prerequisites are unmet or at risk, flag the related visit or milestone with clear badges and callouts, and generate suggested remediation tasks. Integrate with tasking to assign owners and due times, and show dependency chains to reduce missed steps. Provide simple risk scoring based on due dates, completion status, and historical delays.

Acceptance Criteria
Flag Unmet Prior Authorization for Tomorrow’s PT Visit
Given a scheduled PT visit at 10:00 tomorrow with a prerequisite "Prior Authorization" that is not recorded as received, When the system runs dependency checks at save and every 15 minutes thereafter, Then the visit card on the Next‑Step Timeline shows a red "Dependency" badge and a callout listing "Prior Authorization — Unmet". And the callout is clickable/tappable to open dependency details showing status "Unmet", source "Payer", and due time "12 hours before visit" (or the org‑configured buffer if set). And the system suggests a remediation task "Obtain prior auth for [Patient]" with default owner "Scheduler" and due time set to the earlier of the dependency due or the visit start minus the configured buffer (default 12h). And accepting the suggestion creates the task, links it to the visit and dependency, and displays the task link on the visit card.
Risk Score and Visual Priority Scaling
Given any visit/milestone with one or more dependencies, When the system calculates risk, Then it outputs a risk score between 0 and 100 using weighted inputs: due‑date proximity (50%), completion status (30%), historical delay rate for the dependency type (20%). And mapping to visual state: 0–33 Green (Low), 34–66 Amber (Medium), 67–100 Red (High), displayed as a ring and badge on the timeline card. And the risk score recalculates immediately on dependency status change and at least every 15 minutes otherwise, with changes reflected in the UI within 60 seconds. And the calculated score and its input breakdown are persisted to the activity log with timestamp and user/system source.
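The weighting above can be sketched as a scoring function plus the band mapping. Each input is normalized to 0–1 where 1 is riskiest; the one-week proximity horizon (`horizon_hours=168`) is an assumption, since the criterion does not define how due-date proximity is normalized:

```python
def risk_score(hours_until_due: float, completion_fraction: float,
               historical_delay_rate: float, horizon_hours: float = 168) -> int:
    """Weighted risk score 0-100: proximity 50%, completion 30%, history 20%."""
    # Closer due dates are riskier; anything beyond the horizon scores 0 proximity.
    proximity = max(0.0, 1.0 - min(hours_until_due, horizon_hours) / horizon_hours)
    incompleteness = 1.0 - completion_fraction
    score = 100 * (0.5 * proximity + 0.3 * incompleteness
                   + 0.2 * historical_delay_rate)
    return round(score)

def risk_band(score: int) -> str:
    """Map a score to its visual state per the stated bands."""
    return "green" if score <= 33 else ("amber" if score <= 66 else "red")
```

The score and its three input values are what would be persisted to the activity log as the "input breakdown."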
Show Multi‑Level Dependency Chain for Lab‑Dependent Medication Start
Given a medication start milestone that depends on "Lab Result: INR" which depends on "Blood Draw Appointment", When the user opens "View dependency chain", Then a tree view displays up to three levels with nodes labeled and status (Met/Unmet/At Risk/Blocked). And unmet ancestor nodes are visually indicated as blocking descendants, with a "Blocked" tag on the medication start node. And nodes are expandable/collapsible, and tapping a node opens its details panel with owner, due time, and linked tasks. And the chain view supports keyboard navigation and screen readers with accessible labels (ARIA) and is usable on mobile and desktop.
Generate and Assign Remediation Tasks from Dependency Flags
Given any unmet dependency, When the user clicks "Create remediation task" on the dependency callout, Then a task creation form opens prefilled with title, linked patient, linked visit/milestone, dependency type, and suggested owner from responsibility mapping. And the due time defaults to the dependency due date/time; if none exists, it defaults to visit start minus the configured buffer (default 12h). And saving creates the task in the Tasking module, assigns the owner, sends a notification to the owner, and displays the task link on both the visit card and the dependency details. And the visit remains flagged until the dependency status is updated to "Met"; completing the task alone does not clear the flag unless it transitions the dependency status to "Met".
Auto‑Clear Flags and Recalculate on Lab Result Receipt
Given an unmet "Lab Result: INR" dependency for a visit tomorrow, When an inbound HL7/FHIR message sets the lab result status to "Final" with value present, Then the dependency status updates to "Met", the red badge is removed, a green check appears, and the risk score is recalculated within 2 minutes. And an activity log entry records the change with source "Lab Integration", timestamp, old/new status, and affected entities. And subscribers (assigned owner and visit team members) receive a single consolidated notification within 5 minutes indicating the dependency is met.
Handle Unknown Status and Data Freshness for Imminent Dependencies
Given a dependency with unknown verification status and due within the next 24 hours, When the system performs risk assessment, Then the dependency is marked "At Risk" with an amber badge and a suggested "Verify [dependency]" task is provided. And the timeline card displays a "Data freshness" indicator if the last successful sync time for the dependency source exceeds 15 minutes, with a tooltip showing the last sync timestamp. And if the device is offline, the timeline shows cached dependency states with an "Offline — may be out of date" banner, and queues any task creations to sync when connectivity returns.
Contextual "How You Can Help" Tips Engine
"As a family member, I want clear, concise tips about what I can do next so that I feel prepared and can support the care plan effectively."
Description

Generate brief, role‑appropriate tips for each upcoming event using rules and templates driven by the patient’s plan of care, goals, and recent activity (e.g., "Have the new inhaler ready for tomorrow’s visit"). Allow caregivers to edit or pin tips, and localize for multiple languages at a sixth‑grade reading level. Ensure tips exclude sensitive PHI when shared externally and link to simple checklists where applicable.

Acceptance Criteria
Role-Appropriate Tip Generation for Upcoming Events
Given an upcoming event with a defined plan of care, goals, and recent activity When the Next-Step Timeline loads Then 1-3 tips are generated per event using rules/templates, prioritized by rule weight, and no tip exceeds 160 characters. Given different user roles (field caregiver, family member, operations manager) When viewing the same event Then each role sees tips from its corresponding template bank and excluded templates are not rendered. Given multiple applicable rule triggers for an event When tips are generated Then only the top 3 highest-priority unique tips are returned with no duplicate intents. Given a generation request on a mid-range mobile device When executed Then median generation latency per event is <= 500 ms and p95 is <= 1000 ms.
Localization and Sixth-Grade Reading Level
Given a user's language preference (e.g., en-US, es-ES) When tips are generated Then the tips appear in that language with locale-appropriate dates/times, and if the language is unsupported, the tips fall back to English with a visible indicator. Given any generated or edited tip in any supported language When readability is evaluated Then the Flesch-Kincaid Grade Level is <= 6.0 and sentences avoid unexplained acronyms. Given a caregiver edits a tip in the primary language When saved Then localized versions are regenerated within 5 seconds and maintain a <= 6.0 grade level.
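A readability gate like the one above could use the standard Flesch-Kincaid grade formula. This sketch uses a vowel-group syllable heuristic, which is a common approximation rather than a linguistic gold standard (and is English-only; other supported languages would need their own readability measures):

```python
import re

def _syllables(word: str) -> int:
    """Approximate syllables as contiguous vowel groups (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

def passes_reading_level(text: str, max_grade: float = 6.0) -> bool:
    return fk_grade(text) <= max_grade
```

In practice a tip that fails the gate would be routed back through template simplification rather than shown as-is.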
Caregiver Edit, Versioning, and Pinning
Given a signed-in caregiver When they edit a tip and tap Save Then the change persists, an "edited by <first initial>" label and timestamp display, and the previous version is stored in history. Given a non-caregiver (e.g., family member) When attempting to edit or pin a tip Then the action is blocked and a read-only state is shown. Given a caregiver pins a tip for an event When the timeline is refreshed Then that tip appears first for that event and role until unpinned, with a maximum of 3 pinned tips per event enforced. Given offline mode When a caregiver edits or pins a tip Then the action queues locally and syncs within 30 seconds of reconnecting without data loss.
External Sharing Without Sensitive PHI
Given a tip is rendered for an external audience (e.g., family portal or share link) When the content is prepared Then direct identifiers (last name, full name, DOB, MRN, full address, phone, email, insurance/member IDs, diagnosis codes, exact medication dosages) are removed or generalized, and no unredacted identifiers appear. Given PHI scanning is run on externally shared tips When evaluated Then 0 high-severity PHI entities are detected and the share is blocked with an error if any are found. Given a caregiver includes PHI in a custom edit When the tip is shared externally Then PHI is automatically redacted before display and an audit log entry of the redaction is recorded.
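An automatic redaction pass like the one above might run pattern detectors over the tip text before external display. The patterns below are simplified stand-ins for a real PHI detector (which would also handle names, addresses, and IDs via NER and dictionaries), not a compliance guarantee:

```python
import re

# Illustrative detectors only; a production system needs far broader coverage.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[REDACTED PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED EMAIL]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[REDACTED DATE]"),
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE), "[REDACTED MRN]"),
]

def redact_for_external_share(text: str):
    """Return (redacted_text, hit_count); a nonzero count should be audit-logged."""
    hits = 0
    for pattern, replacement in PHI_PATTERNS:
        text, n = pattern.subn(replacement, text)
        hits += n
    return text, hits
```

Per the criterion, a nonzero hit count on a share attempt would both trigger the audit log entry and, for high-severity entities, block the share entirely.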
Checklist Linking from Tips
Given a tip maps to an existing checklist (e.g., "prepare home exercise area") When the tip is displayed Then a tappable Open Checklist link is present and opens the correct checklist within 1 second. Given a template has a checklist mapping When tips are generated across test data Then >= 90% of applicable tips include a valid checklist link with no broken links. Given a checklist is completed When returning to the timeline Then the originating tip shows a completion state within 15 seconds.
Dependency-Aware Tips and Status Updates
Given an upcoming event with a prerequisite dependency (e.g., pick up new inhaler before tomorrow 8 a.m.) When tips are generated Then one tip explicitly states the dependency and the due-by time. Given the dependency status is marked complete in the system When the timeline refreshes or receives a push update Then the related tip is updated to reflect completion or is removed within 60 seconds. Given a dependency is at risk (due time within 12 hours and unmet) When tips are generated Then the tip includes an at-risk callout and an action appropriate to the user's role (e.g., Call pharmacy).
Role‑Based Views & Family Sharing Controls
"As an agency administrator, I want role‑based timeline views and secure family sharing so that each stakeholder sees what they need without exposing unnecessary PHI."
Description

Deliver timeline views tailored for operations, field caregivers, clinicians, and family members, each with scoped data visibility. Implement HIPAA‑compliant sharing via consented invites or time‑limited secure links with PHI redaction rules. Provide per‑role filters (e.g., my patients, my shifts), and allow agencies to configure which event fields appear in family view. Log access and sharing events for audit.

Acceptance Criteria
Operations Role—Agency-Wide Timeline With Scoped Data
Given an authenticated Operations user with agency scope When they open the Next‑Step Timeline Then upcoming visits, milestones, and dependencies for agency patients are displayed according to the Operations data policy, and disallowed fields are not rendered. Given an Operations user When they attempt to access a timeline field excluded by policy Then the field value is replaced with a “Restricted” placeholder and an access_denied audit event is recorded. Given per‑role filters are enabled When an Operations user opens the filter menu Then only filters configured for the Operations role are available and caregiver‑only filters are not shown.
Caregiver Role—“My Patients/My Shifts” Timeline Scope
- Given a Caregiver assigned to patients A and B and an org horizon of 14 days, when they open the Next‑Step Timeline, then only events for patients A and B within the next 14 days are displayed.
- Given a Caregiver on scheduled shifts, when the “My Shifts” filter is applied, then only events occurring during the caregiver’s scheduled shifts are shown.
- Given a Caregiver, when they search for or navigate to an unassigned patient’s timeline, then zero results are returned, no PHI is displayed, and an access_denied audit event is recorded.
Clinician Role—Clinical Goals and Dependencies Visibility
- Given a Clinician assigned to a patient’s care team, when viewing that patient’s Next‑Step Timeline, then clinical goal summaries, therapy milestones, medication schedule changes, and care‑plan dependencies are visible, while billing/insurance fields are hidden.
- Given a Clinician not assigned to a patient, when attempting to view that patient’s timeline, then access is denied and an access_denied audit event is recorded.
- Given clinician role policy defines which note fields are permitted, when opening a visit card, then only permitted note fields are displayed; disallowed fields show a redacted placeholder.
Family Sharing—Consented Invite With Redacted Timeline
- Given documented patient consent and an invite sent to family email F, when F accepts the invite and verifies their email, then F can sign in and view the Family timeline for that patient.
- Given an agency’s family‑view field configuration, when F views the timeline, then only whitelisted fields are shown and non‑whitelisted fields, including free‑text notes and voice clips, are hidden or masked.
- Given consent is revoked, when revocation is saved, then the family member’s access is disabled within 5 minutes, subsequent requests return 403, and the event is audit logged.
Agency Settings—Configure Family-Visible Timeline Fields
- Given an Admin user, when opening Family View Settings, then a list of timeline fields is presented with per‑field visibility toggles and descriptions.
- Given an Admin changes field visibility and selects Publish, then the new configuration version is saved, audit logged with actor and timestamp, and becomes effective for all family views within 5 minutes.
- Given an Admin selects Preview, when viewing the sample timeline, then the pending configuration is applied only to the preview and does not affect live views.
Secure Link Sharing—Time-Limited, Redacted Access Without Login
- Given an authorized staff user, when generating a secure link for patient P with an expiry of 72 hours, then a unique, single‑patient URL is created and cannot be used after the expiry timestamp.
- Given the secure link is accessed before expiry, when the page loads, then the Family timeline is displayed using the current family‑view redaction configuration and no login is required.
- Given a secure link is revoked prior to expiry, when a subsequent request uses the link, then a 410 Gone message is shown and the attempt is audit logged.
Audit Trail—Access and Sharing Event Logging
- Given any timeline is viewed by a user or via a secure link, when content is returned, then an audit event is recorded with fields: timestamp (UTC), actor (user ID or link token), role, patient ID, action, result, IP, and user agent.
- Given a share is created, updated, or revoked, when the operation completes, then an audit event is recorded including share type (invite or secure link) and expiry timestamp if applicable.
- Given an Admin user, when filtering audit logs by date range, patient, actor role, or action and exporting, then matching records are returned and the CSV export includes all listed fields and applied filters.
Smart Reminders & Digest Notifications
"As a caregiver, I want timely reminders and concise digests so that I don’t miss preparations and can manage my day efficiently."
Description

Send proactive reminders and daily/weekly digests summarizing upcoming visits, milestones, and flagged dependencies. Support push, SMS, and email with per‑user preferences, quiet hours, and escalation rules for high‑risk items. Include deep links back to the timeline and one‑tap completion for simple prep tasks, with delivery and engagement tracking to measure effectiveness.

Acceptance Criteria
Per‑User Channel Preferences & Quiet Hours Enforcement
- Given a user has set channel preferences (push, SMS, email) and a time zone, when any reminder or digest is scheduled, then only enabled channels are used and scheduling honors the user’s time zone. - Given quiet hours are configured (e.g., 21:00–07:00), when a non‑high‑risk notification falls within quiet hours, then delivery is deferred to the next allowed window. - Given quiet hours are configured, when a high‑risk notification is scheduled during quiet hours, then delivery occurs immediately per policy and is logged as a quiet‑hours override. - Given a user updates preferences or quiet hours, when the next notification is sent, then the updated settings are applied without requiring the user to sign out/in.
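The quiet-hours rules above reduce to a small decision function. A minimal sketch in Python, assuming quiet hours are expressed as local start/end times that may wrap past midnight, and that high-risk items bypass quiet hours but are logged as overrides (all names here are illustrative, not CarePulse's actual API):

```python
from datetime import time

def in_quiet_hours(local: time, start: time, end: time) -> bool:
    """True if `local` falls inside the quiet window; handles windows
    that wrap past midnight (e.g., 21:00-07:00)."""
    if start <= end:                       # same-day window, e.g. 13:00-15:00
        return start <= local < end
    return local >= start or local < end   # window wraps midnight

def delivery_decision(local: time, start: time, end: time, high_risk: bool) -> str:
    """Deliver now, or defer to the next allowed window; high-risk
    notifications break through and are flagged for the audit log."""
    if not in_quiet_hours(local, start, end):
        return "deliver"
    return "deliver_with_override_log" if high_risk else "defer_to_window_end"
```

For example, a non-high-risk reminder at 22:30 against a 21:00–07:00 window would be deferred, while the same event flagged high-risk is delivered immediately with a logged override.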
Daily & Weekly Digest Generation and Content Accuracy
- Given a user has selected daily or weekly digests, when the digest window triggers, then exactly one digest per frequency and time zone is generated. - Given upcoming visits, milestones, and flagged dependencies exist for the period, when the digest is sent, then it lists items with date/time, responsible party, and status with no duplicates. - Given there are zero qualifying items for the period, when the digest would send, then it is skipped or replaced by a minimal “No upcoming items” message per configuration. - Given the digest contains deep links, when a user taps a link, then they land on the corresponding timeline section after authentication (with redirect back to the target).
Proactive Visit Reminders with One‑Tap Prep Completion
- Given a scheduled visit with prep tasks, when reminder offsets are reached (e.g., 24h and 2h before), then the user receives a reminder containing a one‑tap “Mark Prep Done” action and a deep link to the visit. - Given the user taps “Mark Prep Done,” when the action is processed, then the task is marked complete within 5 seconds and subsequent reminders for that task are canceled. - Given the user performs the action while offline, when connectivity is restored within 24 hours, then the completion syncs once and duplicate reminders are not sent. - Given the user opens a reminder link while logged out, when they authenticate, then they are redirected back to the specific visit detail without losing context.
High‑Risk Dependency Escalation
- Given an item is labeled high‑risk with an escalation policy (channels, wait times, max attempts), when the initial notification is unacknowledged for the wait time, then the next channel in the policy is attempted. - Given a notification is delivered, when the user acknowledges via any defined action (open + click, reply YES, or mark complete), then escalation halts and the acknowledgment is recorded. - Given max attempts are reached without acknowledgment, when the policy defines a secondary contact, then a final escalation is sent to that contact and the incident is flagged. - Given quiet hours are active, when the item is high‑risk, then escalation can bypass quiet hours per policy and the override is auditable.
Delivery and Engagement Tracking
- Given any notification is sent, when provider callbacks are received, then status transitions (queued, delivered, failed, bounced) are recorded with timestamps per channel. - Given an open, click, action, or unsubscribe occurs, when the event is processed, then engagement is captured with user ID, channel, device (if available), and timestamp. - Given multiple events for the same notification, when they are ingested, then duplicates are deduplicated and analytics reflect the latest state within 2 minutes. - Given an authorized user queries reports, when filters (date range, channel, segment) are applied, then send volume, delivery rate, open rate, CTR, and completion rate are returned accurately.
Opt‑In, Opt‑Out, and Compliance Controls
- Given SMS and email channels, when a user is added, then explicit consent status is captured and only opted‑in channels are used for messaging. - Given a user replies STOP to SMS or clicks email unsubscribe, when the event is received, then the channel is immediately disabled for that user and a confirmation record is stored. - Given compliance requirements, when messages are sent, then required headers/footers and opt‑out instructions are included per channel and jurisdiction. - Given a previously opted‑out user requests re‑opt‑in, when they confirm via supported flow, then the channel is re‑enabled with a retained audit trail.
Channel Fallbacks and Retry Handling
- Given a notification attempt fails with a transient error, when retry policy applies, then the system retries up to the configured limit with exponential backoff for that channel. - Given a push token is invalid or an email hard bounces, when the failure is detected, then the channel is disabled for that user until refreshed and the next preferred channel is attempted automatically. - Given all channels fail or the retry cap is reached, when the incident is finalized, then the failure is logged, surfaced in the admin console, and no further retries occur. - Given retries are in flight, when the notification is eventually delivered, then only one user‑visible copy is delivered and earlier duplicates are canceled.
Offline Timeline & Conflict‑Resistant Sync
"As a field caregiver, I want the timeline to work reliably without connectivity so that I can stay on track during visits and travel."
Description

Provide offline access to the next 7–14 days of timeline data on mobile with local caching, optimistic updates, and background sync. Implement record‑level delta synchronization and conflict resolution policies (e.g., last‑writer‑wins with audit trail, or field‑level merge for notes). Clearly indicate offline state and pending changes, ensuring critical actions (acknowledging a tip, marking a task done) are queued and reliably synced.

Acceptance Criteria
Access Timeline Offline (Next 7–14 Days)
- Given the device completed a successful sync within the last 24 hours, when the device is offline and the user opens the Next‑Step Timeline, then the timeline displays events, goals, milestones, tips, and dependency flags for the next 14 calendar days based on the last sync without making network calls, and a visible "Last updated" timestamp is shown.
- Given the cache does not contain a full 14 days, when the user is offline, then the app displays at least the next 7 days of content and labels items as "May be outdated".
- Given no prior successful sync exists, when the user is offline and opens the timeline, then the app displays a non-blocking message "Timeline unavailable offline — connect to load" and remains stable with no crashes.
Offline State & Pending Changes Indicators
- Given the device loses network connectivity, when the user is on the timeline screen, then an offline banner and an offline icon appear within 1 second of detection.
- Given one or more unsynced actions exist (e.g., task done, tip acknowledged), when the user views the app header, then a "Pending (N)" badge appears with the exact count of queued actions, and tapping the badge reveals a list showing action type, affected item, and queued timestamp.
- Given connectivity is restored and all pending actions sync successfully, when sync finishes, then the offline banner disappears and the pending badge clears within 5 seconds.
Mark Task Done Offline
- Given the device is offline and a task is visible on the Next‑Step Timeline, when the caregiver taps "Mark done", then the task status updates to Done in the UI within 200 ms and is labeled "Pending sync".
- Given the app is force‑closed while still offline, when the app is relaunched, then the task remains marked Done with a "Pending sync" label (queued action persists).
- Given connectivity is restored, when background sync runs, then the queued task completion is sent to the server and confirmed within 15 seconds, the "Pending sync" label is removed, and a success toast appears.
- Given the server rejects the update (e.g., 409 or validation error), when sync completes, then the UI reverts the task to its previous state, an error message is shown with a "Retry" action, and the failure is logged to the audit trail.
Acknowledge Tip Offline
- Given the device is offline and a "How you can help" tip is visible, when the caregiver taps "Acknowledge", then the tip shows as Acknowledged in the UI within 200 ms and is labeled "Pending sync".
- Given the app is backgrounded or killed and later reopened while still offline, when the user returns to the timeline, then the tip remains Acknowledged with a "Pending sync" label (queued action persists).
- Given connectivity is restored, when sync runs, then the acknowledgement is confirmed on the server within 15 seconds and the pending label is removed.
- Given the acknowledgement conflicts with a server change, when conflict resolution applies, then the client reconciles per policy and displays the final state with a subtle "Updated after sync" note.
Background Sync on Connectivity Restore
- Given at least one pending action exists, when connectivity changes from offline to online, then background sync starts within 5 seconds and attempts to process all queued actions, and failures retry with exponential backoff up to 24 hours or until success.
- Given no pending actions exist and the cache is older than 60 minutes, when connectivity is available and the app is foregrounded or in an allowed background state, then a delta refresh runs to update the next 7–14 days without blocking user interaction.
- Given the user taps "Sync now", when connectivity is available, then a manual sync begins immediately and supersedes the next scheduled background sync.
Record‑Level Delta Synchronization
- Given the client holds lastSyncToken=T1 and 5 records changed since T1, when sync runs, then the client requests deltas since T1 and receives exactly those 5 changed records, and unchanged records are not transferred or rewritten locally.
- Given no changes exist since lastSyncToken, when sync runs, then the server returns an empty delta set (or 204), the client performs no local mutations, and sync is marked up‑to‑date.
- Given 500+ timeline records exist and <=10 changed since last sync, when delta sync runs over LTE, then the payload size is proportional to the 10 changes (no full list) and sync completes within 3 seconds.
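Record-level delta sync boils down to: the client sends its last sync token, the server returns only records whose version exceeds that token plus a new token, and the client upserts just those records. A minimal sketch with illustrative record shapes (the real service would page results and use opaque tokens):

```python
def server_delta(records: dict, last_token: int):
    """Return only records whose version exceeds the client's token,
    plus the new token (here simply the highest version seen)."""
    changed = {rid: r for rid, r in records.items() if r["version"] > last_token}
    new_token = max((r["version"] for r in records.values()), default=last_token)
    return changed, new_token

def apply_delta(local: dict, changed: dict) -> dict:
    """Upsert only the changed records; unchanged local records are untouched."""
    merged = dict(local)
    merged.update(changed)
    return merged
```

With no changes since the token, `server_delta` returns an empty dict and the client performs no local mutations, matching the second criterion above.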
Conflict Resolution & Audit Trail
- Given two users edit the same task status within 2 minutes, when both updates reach the server, then last‑writer‑wins is applied to the status field based on server timestamp precision (>= milliseconds), and the losing change is recorded in the audit trail with user ID, timestamp, original value, and new value.
- Given two users edit different fields (e.g., due time and assignee) on the same record, when sync occurs, then both changes are preserved (field‑level merge).
- Given two users edit the notes field concurrently, when sync occurs, then field‑level merge concatenates both notes with author, timestamp, and separator in chronological order, and the client shows a subtle "Merged after sync" indicator on the note.
- Given any conflict is auto‑resolved, when the user opens the item details, then an audit entry is accessible showing resolution type (LWW or merge), involved users, timestamps, and before/after values.
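The two resolution policies above can be sketched side by side: last-writer-wins on server timestamp for scalar fields like status (losing value goes to the audit trail), and chronological concatenation for concurrent note edits. A hedged illustration with hypothetical field names; timestamps here are plain integers standing in for millisecond-precision server time:

```python
def resolve_status(edit_a: dict, edit_b: dict):
    """Last-writer-wins on server timestamp; the losing edit is captured
    in an audit record with before/after values."""
    winner, loser = (edit_a, edit_b) if edit_a["ts"] >= edit_b["ts"] else (edit_b, edit_a)
    audit = {"resolution": "LWW", "lost_user": loser["user"],
             "lost_value": loser["value"], "won_value": winner["value"]}
    return winner["value"], audit

def merge_notes(edit_a: dict, edit_b: dict) -> str:
    """Field-level merge: concatenate both notes in chronological order,
    each tagged with author and timestamp, joined by a separator."""
    parts = sorted([edit_a, edit_b], key=lambda e: e["ts"])
    return "\n---\n".join(f'[{e["user"]} @ {e["ts"]}] {e["value"]}' for e in parts)
```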

Calm Controls

Per‑contact notification settings that respect quiet hours, allow topic filters (meds, therapy, vitals), and choose cadence (daily digest, weekly roll‑up, urgent‑only). Smart bundling prevents alert overload while ensuring critical items still break through. Families receive the right information at the right time, creating trust without constant pings.

Requirements

Per-Contact Notification Preferences
"As a family member, I want to customize how and when I receive updates for my loved one so that I only see what matters without constant interruptions."
Description

Provide a unified mobile-first and web settings experience to configure notification preferences at the individual contact level (e.g., each family member or caregiver) across topics, cadence, quiet hours, and delivery channels. Include secure backend APIs to read/write preferences, role-based access (family, caregiver, ops admin), sensible defaults, and bulk-apply templates at the agency level. Preferences must sync in real time across devices, respect user time zones, and be resilient offline with queued updates. Ensure accessibility (WCAG AA), localization, and a clear preview of what a recipient will receive given current settings.

Acceptance Criteria
Per-Contact Topic Filters and Delivery Channels
- Given a contact with available topics {meds, therapy, vitals} and channels {push, SMS, email}, when a user selects meds+vitals and channels push+email for that contact, then only meds and vitals events generate notifications via push and email; therapy is excluded; SMS is not used. - Given push is disabled or the device lacks a valid push token, when a notification is sent for that contact, then the system falls back to the next enabled channel in the user’s specified order and records the fallback in the delivery log. - Given preferences are saved, when GET /v1/contacts/{id}/notification-preferences is called, then the API returns the exact saved topics, channel order, cadence, quiet hours, and version/ETag with p95 latency ≤ 500 ms.
Quiet Hours with Urgent Breakthrough and Time-Zone Respect
- Given quiet hours set to 21:00–07:00 in the recipient’s time zone, when non-urgent events occur during quiet hours, then they are suppressed and included in the next digest/roll-up; no real-time delivery occurs. - Given an event marked urgent occurs during quiet hours, when it is generated, then it is delivered immediately via the highest-priority enabled channel and is not bundled or delayed. - Given a DST transition in the recipient’s locale, when quiet hours span the transition, then suppression applies to the correct local times without duplicate or missed delivery; preview reflects the adjusted times.
Notification Cadence Selection and Bundling Rules
- Given cadence = Daily Digest at 18:00 local, when multiple non-urgent events occur, then exactly one digest is delivered at 18:00 summarizing events since the prior digest; if no events occurred, no digest is sent. - Given cadence = Weekly Roll-up on Friday at 17:00 local, when non-urgent events occur during the week, then one roll-up is delivered at the scheduled time; if the week has no events, no roll-up is sent. - Given cadence = Urgent-Only, when non-urgent events occur, then no non-urgent notifications are delivered by any channel; only urgent events are delivered immediately. - Given a digest would include more than 50 items, when it is generated, then the digest groups by topic with counts and includes at most 50 items plus a link to view more in the app.
Real-Time Cross-Device Sync and Offline Queued Updates
- Given the same account is signed in on mobile and web, when a preference is changed on one device, then the other device reflects the change within 5 seconds without manual refresh. - Given the device is offline, when the user changes any preference, then the change is queued locally with version and timestamp, persists across app restarts, and syncs within 10 seconds after connectivity resumes; the final server state matches the user input. - Given concurrent edits from two devices to the same field, when the server detects a version conflict, then it responds with 409; the client fetches the latest, reapplies the user’s pending change, and resubmits so that the resolved state is deterministic (last-write-wins by version) and both devices converge within 5 seconds.
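The deterministic conflict flow in the last bullet (409 on stale version, refetch, reapply the pending change, resubmit) is classic optimistic concurrency. A toy in-memory server stands in for the preferences API below; all names are illustrative:

```python
class PrefServer:
    """Toy preferences store with optimistic concurrency on `version`."""
    def __init__(self):
        self.version, self.prefs = 0, {}

    def put(self, prefs: dict, base_version: int):
        if base_version != self.version:
            return 409, self.version, dict(self.prefs)  # stale write: conflict
        self.version += 1
        self.prefs = dict(prefs)
        return 200, self.version, dict(self.prefs)

def client_save(server, local_prefs: dict, known_version: int, change: dict):
    """Optimistic write; on 409, take the server's latest state, reapply the
    user's pending change on top of it, and resubmit."""
    status, ver, latest = server.put({**local_prefs, **change}, known_version)
    if status == 409:
        status, ver, latest = server.put({**latest, **change}, ver)
    return status, ver, latest
```

Because the pending change is reapplied on top of the fetched state, both devices converge on the same final preferences, as the criterion requires.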
Role-Based Access Control for Reading/Writing Preferences
- Given role = Family Member, when accessing preferences via UI or API, then the user can read/write only their own contact’s preferences; attempts to access others are denied with 403 and no data leakage. - Given role = Caregiver, when accessing preferences, then the user can read/write preferences only for assigned clients; attempts outside assignment are denied with 403. - Given role = Ops Admin, when accessing preferences, then the user can read/write any contact within their agency and perform bulk operations; cross-agency access is denied with 403. - Given any preference write, when it succeeds, then an audit record is stored with actor, agency, contact, changed fields (before/after), timestamp, and source device; audit records are retrievable by Ops Admins; all requests use TLS 1.2+.
Agency Templates, Bulk Apply, and Sensible Defaults
- Given an Ops Admin creates or updates a template with topics, cadence, quiet hours, and channels, when the template is saved, then it passes validation, is versioned, and is selectable for use within 10 seconds. - Given an agency default template is set, when a new contact is created, then the contact inherits the template preferences; if no agency default exists, system defaults are applied: quiet hours 21:00–07:00 local, non-urgent daily digest at 18:00, urgent break-through enabled. - Given an Ops Admin bulk-applies a template to 100 contacts, when the job is submitted, then all targeted contacts are updated within 2 minutes; only fields defined in the template overwrite existing values; the job report includes per-contact success/failure with reasons.
Accessibility, Localization, and Preview Accuracy
- Given a user relying on keyboard and screen reader, when using the settings UI on mobile and web, then all interactive elements are reachable in logical order, have accessible names and ARIA roles/states, and meet WCAG 2.2 AA contrast and focus-visible criteria. - Given the app language is switched among English, Spanish, and French, when viewing settings and previews, then all labels, help text, dates/times, and number formats are localized; time zone names and quiet-hour ranges display in the selected language. - Given a specific contact’s current settings, when viewing the Preview, then it accurately shows which notifications will be delivered in the next 24 hours, the next digest/roll-up time, which items will be suppressed by quiet hours, and which would break through as urgent; pressing “Send Test” delivers a non-PHI test message via each selected channel within 60 seconds.
Quiet Hours with Critical Override
"As a caregiver, I want non-urgent notifications to pause during a family's quiet hours while critical alerts still break through so that I respect their preferences without risking safety."
Description

Enable per-contact quiet hours with flexible schedules (daily windows and exceptions), automatic time zone handling, temporary Do Not Disturb timers, and granular overrides for urgent/critical events. Non-urgent alerts are deferred to the next allowed window; critical alerts break through with distinct presentation and optional confirmation by the sender. Include escalation rules (e.g., if not acknowledged within X minutes, retry or switch channel), and audit all overrides for compliance. Provide a simple UI to test whether a sample alert would deliver now or be deferred.

Acceptance Criteria
Defer Non-Urgent Alerts During Quiet Hours (Per-Contact, Time-Zone Aware)
- Given a contact has quiet hours configured as daily windows with optional weekday/date exceptions and an IANA time zone stored, and the contact’s current local time falls within a quiet-hour window without an active exception, when a non-urgent alert (urgency=normal) is generated for that contact, then the alert is not delivered immediately; the system computes the next allowed delivery time as the start of the next non-quiet window in the contact’s time zone; the alert is scheduled for delivery at that computed time and stored in UTC as scheduled_delivery_at; and an audit entry is written with reason=quiet_hours_deferred including quiet_window, contact_time_zone, created_at, scheduled_delivery_at, and alert_id.
- Given the same contact and the current local time falls within a defined exception window, when a non-urgent alert is generated, then the alert is delivered immediately and an audit entry is written with reason=exception_allows_delivery including alert_id.
Deliver Critical Alerts During Quiet Hours With Distinct Presentation
- Given a contact has quiet hours and/or an active DND timer, and the organization setting require_critical_confirmation is enabled, when a sender marks an alert as critical and confirms the override, then the alert bypasses quiet hours/DND and is delivered immediately; the notification uses a distinct presentation (critical label, unique sound/vibration pattern, persistent until acknowledged); the recipient must explicitly acknowledge, with the acknowledgment timestamp captured; and an audit entry is written with reason=critical_override including confirmed_by, delivered_at, and presentation=critical.
- Given require_critical_confirmation is disabled, when a sender marks an alert as critical, then the alert bypasses quiet hours and is delivered immediately with the same distinct presentation, and an audit entry is written with reason=critical_override including delivered_at.
- Given a sender cancels the confirmation step, when the alert is created, then it is treated as non-urgent and subject to quiet-hours deferral.
Temporary Do Not Disturb Timer Behavior
- Given a contact activates a temporary Do Not Disturb (DND) timer for a fixed duration or until a specific time, when non-urgent alerts are generated while the DND timer is active, then those alerts are deferred until the earlier of (timer_end, next allowed delivery window) in the contact’s time zone, and scheduled_delivery_at is recorded in UTC with reason=dnd_deferred in the audit log.
- Given the DND timer is active, when a critical alert is generated, then it is delivered immediately with the critical presentation, and an audit entry records reason=critical_override_dnd.
- Given the user cancels the DND timer before it expires, when pending deferred alerts exist and the current time is within an allowed window, then pending alerts are released immediately; otherwise pending alerts remain scheduled for the next allowed window.
Escalation and Channel Switch on Unacknowledged Critical Alerts
- Given a per-contact escalation policy exists with retry_interval_minutes=X, max_retries=N, and fallback_channel in {SMS, phone_call, email}, and a critical alert is delivered via the primary channel, when the alert is not acknowledged within X minutes, then the system retries delivery on the primary channel up to N times at X-minute intervals, and if still unacknowledged after N retries, switches to the configured fallback_channel and sends the alert.
- Escalation stops immediately upon recipient acknowledgement on any channel.
- Each retry or channel switch writes an audit entry with attempt_number, channel, attempted_at, outcome, and next_action.
- No more than N+1 total sends are performed across channels unless explicitly configured to continue.
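Counting the sends in the policy above: reading N as the total number of primary-channel sends, the single fallback send brings the cap to N+1 attempts unless the recipient acknowledges earlier. A sketch of that bookkeeping; `acked_after` models the attempt count at which acknowledgement arrives, and all names are assumptions:

```python
def escalate(max_retries: int, fallback_channel: str, acked_after):
    """Return the list of (attempt_number, channel) sends made before either
    acknowledgement or policy exhaustion. `acked_after` is the attempt count
    at which the recipient acknowledges (None = never acknowledges)."""
    sends = []
    for attempt in range(1, max_retries + 1):          # primary-channel sends
        sends.append((attempt, "primary"))
        if acked_after is not None and len(sends) >= acked_after:
            return sends, "acknowledged"               # escalation halts
    sends.append((max_retries + 1, fallback_channel))  # final fallback send
    if acked_after is not None and len(sends) >= acked_after:
        return sends, "acknowledged"
    return sends, "unacknowledged_flagged"             # flag the incident
```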
Audit Trail for Quiet-Hour Deferrals, Overrides, and Escalations
- Given an auditor queries the audit log by contact, date range, and event type, when the query is executed, then results include immutable records for deferrals, overrides, deliveries, acknowledgements, retries, and channel switches.
- Each record includes at minimum: alert_id, contact_id, event_type, reason, actor (system/user), channel, contact_time_zone, created_at, scheduled_delivery_at (if any), delivered_at (if any), acknowledged_at (if any), and metadata (e.g., quiet_window or escalation_policy).
- Results can be filtered by reason and exported in CSV and JSON formats.
- Records are append-only and include a server-generated record hash for tamper evidence.
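The "server-generated record hash for tamper evidence" requirement is commonly met with a hash chain: each record's hash covers its content plus the previous record's hash, so editing any record in place invalidates every later hash. A minimal sketch (the field set here is illustrative, not the full schema above):

```python
import hashlib
import json

def append_audit(log: list, record: dict) -> list:
    """Append-only: hash each record together with the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    payload = json.dumps(record, sort_keys=True) + prev_hash
    entry = dict(record, hash=hashlib.sha256(payload.encode()).hexdigest())
    return log + [entry]

def verify_chain(log: list) -> bool:
    """Recompute every hash in order; any tampered record breaks the chain."""
    prev_hash = "genesis"
    for entry in log:
        record = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(record, sort_keys=True) + prev_hash
        if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```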
Quiet-Hours Tester UI: Predict Delivery Outcome
- Given a user opens the Quiet-Hours Tester for a specific contact, when the user selects an alert topic, sets urgency, and chooses a test timestamp (default=now), then the UI displays either "Deliver now" or "Deferred until <timestamp>" based on current rules.
- The UI shows a concise explanation including applied rules (quiet hours, exception, DND, critical override), the contact’s time zone, and the computed next window.
- Changing any input (timestamp, urgency, topic) updates the result in under 500 ms.
- No test actions create real alerts; no audit entry is written unless the user explicitly copies or shares the result.
Topic Filters and Event Taxonomy Mapping
"As an operations manager, I want to define which event types roll up under each topic so that families can filter updates in a way that aligns with our care workflows."
Description

Define and manage a versioned taxonomy that maps platform events (meds, therapy, vitals, schedule changes, compliance flags, messages) to user-facing topics. Allow agencies to enable/disable topics and customize mappings within guardrails. Surface topic filters per contact so recipients can opt in/out by topic while preserving mandatory compliance notifications. Provide an admin tool to validate mappings, a fallback classification for unmapped events, and analytics to show topic distribution and opt-in rates.

Acceptance Criteria
Publish Versioned Event-to-Topic Taxonomy
- Given I am an agency admin with taxonomy permissions, when I create a new taxonomy version with topics (unique keys, labels, mandatory flag) and map platform events [meds, therapy, vitals, schedule_changes, compliance_flags, messages] to topics, then the version is saved as Draft with a semantic version number; validation requires 100% of platform event types to be mapped to at least one topic; and compliance_flags are mapped to the Compliance topic and cannot be mapped exclusively elsewhere.
- When I publish the Draft, then the version becomes Active and immutable; the prior Active version becomes Deprecated but remains queryable by version id; and the classification API returns the Active version id with each classification result.
Guardrailed Agency Overrides and Topic Enable/Disable
- Given a Global Active taxonomy exists, when an agency admin configures agency-level topic availability, then optional topics can be enabled/disabled per agency and mandatory topics cannot be disabled.
- When the admin customizes event-to-topic mappings, then overrides are allowed only for non-mandatory topics and must keep every event mapped to at least one topic; attempts to remove the last topic from any event or to remap compliance_flags are rejected with actionable errors.
- An agency override summary (enabled topics, overridden mappings) is available via API and UI.
Per-Contact Topic Filter Preferences with Mandatory Overrides
- Given a recipient contact’s notification settings screen, when the contact opts out of one or more non-mandatory topics, then they no longer receive notifications for events mapped solely to those topics.
- If an event maps to any mandatory topic (e.g., Compliance), the notification is delivered regardless of opt-out.
- Mandatory topics appear locked/on in the UI and via API.
- Preference changes propagate to routing within 2 minutes and are audit logged with user, timestamp, and diff.
Fallback Classification for Unmapped Events
- Given an Active taxonomy and agency overrides in place, when a new or unexpected event subtype is received that has no mapping, then it is classified under the Fallback topic.
- A high-severity alert is raised to admins with event type, count, and affected agencies.
- The unmapped condition appears in validator results until a mapping is added.
- Analytics attribute such events to the Fallback topic so they are visible in topic distribution.
Admin Mapping Validator Blocks Invalid Publish
- Given an admin opens the taxonomy validator for a Draft version, when the validator runs, then it reports failures for unmapped events, orphan topics (no inbound mappings), illegal overrides of mandatory topics, and unknown topic references, and returns pass/fail plus counts by issue type.
- Publishing the Draft is blocked when validation fails and allowed only when it passes.
- The validator report can be exported as CSV.
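The validator's core checks (unmapped events, orphan topics, unknown topic references) amount to set differences over the mapping. A sketch, assuming the mapping is a dict from event type to a list of topic keys; mandatory-topic guardrails would layer on top:

```python
def validate_taxonomy(events: set, topics: set, mapping: dict) -> dict:
    """Report unmapped events (no topic), orphan topics (no inbound mapping),
    and unknown topic references (mapped to a topic that doesn't exist)."""
    mapped_topics = {t for ts in mapping.values() for t in ts}
    issues = {
        "unmapped_events": sorted(events - {e for e, ts in mapping.items() if ts}),
        "orphan_topics": sorted(topics - mapped_topics),
        "unknown_topics": sorted(mapped_topics - topics),
    }
    # Publish is allowed only when every issue list is empty.
    issues["passed"] = not any(
        issues[k] for k in ("unmapped_events", "orphan_topics", "unknown_topics"))
    return issues
```

A Draft would be publishable only when `passed` is true, matching the blocking rule above.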
Topic Distribution and Opt-in Analytics
Given analytics are requested for a date range and agency scope When querying by topic Then the system returns per-topic: total event count, % distribution, unique recipients notified, opt-in rate (non-mandatory topics), and 7/30-day trend deltas And metrics update within 15 minutes of new events and preference changes And mandatory topics are labeled as Mandatory and excluded from opt-in rate calculations And results can be filtered by topic and exported to CSV
Classification API Drives Notification Routing
Given an event payload arrives for notification routing When the classification API is called with the payload and agency id Then it returns the topic(s) per mapping using the Active taxonomy version effective at the event’s timestamp And it returns the effective recipient list after applying agency topic availability and per-contact preferences And events mapped to any mandatory topic include all applicable recipients regardless of opt-outs And the response includes taxonomy_version_id and a deterministic correlation_id for idempotency And the classification decision is audit logged with inputs, outputs, and timing
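The deterministic correlation_id required above can be derived by hashing a canonical serialization of the classification inputs. A minimal sketch, with field names illustrative rather than prescribed:

```python
import hashlib
import json

def correlation_id(event_payload: dict, agency_id: str, taxonomy_version_id: str) -> str:
    # Canonical JSON (sorted keys, no whitespace) makes the hash stable
    # across repeated calls with the same logical inputs, which is what
    # gives the routing layer its idempotency key.
    canonical = json.dumps(
        {"agency": agency_id, "taxonomy": taxonomy_version_id, "event": event_payload},
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:32]
```

Any retry of the same event against the same taxonomy version yields the same id, so duplicate sends can be detected downstream.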
Cadence and Digest Generation
"As a family member, I want a daily or weekly digest of non-urgent updates so that I can stay informed without being pinged for every event."
Description

Implement a digest builder that aggregates non-urgent events per contact and topic into scheduled daily and weekly summaries. Support configurable send times, localized date/time formatting, and accessible, branded templates with deep links to full context. Include idempotent generation, retry policies, handling of missed windows (e.g., catch-up digests), and storage of rendered artifacts for re-send and audit. Summaries should include key highlights (e.g., vitals trends), counts, and clear reasons why items were included or excluded based on settings.
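Idempotent generation hinges on keying each digest by contact, cadence, and window, so retries and re-runs collapse onto one artifact. A minimal sketch, assuming an in-memory stand-in for the persisted artifact store:

```python
from datetime import datetime, timedelta

def digest_key(contact_id: str, cadence: str, window_end: datetime) -> str:
    # One key per contact per cadence window: re-runs and retries for the
    # same window map to the same key, so at most one artifact is persisted.
    days = 7 if cadence == "weekly" else 1
    window_start = window_end - timedelta(days=days)
    return f"{contact_id}:{cadence}:{window_start.isoformat()}/{window_end.isoformat()}"

class DigestStore:
    """In-memory stand-in for the persisted artifact store (assumption)."""

    def __init__(self):
        self.artifacts = {}

    def generate_once(self, contact_id, cadence, window_end, build):
        key = digest_key(contact_id, cadence, window_end)
        if key not in self.artifacts:  # a retry or re-run skips this branch
            self.artifacts[key] = build(contact_id, window_end)
        return self.artifacts[key]
```

The same keying also supports catch-up digests: a missed window still has a unique key, so generating it late cannot duplicate the next regular digest.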

Acceptance Criteria
Daily Digest at Configured Local Time per Contact and Topic
Given a contact has daily digest cadence enabled with a configured send time of 18:00 in the contact’s time zone and locale And non-urgent events exist across topics (e.g., meds, therapy, vitals) within the previous 24-hour window When the scheduler reaches the configured send time (not earlier than 18:00 local) Then exactly one daily digest is generated for that contact covering the last complete 24-hour window And the digest excludes urgent events and groups included items by topic And the digest displays per-topic item counts and a total count And all dates/times in subject and body are formatted in the contact’s locale and time zone And the digest subject includes the local date (e.g., “Daily Summary — Sep 05, 2025”).
Weekly Roll-Up with Highlights and Trends
Given a contact has weekly roll-up cadence enabled for Monday at 09:00 local And non-urgent events and vitals readings exist for the last complete 7-day window When the scheduler triggers the weekly roll-up at the configured time Then the digest aggregates non-urgent events from the last complete 7-day window (Mon–Sun or locale-appropriate week) And includes per-topic counts and a weekly highlights section And vitals highlights include min/avg/max and trend direction over the window (e.g., improving/stable/declining) with a short label explaining the basis (e.g., “trend over last 7 days”).
Inclusion/Exclusion Logic with Topic Filters and Reasons
Given a contact’s notification settings enable topics {meds, therapy} and disable {vitals} And there exist both urgent and non-urgent events across all topics within the cadence window When the digest is generated Then only non-urgent events for enabled topics are included And urgent events are excluded from the digest And the digest contains an “Included Because” note per item (e.g., “matches topic: meds”) And the digest contains an “Excluded Summary” section with counts per reason (e.g., “filtered by topic: vitals”, “urgent: real-time channel”).
Idempotent Generation and Safe Retry Without Duplicates
Given a digest generation is initiated for the same contact and the same cadence window more than once (e.g., due to retry or re-run) When the job executes multiple times with the same idempotency key/window Then only one rendered digest artifact is persisted and marked as generated for that window And at most one outbound send is recorded for that window And if an initial send attempt fails transiently, the system retries per configured policy and ultimately sends exactly once upon success And if max retries are exhausted, the digest is marked failed with error details and no duplicate artifacts or sends are created.
Missed Window Catch-Up Digest
Given the system was unavailable at the configured send time for a contact’s digest window And the window has not yet been summarized When the scheduler resumes Then a catch-up digest is generated exactly once for the missed window And the digest is labeled as “Catch-up” and displays the covered start/end timestamps And items included in the catch-up are not re-included in the next regular digest for the same contact.
Accessible, Branded Template with Deep Links to Full Context
Given brand theming assets (logo, colors, footer) and accessibility standards are required When a digest is rendered Then the HTML and plaintext versions are produced using the brand theme And the HTML meets WCAG 2.1 AA basics: semantic headings, alt text for images, sufficient color contrast, keyboard-navigable focus order And each item and topic section includes a deep link that opens CarePulse to the contact and topic context for that item And if the recipient is not authenticated, following a deep link prompts sign-in and then routes to the intended context.
Storage of Rendered Artifacts for Re-send and Audit
Given a digest is generated and sent When the system persists the digest Then an immutable rendered artifact (HTML and plaintext) is stored with metadata: digest_id, contact_id, cadence type, window start/end, locale, topic filter snapshot, and checksum And the artifact is retrievable by digest_id for audit And triggering a re-send uses the stored artifact (not re-generated content) and records who re-sent and when And listing digests for a contact returns artifacts and statuses for a specified date range.
Smart Bundling and Throttling Engine
"As a family member, I want related updates grouped together and excessive pings reduced so that I feel informed but not overwhelmed."
Description

Create server-side logic to bundle related events within configurable time windows, de-duplicate near-identical alerts, and apply rate limits per contact and topic to prevent alert storms. Implement suppression rules (e.g., minimum spacing between similar alerts) and dynamic backoff during spikes, while ensuring urgency rules allow critical items to bypass throttling. Provide tunable parameters, A/B configuration support, observability metrics (bundle rate, suppression count, breakthrough rate), and transparent explanations attached to notifications indicating bundling decisions.
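The windowed bundling described above could be sketched as follows. This is an in-memory, single-process illustration; a production engine would also flush on a timer and persist its state:

```python
class Bundler:
    """Groups non-urgent events per (contact, topic) into fixed time windows.
    Minimal sketch: a real engine would flush open windows on a timer."""

    def __init__(self, window_minutes: int = 10):
        self.window_s = window_minutes * 60
        self.open = {}  # (contact, topic) -> (window_start_ts, [event_ids])

    def add(self, contact: str, topic: str, event_id: str, ts: float):
        """Returns the previous window's bundle when a new window opens, else None."""
        key = (contact, topic)
        if key in self.open:
            start, events = self.open[key]
            if ts - start < self.window_s:
                events.append(event_id)  # still inside the window: co-bundle
                return None
            flushed = events
            self.open[key] = (ts, [event_id])  # roll over to a new window
            return flushed
        self.open[key] = (ts, [event_id])
        return None
```

Because the key includes both contact and topic, events for a different contact or topic during the same window are never co-bundled, matching the acceptance criteria below.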

Acceptance Criteria
Bundle vitals events for a contact within a 10‑minute window
Given tenant ACME configures bundling_window_minutes=10 for topic=vitals and contact_id=123 And three vitals events e1,e2,e3 occur at T0, T0+2m, and T0+9m for contact_id=123 When the engine processes these events Then exactly 1 notification is delivered no later than T0+10m+30s containing references to e1,e2,e3 And the notification payload includes explanation.decision="bundled", explanation.bundled_count=3, explanation.window_minutes=10, and a non-empty explanation.bundle_id And an event e4 at T0+12m results in a new notification separate from the e1–e3 bundle And events for a different contact or different topic during the same window are not co-bundled
De‑duplicate near‑identical medication alerts within 15 minutes
Given dedup_similarity_threshold=0.90 and dedup_window_minutes=15 are configured for topic=meds for contact_id=123 And two meds events e1 and e2 for contact_id=123 have normalized content similarity ≥ 0.90 and e2 occurs within 15 minutes of e1 When the engine processes e1 and e2 Then only 1 notification is delivered for e1/e2, and metrics.deduped_count increases by 1 And the delivered notification explanation includes dedup_applied=true and explanation.deduped_event_ids contains e2 And if similarity < 0.90 or topic/contact differ, both notifications are delivered And an event e3 arriving more than 15 minutes after e1 is not deduped against e1
Per‑contact/topic rate limit and minimum spacing enforcement
Given rate_limit_per_60m=3 and min_spacing_minutes=10 are configured for topic=therapy for contact_id=123 And six distinct therapy events arrive at T0, T0+5m, T0+10m, T0+15m, T0+20m, T0+25m When the engine processes these events Then no more than 3 notifications are delivered between T0 and T0+60m And delivered notifications are spaced at least 10 minutes apart And metrics.suppression_count increases by 3 And the next delivered notification includes explanation.suppressed_since_last=3 and reason_codes contains ["rate_limit","min_spacing"]
Dynamic backoff activates during alert spike and auto‑recovers
Given baseline min_spacing_minutes=2, spike_threshold_events_per_minute=20, backoff_multiplier=2, and max_backoff_minutes=30 And 100 vitals events for contact_id=123 arrive within 3 minutes When the engine detects an event rate above the spike threshold Then the effective min_spacing increases progressively to 4, then 8, then 16 minutes while the rate remains above threshold, capping at 30 minutes And metrics.backoff_level reflects each increase and includes tenant/topic labels And when the event rate stays below threshold for 5 consecutive minutes, the effective min_spacing returns to baseline within 1 minute And total notifications during the spike do not exceed ceil(spike_duration_minutes / current_effective_min_spacing)
Urgent alerts bypass bundling and throttling
Given urgency_rule severity in ["urgent","critical"] is configured to bypass bundling, dedup, rate limits, and backoff And a non-urgent bundle window is open for contact_id=123 topic=vitals And an urgent vitals event eU occurs at TU When the engine processes eU Then a standalone notification for eU is delivered within 60 seconds regardless of open bundles or limits And the notification explanation includes breakthrough=true and reason_codes contains ["urgency_bypass"] And the existing non-urgent bundle remains open and unaffected
Per‑tenant tunables with deterministic A/B variants
Given tenant ACME defines variant A with bundling_window_minutes=10 and variant B with bundling_window_minutes=5 and split=50/50 using a stable hash of contact_id for assignment And a sample set of 200 contacts exists for ACME When variant assignment is computed twice for the same set Then each contact remains in the same variant across runs And 50% ±10% of contacts are assigned to each variant And updating ACME’s tunables via the config API takes effect for new events within 2 minutes without service restarts And metrics are tagged with tenant_id and variant label
Metrics emitted and explanations attached to notifications
Given the engine processes 40 events for tenant ACME resulting in 3 bundled notifications (total 10 events), 25 standalone notifications, 5 suppressions, and 2 urgent breakthroughs When the metrics endpoint is scraped Then bundle_rate=3/40, suppression_count=5, and breakthrough_rate=2/40 are exposed per tenant and per topic in Prometheus format within 30 seconds And each delivered notification payload includes an explanation object with fields: decision in ["bundled","standalone"], bundled_count (integer), suppressed_since_last (integer), rate_limit_applied (boolean), dedup_applied (boolean), backoff_level (integer), reason_codes (array of strings), and variant (string or null) And explanations contain no protected health information beyond opaque identifiers and timestamps
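The deterministic 50/50 variant assignment required above (a stable hash of contact_id) can be sketched as follows; the experiment salt is an illustrative parameter:

```python
import hashlib

def assign_variant(contact_id: str, experiment: str = "bundling_window") -> str:
    # Stable: the same contact_id always hashes to the same bucket, so
    # assignment is consistent across runs and service restarts, with no
    # assignment table to store.
    digest = hashlib.sha256(f"{experiment}:{contact_id}".encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") % 100
    return "A" if bucket < 50 else "B"
```

Changing the experiment salt reshuffles contacts into fresh buckets, which is how a new experiment avoids inheriting the previous one's split.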
Multi-Channel Delivery and Fallback Routing
"As an operations manager, I want reliable delivery with channel fallback for critical alerts so that families receive important information even if one channel is unavailable."
Description

Support push, SMS, and email delivery per contact with selectable primary and fallback channels. Implement delivery and read receipts where available, timely failover (e.g., if push not delivered in X minutes, send SMS), and multi-channel escalation for critical alerts. Ensure compliance for messaging (opt-in/opt-out flows, STOP/HELP for SMS, verified sender domains, 10DLC registration) and cost controls (rate limiting, batching). Provide deep links to the app, device-level notification categories, and graceful degradation for recipients without the app installed.
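Timed failover across a contact's ordered channel list might look like this sketch, with send and receipt_confirmed as injected provider adapters (assumed interfaces, not a real provider API):

```python
import time

def deliver_with_fallback(notification: dict, channels: list[str], send, receipt_confirmed,
                          failover_window_s: float = 300.0, poll_s: float = 5.0):
    """Try channels in order; move to the next when no delivery receipt
    arrives within the failover window. Returns the successful channel."""
    for channel in channels:
        attempt_id = send(channel, notification)
        deadline = time.monotonic() + failover_window_s
        while time.monotonic() < deadline:
            if receipt_confirmed(attempt_id):
                return channel
            time.sleep(poll_s)
    return None  # all channels exhausted: escalate per policy and audit-log
```

A production version would be event-driven (provider callbacks rather than polling), but the window-then-advance shape is the same.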

Acceptance Criteria
Per-Contact Primary and Fallback Channel Selection
Given a contact has primary=Push, fallback=SMS, and failover_window=5 minutes And the contact has a valid push token When a standard (non-critical) alert is issued at 10:00:00 local time Then the system sends a Push within 2 seconds and records the attempt And if a Push delivery receipt is confirmed before 10:05:00, the system does not send SMS And if no Push delivery receipt by 10:05:00 or the token is invalid, the system sends exactly one SMS linked to the same notification And the contact receives no duplicate content across channels for this alert
Delivery and Read Receipt Capture across Channels
Given Push, SMS, and Email providers are configured When notifications are sent Then for Push, record delivered on provider ack and read when the app opens via deep link or marks the message viewed; store timestamps And for SMS, record delivered when a DLR "delivered" event is received; read status is recorded as "unknown"; store timestamps And for Email, record delivered on provider "delivered" event; read on open-pixel event if permitted, else record "unknown"; store timestamps And if no provider callback is received within 60 minutes, mark receipt status as "no receipt" without blocking failover logic And surface delivery/read statuses in an audit log within 2 minutes of receipt
Critical Alert Multi-Channel Escalation until Acknowledged
Given a contact has primary=Push and fallbacks=[SMS, Email] with escalation_interval=2 minutes And the alert is marked Critical When the alert is triggered at 12:00:00 Then send Push immediately And if not acknowledged by 12:02:00, send SMS And if still not acknowledged by 12:04:00, send Email And cease further sends immediately upon any acknowledgement from any channel And cap total attempts to 3 channels within a 15-minute maximum window And record the escalation timeline and the acknowledging channel
SMS Compliance: Opt-In/Opt-Out and 10DLC
Given a contact has not opted in to SMS When the system attempts to send any SMS Then block the send, record the reason, and prompt for the opt-in flow
Given a contact replies STOP When any future SMS is queued Then suppress the SMS, send a one-time opt-out confirmation, and record an audit entry with timestamp and message ID
Given a contact replies HELP When received Then send the configured HELP response including brand, support contact, and opt-out instructions
And only send SMS via an approved 10DLC campaign; if campaign status is not active, block sends and alert admins
Email Compliance: Verified Sender Domains and Bounce Handling
Given a tenant configures a sender domain When SPF and DKIM are not verified or DMARC alignment fails Then block outbound email for that domain and log a configuration error
Given an email is sent When a hard bounce or complaint is received Then immediately suppress further email to that address, record the event, and surface it in compliance reports within 5 minutes
Cost Controls via Rate Limiting and Batching
Given tenant rate limits are set to SMS<=60/minute, Email<=300/minute, and Push<=1000/minute When outbound volume exceeds the limit Then queue non-critical messages and release within limits while allowing Critical messages to use a separate priority lane up to 10/minute
Given multiple non-critical events for the same contact accrue during a digest window When the daily digest job runs at the configured time Then batch them into a single message per channel with a count and summary, and do not send the individual messages
And record per-tenant daily cost estimates with a variance of +/-5%
Deep Links, Notification Categories, and App-Not-Installed Degradation
Given a notification includes a deep link to Visit Details When the recipient taps the deep link on a device with the app installed Then the app opens directly to the Visit Details screen with the referenced visit ID
Given the app is not installed When the recipient taps the link Then route to a mobile web view of Visit Details or to the app store with a web fallback, ensuring no dead ends
Given device-level notification category "Vitals" is disabled by the user When a "Vitals" notification is generated Then do not deliver Push for that category and immediately use the configured fallback channel; Critical alerts in other categories remain unaffected
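Per-tenant, per-channel rate limits such as SMS<=60/minute are commonly enforced with a token bucket; a minimal sketch (the Critical priority lane would get its own smaller bucket):

```python
import time

class TokenBucket:
    """Allows up to rate_per_minute sends, refilling continuously.
    Non-critical messages that fail try_acquire() are queued for later."""

    def __init__(self, rate_per_minute: int):
        self.capacity = float(rate_per_minute)
        self.tokens = float(rate_per_minute)
        self.refill_per_s = rate_per_minute / 60.0
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_per_s)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The continuous refill means a tenant that pauses briefly regains headroom smoothly instead of in per-minute bursts.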
Audit Logging and Consent Management
"As a compliance officer, I want traceable records of notification preferences and deliveries so that we can prove adherence to regulations during audits."
Description

Maintain immutable audit logs for preference changes, deliveries, suppressions, overrides, and breakthrough events with timestamps, actor identity, and reason codes. Capture and store explicit consent for each delivery channel, manage expirations, and support per-jurisdiction retention policies. Provide exportable, audit-ready reports and an admin query UI with filters (contact, patient, timeframe, topic, outcome). Ensure data is encrypted at rest and in transit, with least-privilege access and alerting on anomalous access patterns.

Acceptance Criteria

Auto Rebind

Instantly re-establishes lost sensor connections in the background, rotating through known channels and cached keys. If a device truly drops, it prompts a single-tap rebind with prefilled client pairing. Readings continue streaming and are backfilled, while a clean audit trail records the fix—preventing gaps in visit notes and keeping EVV aligned.

Requirements

Background Reconnection Orchestrator
"As a caregiver, I want the app to automatically reconnect my patient’s sensor within seconds so that I can continue care and documentation without stopping to troubleshoot."
Description

A background service that detects sensor disconnects and automatically re-establishes connections without user intervention. It cycles through known transport channels (e.g., BLE, Wi‑Fi, hub/LTE) using cached credentials, applies exponential backoff with jitter, and respects mobile OS constraints (iOS CoreBluetooth state restoration/background tasks; Android Foreground Service). It enforces battery and CPU budgets, aborts when a visit ends or a device is intentionally unbound, and emits structured events for UI and server-side listeners. The orchestrator degrades gracefully: if reconnection fails within a configurable threshold, it escalates to a one-tap rebind prompt. Telemetry and feature flags allow tuning of retry strategies per device model and firmware.
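The retry schedule (exponential backoff with jitter, matching the criteria defaults of a 1–2 s initial delay and a 60 s cap) might be generated like this sketch:

```python
import random

def backoff_delays(initial_s: float = 1.5, max_s: float = 60.0,
                   jitter: float = 0.2, attempts: int = 6):
    """Yield per-attempt delays: doubling base, capped at max_s, with
    +/- jitter so a fleet of devices does not retry in lockstep."""
    base = initial_s
    for _ in range(attempts):
        yield base * random.uniform(1.0 - jitter, 1.0 + jitter)
        base = min(base * 2.0, max_s)
```

Budget throttling (low battery, Low Power Mode) can reuse the same generator with a larger initial_s, which is one way to implement the extended-backoff behavior described below.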

Acceptance Criteria
Auto‑Reconnect Across Multi‑Transport Channels
Given an active visit and a paired sensor disconnects unexpectedly When the orchestrator begins reconnection Then it cycles through configured transports in priority order [BLE, Wi‑Fi, Hub/LTE] using cached credentials without user interaction And it applies exponential backoff with jitter between attempts (initial delay 1–2s, max backoff ≤ 60s, jitter ±20%) And it records each attempt with timestamp, transport, outcome, and attempt_number And it re-establishes a connection within the configurable reconnection window (default 60s) And on success it resumes data streaming within 2s of link establishment
Background Execution Compliance on iOS and Android
Given the app is backgrounded and a BLE-connected sensor drops during an active visit When the orchestrator handles the disconnect Then on iOS it initiates CoreBluetooth state restoration to scan/connect without foregrounding and starts a reconnect attempt within 5s of the drop And the app is not terminated by the OS for background violations (no crash/termination logs) And on Android it runs as a Foreground Service with a visible notification and starts a reconnect attempt within 5s of the drop And no background execution limit violations or ANRs are observed during the process
Battery/CPU Budget Enforcement During Reconnect
Given the orchestrator is retrying reconnection for at least 5 minutes during an active visit When measuring device resource usage Then average CPU utilization attributable to the service is ≤ 5% and no spikes > 15% persist for longer than 1s And estimated battery drain attributable to the service is ≤ 2% per hour during retry (per OS energy metrics) And BLE/Wi‑Fi scans adhere to configured duty cycles (e.g., BLE scan window ≤ 5s per 30s while backgrounded) And when battery < 15% or Low Power Mode is active, the orchestrator throttles attempts (extends backoff) and emits a 'budget_throttle' event
Abort Reconnect on Visit End or Intentional Unbind
Given the orchestrator is actively attempting reconnection When a 'visit_end' event is received or the user intentionally unbinds the device Then all pending and scheduled reconnect attempts are canceled within 2s And no further connection attempts occur until a new visit starts and the device is rebound And a 'reconnect_aborted' event including reason [visit_end|unbind], visit_id, device_id, and timestamp is emitted locally and delivered to the server within 10s
Escalate to One‑Tap Rebind After Threshold
Given a paired sensor has failed to reconnect during an active visit When total retries exceed the configurable max_attempts (default 5) or elapsed time exceeds max_window (default 60s) Then the orchestrator suspends background retries And presents a one‑tap rebind prompt with prefilled client and device context And a single tap initiates pairing and, on success, streaming resumes within 3s And the audit trail records failed attempts, escalation trigger, user action, and final outcome
Data Continuity and Backfill After Reconnect
Given readings were streaming prior to a disconnect with monotonic sequence numbers When reconnection succeeds Then missing readings are backfilled from device or local buffer in order with 0 duplicates and 0 gaps as verified by sequence numbers And the server receives all backfilled readings within 30s for gaps shorter than 5 minutes And EVV visit timelines remain contiguous with a 'temporary_disconnect' marker recorded in the audit log
Telemetry and Feature‑Flagged Retry Strategy by Device Model
Given remote feature flags configure retry parameters per device model and firmware When the orchestrator initializes or updates its retry strategy Then it fetches and applies the latest configuration within 5 minutes (cache TTL) without requiring an app restart And all reconnect events include config_id, device_model, firmware_version, transport, attempt_number, backoff_ms, outcome, and battery_state And toggling a flag changes the next attempt's behavior (e.g., transport order or backoff) and the change is reflected in subsequent event payloads And if remote config is unavailable, the orchestrator falls back to safe defaults and emits a 'config_fallback' event
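Gap detection over the monotonic sequence numbers used for backfill verification could be sketched as:

```python
def missing_sequences(received: list[int]) -> list[int]:
    """Return the sequence numbers absent between the smallest and largest
    received, i.e. the readings to request from the device or local buffer
    after a reconnect."""
    if not received:
        return []
    seen = set(received)
    return [s for s in range(min(received), max(received) + 1) if s not in seen]
```

After the backfilled readings are merged, re-running this check should return an empty list, which is the "0 duplicates and 0 gaps" verification the criteria call for.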
Secure Key Cache & Channel Rotation
"As a security-conscious admin, I want sensor keys stored securely and rotated automatically so that reconnections are fast without compromising PHI."
Description

Encrypted, on-device storage of sensor pairing credentials and channel preferences that enables instant rebinds without re-entering secrets. Uses OS keystores (iOS Keychain/Android Keystore) with hardware-backed protection, per-client scoping, and key TTL/rotation policies. On reconnect attempts, the module rotates through known channels and performs a lightweight re-auth handshake; it handles key invalidation, revocation on unbind, and migration across app updates. Supports multiple sensors per client, conflict resolution, and ensures keys never leave the device, aligning with HIPAA/SOC2 practices.

Acceptance Criteria
Hardware-Backed Key Storage and Retrieval
Given a first-time sensor pairing for a specific client, When the app saves the pairing credentials, Then the credentials are stored using the OS keystore (iOS Keychain / Android Keystore) with hardware-backed protection where available and an access-control mode of "after first unlock". And Then the stored item is referenced by alias/handle only and raw key material is not readable by application code or persisted to disk. And Then on subsequent reconnects, the module retrieves the key handle and completes rebind without user input within 2 seconds under normal radio conditions. And Then creation and retrieval events are recorded in the audit log without exposing secrets or raw key bytes.
Per-Client Scoping and Multi-Sensor Conflict Resolution
Given two clients A and B with their own sensors and cached keys, When a reconnect is attempted for client A, Then keys scoped to client B MUST NOT be usable and the operation fails closed without cross-client access. And Given a client with multiple sensors, When simultaneous reconnects occur, Then each sensor uses its own scoped key/alias and connections do not interfere. And When two sensors prefer the same channel, Then the module schedules attempts to avoid contention, selecting last-known-successful channel first and staggering subsequent attempts so that no sensor waits more than 3 seconds before its first attempt. And Then channel selection and any contention decisions are logged with sensor IDs and chosen priority (no secrets).
Key TTL and Rotation Policy Enforcement
Given a stored key with a defined TTL, When the remaining lifetime is less than the configured rotation threshold (e.g., 24h), Then the next successful handshake performs key rotation, creating version N+1 and marking version N as pending-deletion. And Then if the rotation succeeds, version N is deleted from the keystore within 1 second and the audit log records old/new versions and timestamps (no key material). And If rotation fails, Then the module retries with exponential backoff (initial 2s, max 60s) and continues using version N until N+1 is active, ensuring no data gap > 30 seconds due to rotation attempts. And If the key is past TTL at handshake start, Then rotation is mandatory before data streaming resumes.
Channel Rotation and Lightweight Re-Auth Handshake
Given a lost sensor connection with valid cached credentials, When a reconnect is initiated, Then the module cycles through all known channels for that sensor with a per-channel attempt timeout of 2 seconds and an overall ceiling of 10 seconds per cycle. And Then the lightweight re-auth handshake completes without user input in < 500 ms after transport connection and resumes streaming immediately upon success. And If two full rotation cycles (max 20 seconds total) fail, Then a single-tap rebind prompt is shown with client and sensor prefilled; accepting it generates a fresh key and attempts bind within 5 seconds. And Upon successful reconnect or rebind, Then any missed readings are backfilled from the sensor where supported and the audit trail records channel used, attempt counts, and outcome.
Key Invalidation and Revocation on Unbind or Compromise
Given a user unbinds a sensor from a client, When the action is confirmed, Then all associated key material and aliases for that client-sensor pair are revoked and deleted from the keystore within 500 ms. And Then any subsequent connection attempts using the revoked key fail with a specific "key_revoked" error without retrying. And Given the device reports an authentication failure indicating invalid/compromised key, When the module detects this, Then it marks the cached key invalid, attempts a single rotation to N+1, and if that fails, surfaces the single-tap rebind prompt. And All revocation and invalidation events are recorded in the audit log with actor, timestamps, client, and sensor identifiers (no secrets).
App Update, OS Change, and Migration Resilience
Given the app is updated to a new version, When the user launches post-update, Then existing keystore aliases are discovered/migrated without requiring re-pairing and the first reconnect completes within 10 seconds. And If the OS keystore changes its backend (e.g., hardware-backed availability toggles), Then the module migrates keys to the new provider or marks non-migratable keys and prompts single-tap rebind, never exporting key material off-device. And If the app is rolled back within 24 hours, Then previously created aliases remain usable and reconnect succeeds without re-pairing. And All migrations and fallbacks are captured in the audit trail with outcomes, without logging secrets or raw key bytes.
On-Device Only and Compliance Logging
Given normal operation and error conditions, When observing network traffic and application logs, Then no raw key material or derivable secrets are transmitted off-device; only opaque identifiers or key aliases may appear. And Then an immutable audit log captures key lifecycle events (create, rotate, revoke, migrate) with timestamp, user/device IDs, client/sensor IDs, and result codes, retaining entries per policy and meeting HIPAA/SOC 2 expectations. And Given the device is locked, When background reconnect occurs, Then keys are accessible under the "after first unlock" policy and reconnect proceeds without prompting the user, while still preventing access before first unlock after reboot.
One-Tap Rebind Prompt
"As a caregiver, I want a single-tap way to rebind the correct sensor to my current client so that I can resume capturing data immediately."
Description

A minimal-friction UI that appears only when automatic reconnection exceeds a time threshold or definitively fails. It preselects the active visit’s client and most likely sensor, validates proximity/identity, and completes rebind with a single tap. Guardrails prevent cross-client pairing, enforce role-based access, and work offline-first with queued confirmations. Includes accessibility compliance, localization, clear progress/error states, and deep links from notifications. Prompts are rate-limited to avoid alert fatigue and dismiss automatically on successful background rebind.

Acceptance Criteria
Prompt Trigger and Auto-Dismiss Logic
- Given an active visit with a previously bound sensor, When auto-reconnection attempts exceed the configured time threshold or a definitive failure is detected, Then the One-Tap Rebind prompt is displayed within 1 second.
- Given a transient drop shorter than the configured threshold, When the sensor reconnects in the background, Then no prompt is shown.
- Given the prompt is visible, When background rebind completes successfully before user action, Then the prompt auto-dismisses without sound/vibration and no notification is generated.
- Given no active visit is in progress, When a sensor disconnects, Then no One-Tap Rebind prompt is shown.
Prefilled Client and Sensor Selection
- Given an active visit, When the prompt is displayed, Then the client is preselected to the active visit's client.
- Given multiple known sensors for the client, When the prompt is displayed, Then the most likely sensor is preselected based on last-bound device ID and strongest recent RSSI within the last 2 minutes.
- Given no eligible sensor meets the selection rules, When the prompt is displayed, Then the one-tap action is disabled and guidance to move closer or power on the sensor is shown.
Proximity and Identity Validation
- Given the preselected sensor is detected, When its broadcast identifier matches the cached pairing key and the measured RSSI is within the configured proximity threshold, Then the one-tap Rebind action is enabled.
- Given the identifier does not match or proximity threshold is not met within 5 seconds, Then the one-tap action remains disabled and an error with corrective guidance is shown.
- Given a different client's sensor is in range, When validation runs, Then the action is blocked and a message states cross-client pairing is not permitted.
One-Tap Rebind Completion, Backfill, and Audit Log
- Given the one-tap action is enabled, When the user taps Rebind, Then the rebind completes within 5 seconds at the 95th percentile on supported devices and OS versions.
- Given rebind completes, Then live readings resume within 2 seconds and data collected during the gap is backfilled from the sensor/cache where available.
- Then EVV event records remain contiguous with no missing segments; a 'gap repaired' marker is included for audit.
- Then an audit log entry is created containing userId, clientId, visitId, sensorId, timestamps (disconnect, attempt, success/failure), reason, method (auto/prompt), retry count, connectivity state, and outcome.
- Given rebind fails, Then an actionable error is shown with a retry option and diagnostics link, and a failed audit entry is recorded.
Guardrails: Cross-Client Pairing and Role-Based Access
- Given the active visit's client differs from the last verified pairing of the detected sensor, When the user attempts to rebind, Then the action is blocked and no pairing occurs.
- Given the user lacks Sensor.Rebind permission, When the prompt is displayed, Then the one-tap action is hidden or disabled and an explanatory message is shown.
- Given any blocked attempt, Then a security/audit log is recorded including userId, timestamp, reason, and client/sensor identifiers (hashed where required).
Offline-First Queue and Sync
- Given the device is offline, When the user taps Rebind, Then the app performs a local rebind and queues a confirmation event for the server.
- Given connectivity is restored, Then the queued confirmation is transmitted within 60 seconds and server state reflects the binding; the UI updates from Pending to Confirmed without user action.
- Given the queue cannot sync for 30 minutes, Then the user is notified in-app with a persistent banner and an option to retry sync; no duplicate rebind prompts are shown for the same sensor during this period.
- Given a server conflict is detected on sync, Then the client resolves via last-write-wins with a causalityId; on overwrite, the UI displays a brief notice and the audit trail records the resolution.
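The last-write-wins conflict rule can be sketched as follows. This is a minimal sketch: the field names (`written_at`, `causality_id`) are illustrative, and the tie-break on causality ID is an assumption about how both sides converge deterministically:

```python
def resolve_binding_conflict(local: dict, server: dict) -> dict:
    """Last-write-wins on the write timestamp; the causality_id breaks exact
    timestamp ties deterministically so client and server pick the same winner.
    Field names are illustrative, not CarePulse's actual schema."""
    local_key = (local["written_at"], local["causality_id"])
    server_key = (server["written_at"], server["causality_id"])
    return local if local_key > server_key else server
```

Because the comparison is a pure function of the two records, replaying it on either side after sync yields the same resolution, which is what lets the audit trail record a single canonical outcome.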
Notifications, Rate Limiting, Deep Links, Accessibility, and Localization
- Given repeated reconnection failures, Then no more than 1 prompt per sensor every 3 minutes and no more than 3 prompts per user per hour are shown; excess attempts continue background retries without UI.
- Given the app is backgrounded and auto-reconnection fails, When a notification is delivered and tapped, Then the app opens directly to the prompt with prefilled clientId and sensorId; if they do not match the active visit, the action remains disabled and guardrail messaging is shown.
- Then all interactive elements meet WCAG 2.1 AA: accessible labels, logical focus order, minimum 4.5:1 contrast, and dynamic text up to 200% without truncating critical information.
- Given the device locale is supported (en-US, es-US, fr-CA), Then all prompt strings, dates, and numbers are localized; unsupported locales fall back to en-US.
- During rebind, a progress indicator shows stages (Connecting, Validating, Backfilling, Done) with timeouts and plain-language error messages that include a short error code.
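The two rate limits above (one prompt per sensor per 3 minutes, three per user per hour) compose into a small gate, sketched here under the assumption of a sliding one-hour window per user; the class and method names are illustrative:

```python
from collections import deque

SENSOR_COOLDOWN_S = 180   # 1 prompt per sensor every 3 minutes
USER_LIMIT = 3            # no more than 3 prompts per user per hour
USER_WINDOW_S = 3600

class PromptRateLimiter:
    """Gate for showing the One-Tap Rebind prompt; denied attempts keep
    retrying in the background with no UI."""
    def __init__(self):
        self.last_sensor_prompt = {}   # sensor_id -> epoch seconds of last prompt
        self.user_prompts = deque()    # timestamps of prompts shown to this user

    def allow(self, sensor_id: str, now: float) -> bool:
        # Slide the one-hour user window forward.
        while self.user_prompts and now - self.user_prompts[0] >= USER_WINDOW_S:
            self.user_prompts.popleft()
        last = self.last_sensor_prompt.get(sensor_id)
        if last is not None and now - last < SENSOR_COOLDOWN_S:
            return False
        if len(self.user_prompts) >= USER_LIMIT:
            return False
        self.last_sensor_prompt[sensor_id] = now
        self.user_prompts.append(now)
        return True
```

Checking the per-sensor cooldown before consuming a slot from the hourly budget means a single flapping sensor cannot exhaust the user's allowance for other devices.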
Stream Continuity & Backfill
"As a compliance officer, I want any missed readings to be backfilled with clear provenance so that visit notes remain complete and audit-ready."
Description

Local buffering and resumable upload of sensor readings to prevent data gaps in visit notes. During disconnects, readings are timestamped, sequence-numbered, and encrypted at rest; upon reconnection, the client performs ordered, idempotent backfill with de-duplication and gap detection. Clock skew correction, provenance tagging, and partial-gap flags ensure clinical context is preserved. Storage quotas and eviction policies protect device resources, while backpressure management maintains app responsiveness. The ingestion API supports resume tokens and validates ordering to guarantee continuity end-to-end.

Acceptance Criteria
Offline Buffering & Encryption During Disconnect
- Given a live sensor stream and a sudden network disconnect, When new readings are produced, Then each reading is buffered locally with an ISO-8601 UTC timestamp (ms precision) and a monotonically increasing per-stream sequence number.
- Given readings are buffered locally, When persisted to storage, Then payloads are encrypted at rest with keys protected by the OS keystore and cannot be read without app process authorization.
- Given buffering continues during a disconnect, When the configured local storage quota has not been reached, Then no readings are dropped and no UI thread blocking occurs.
- Given the buffer reaches the configured quota, When additional readings arrive, Then the eviction policy removes the oldest unuploaded records, records an eviction event, and continues accepting new readings without crashing or data corruption.
Ordered, Idempotent Backfill on Reconnect
- Given connectivity is restored, When the client requests a resume token from the ingestion API, Then the API returns a token and the last acknowledged sequence number for the stream.
- Given a valid resume token, When the client uploads buffered readings, Then it sends them strictly in sequence order starting at last_ack+1 and marks them as backfill in metadata.
- Given a reading is retransmitted, When it is received by the server, Then the server performs idempotent de-duplication and persists only one copy without altering ordering.
- Given the server detects out-of-order or missing prior sequences, When a segment is uploaded, Then the server responds with the last accepted sequence and an error code, and the client re-requests/resends from the correct point without user intervention.
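The server-side ordering and idempotency contract described above can be sketched with a small state machine; the class and response codes are illustrative, not the actual ingestion API:

```python
class IngestionStream:
    """Sketch of server-side backfill validation: accept uploads strictly in
    sequence order from last_ack+1, idempotently."""
    def __init__(self):
        self.last_ack = 0
        self.store = {}

    def upload(self, seq: int, payload: str):
        if seq <= self.last_ack:
            # Retransmission of an already-accepted reading: idempotent success.
            return ("DUPLICATE", self.last_ack)
        if seq != self.last_ack + 1:
            # Missing prior sequences: tell the client where to resume from.
            return ("OUT_OF_ORDER", self.last_ack)
        self.store[seq] = payload
        self.last_ack = seq
        return ("ACCEPTED", self.last_ack)
```

Returning the last accepted sequence on every response is what lets the client re-request or resend from the correct point without user intervention.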
Duplicate Suppression and Gap Detection
- Given duplicate readings exist in the client buffer, When backfill occurs, Then the server stores a single canonical record per sequence and returns idempotent success for duplicates.
- Given a missing sequence range occurs due to eviction or corruption, When backfill completes, Then a partial-gap flag is created with start/end timestamps, missing-count, and cause, and is linked to the visit timeline and export APIs.
- Given a partial-gap flag exists, When the visit note is generated, Then the gap is visibly indicated to the user and included in the audit-ready report with counts and time bounds.
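Because readings carry monotonically increasing sequence numbers, gap detection reduces to finding holes in the received set. A minimal sketch (the function name and return shape are assumptions):

```python
def detect_gaps(sequences):
    """Return (start_seq, end_seq) ranges missing from the received sequence
    numbers, suitable for creating partial-gap flags. Duplicates are ignored."""
    gaps = []
    seqs = sorted(set(sequences))
    for prev, cur in zip(seqs, seqs[1:]):
        if cur - prev > 1:
            gaps.append((prev + 1, cur - 1))
    return gaps
```

Each returned range maps directly to a partial-gap flag's start/end bounds and missing-count (`end - start + 1`).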
Clock Skew Correction and Timestamp Normalization
- Given device clock skew of up to ±5 minutes relative to server time, When reconnect occurs, Then the client computes an offset using server round-trip and applies correction to buffered readings before upload.
- Given corrected timestamps are applied, When readings are persisted, Then each record includes both original_capture_time and corrected_time, plus the applied skew in milliseconds.
- Given corrected timestamps and sequence numbers, When the server validates ordering, Then ordering is determined by sequence and corrected_time, and median corrected_time error relative to server receive time is ≤100 ms under stable connectivity.
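One common way to compute the round-trip offset is the NTP-style midpoint estimate, sketched below under the assumption of roughly symmetric network latency; the function names are illustrative:

```python
def clock_offset_ms(t_send: float, server_time: float, t_recv: float) -> float:
    """NTP-style skew estimate: assume the server stamped its clock halfway
    through the round trip, so offset = server_time - midpoint(send, receive).
    All inputs are epoch seconds on their respective clocks."""
    midpoint = (t_send + t_recv) / 2
    return (server_time - midpoint) * 1000  # applied skew, in milliseconds

def correct_timestamp(original_capture_ms: float, offset_ms: float) -> float:
    """Produce corrected_time while the caller keeps original_capture_time."""
    return original_capture_ms + offset_ms
```

Storing both the original and corrected timestamps plus the applied offset, as the criteria require, keeps the correction auditable and reversible.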
Provenance Tagging in Streams, Notes, and Exports
- Given readings are captured live or during backfill, When they are stored, Then each record includes provenance tags: device_id, sensor_id, stream_id, acquisition_mode (live|backfill), connection_epoch_id, and resume_token_id.
- Given provenance tags are present, When the visit note and data export are generated, Then backfilled readings are labeled as backfilled and include their upload_time separate from capture_time.
- Given a compliance review, When auditors query the API, Then provenance fields are available and filterable to isolate backfilled vs live data for a given visit.
Storage Quotas, Eviction Policy, and Backpressure
- Given a configurable local storage quota, When sustained disconnect causes rapid buffering, Then the app enforces the quota, emits a buffer_near_capacity metric at ≥80% utilization, and evicts oldest unuploaded entries when at 100%.
- Given high buffering and upload catch-up, When the user navigates the app, Then UI remains responsive with no ANRs and 95th percentile frame time ≤24 ms on a mid-tier reference device.
- Given network congestion during backfill, When uploader pressure increases, Then backpressure reduces ingestion rate without blocking the sensor acquisition thread and without exceeding 20% over baseline memory usage.
Audit Trail and EVV Continuity
- Given a disconnect and subsequent reconnect/backfill, When the process completes, Then an audit trail contains events for disconnect, reconnect, resume_token_issued, backfill_started, backfill_completed, evictions (if any), gaps_detected, and gaps_unresolved, all linked to the visit.
- Given the audit trail is generated, When a one-click audit-ready report is produced, Then it includes counts of readings captured live vs backfilled, duration of disconnect, and any gaps with reasons.
- Given EVV visit start/end are recorded, When backfilled readings are applied, Then EVV timestamps remain within the original visit window ordering (no overlaps or regressions) and pass EVV continuity validation rules.
EVV Alignment Guardrails
"As an operations manager, I want EVV to stay accurate during reconnects so that visits are compliant and claims are not rejected."
Description

Logic that keeps Electronic Visit Verification timestamps accurate across disconnects and rebinds. It reconciles sensor timelines with EVV check-in/out, marks and auto-resolves brief drops, and raises exceptions when gaps exceed jurisdictional thresholds. Generates EVV-consistent events, aligns with scheduling windows, and integrates with external EVV providers via API/webhooks. Caregiver-facing hints surface only when action is required, minimizing disruption while preventing claim rejections and penalties.

Acceptance Criteria
Auto-resolve Brief Sensor Drops (< jurisdiction threshold)
Given an active visit with EVV check-in recorded and a sensor feed drop lasting <= configured brief_drop_threshold
When connectivity resumes via background auto-rebind
Then EVV check-in/out timestamps remain unchanged
And the gap is marked "auto-resolved" in the audit trail with start/end and duration
And no caregiver prompt is displayed
And visit note vitals are backfilled to cover the gap with source="sensor_backfill"
Raise Exception for Extended Gaps (> jurisdiction threshold)
Given an active visit with EVV check-in recorded and a sensor disconnect exceeding configured jurisdiction_gap_threshold
When the threshold is crossed or the visit ends (whichever occurs first)
Then create an EVV exception of type="ExtendedGap" with exact duration in seconds and status="Open"
And block EVV submission until the exception is resolved or a required reason code is provided
And present a single actionable hint to the caregiver to rebind or add a reason code
And log the hint interaction and resolution outcome in the audit trail
And raise the exception within 5 seconds of threshold breach
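The two threshold scenarios can be sketched as a single classification step. The criteria only define handling below the brief threshold and above the jurisdiction threshold; the intermediate "monitor" band is an assumption, as are the function and state names:

```python
def classify_gap(gap_seconds: float,
                 brief_drop_threshold_s: float,
                 jurisdiction_gap_threshold_s: float) -> str:
    """Map a connectivity gap to its handling. Thresholds are configured per
    jurisdiction; the middle 'monitor' state (keep retrying, no exception yet)
    is an illustrative assumption for gaps between the two thresholds."""
    if gap_seconds <= brief_drop_threshold_s:
        return "auto_resolved"           # timestamps untouched, no prompt
    if gap_seconds <= jurisdiction_gap_threshold_s:
        return "monitor"                 # background retries continue
    return "extended_gap_exception"      # ExtendedGap raised, EVV submission blocked
```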
EVV Timestamp Reconciliation across Rebind
Given late-arriving or skewed sensor backfill during an active or recently closed visit
When reconciling sensor timelines with EVV check-in/out
Then do not shift EVV check-in/out timestamps
And adjust only observational segments while preserving monotonic event order with no overlaps
And if backfilled start precedes the scheduled window by > policy_early_start_tolerance, create an "OutOfWindow" exception
And if backfilled end exceeds the scheduled window by > policy_late_end_tolerance, create an "OutOfWindow" exception
External EVV Provider Sync (API/Webhook) Idempotent Delivery
Given EVV-consistent events (CheckIn, CheckOut, ConnectivityRestored, ExceptionRaised, ExceptionResolved) are generated
When posting to the external EVV provider API/webhook
Then include an idempotency key per event and ensure retries do not create duplicates
And validate the payload against the provider schema before send; reject locally with error details if invalid
And retry failures with exponential backoff up to max_retries, then create an integration exception
And store provider acknowledgment with timestamp and set event delivery status="Delivered"
And achieve P95 delivery latency <= 60 seconds from event creation to provider acknowledgment
Caregiver Hints Only When Action Required
Given background auto-rebind is attempting recovery during an active visit
When drop duration <= brief_drop_threshold
Then do not display any banner/toast/notification to the caregiver
When user action is required (e.g., pairing key expired, device changed, extended gap)
Then display a single in-context hint with primary action ("Rebind" or "Add Reason Code")
And limit to at most one hint per 10 minutes per visit unless state changes
And dismissing or completing the action clears the hint and updates exception status appropriately
And the UI hint meets accessibility contrast and is keyboard/screen-reader reachable
Complete Audit Trail for EVV Alignment Events
Given any connectivity drop, auto-rebind attempt, reconciliation decision, event generation, or exception lifecycle change
Then record an immutable audit entry with timestamp (UTC), visit_id, caregiver_id, device_id, event_type, start/end times, duration (if applicable), prior_state, new_state, reason_code (if any), and actor (system/user)
And allow querying audit entries by visit_id within 200 ms P95 for the last 30 days
And include linkage to external EVV delivery status (Queued, Delivered, Failed) with provider response codes
And make the audit exportable in one click to an EVV compliance report without missing required fields
Audit Trail & Rebind Logging
"As a QA lead, I want a clear, immutable trail of reconnection events so that I can verify issues were resolved without data or compliance gaps."
Description

An immutable, append-only log of connection states and rebind activities tied to the visit, client, caregiver, and device. Each entry captures timestamp, device fingerprint, channel used, reason codes, outcome, duration, and data continuity status, with secrets redacted. Logs are tamper-evident (hash-chained), retained per policy, exportable in one-click audit reports, and queryable in the admin console. Correlation IDs link app telemetry, server ingestion, and EVV events to provide a clean compliance narrative.

Acceptance Criteria
Append-Only Entry with Required Fields on Rebind
Given a caregiver is on an active visit with a paired device
And the device experiences a connection drop and auto-rebind attempts occur
When any connection state change or rebind attempt completes (success or failure)
Then the system appends a new audit log entry tied to the visit, client, caregiver, and device
And the entry includes: timestamp (UTC ISO 8601 with milliseconds), device fingerprint, channel used, reason code, outcome, duration (ms), data continuity status, correlation_id, and event_id
And secrets (keys, tokens, credentials) are redacted or omitted
And the entry is immutable after write; any update/delete attempt is rejected and no prior entries are modified
And the new entry is queryable in the admin console and visible in app telemetry views within 5 seconds of the event
Hash-Chained Tamper Evidence
Given an audit trail exists for a visit with one or more entries
When a new entry is written
Then its stored hash includes the previous entry's hash and the current entry payload to form a chained sequence
And chain verification from first to last entry returns PASS for an unmodified trail
And if any prior entry is altered, chain verification returns FAIL and identifies the first broken index
And the chain verification status is included in audit exports and visible in the admin console
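The hash-chaining scheme above can be sketched with SHA-256; the entry layout and `GENESIS` sentinel are illustrative assumptions, but the verification behavior (PASS for an intact trail, FAIL with the first broken index otherwise) matches the criteria:

```python
import hashlib
import json

def chain_append(chain: list, payload: dict) -> dict:
    """Append a hash-chained audit entry: each entry's hash covers the
    previous entry's hash plus the canonicalized current payload."""
    prev_hash = chain[-1]["hash"] if chain else "GENESIS"
    body = json.dumps(payload, sort_keys=True)
    entry = {
        "payload": payload,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest(),
    }
    chain.append(entry)
    return entry

def verify_chain(chain: list):
    """Return (True, None) for an intact chain, or (False, index) identifying
    the first broken entry."""
    prev_hash = "GENESIS"
    for i, entry in enumerate(chain):
        body = json.dumps(entry["payload"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return (False, i)
        prev_hash = entry["hash"]
    return (True, None)
```

Altering any earlier payload invalidates every subsequent hash, so tampering is detectable at the first modified index without trusting the log writer.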
End-to-End Correlation IDs
Given a connection drop and subsequent rebind occur during a visit
When app telemetry, server ingestion, and EVV events are recorded
Then all related records include a shared correlation_id and unique event_ids
And the correlation_id is present on each audit trail entry and in the one-click export
And querying by correlation_id returns all associated audit entries within the selected date range
Retention Policy Enforcement
Given a retention policy is configured to R years in admin settings
When audit entries are created
Then entries remain queryable and exportable until their retention expiry timestamp
And attempts to delete or modify entries before expiry are blocked and audited as denied
And after expiry, entries are purged within 24 hours and no longer appear in queries or exports
And a retention purge event is recorded in the system audit log with the purge job id and counts (no secrets)
One-Click Audit Report Export
Given an admin views a visit in the console
When they click "Export Audit Report"
Then a downloadable report is generated within 10 seconds
And the report contains all relevant fields per entry: timestamp, device fingerprint, channel, reason code, outcome, duration, data continuity status, correlation_id, event_id, and hash values
And entries are ordered by timestamp ascending and include an overall chain integrity status (PASS/FAIL) and the first broken index if FAIL
And secrets remain redacted in the export
And the export is available in both PDF and CSV formats
Admin Console Query & Filter
Given an admin opens the Audit Trail view
When they filter by any combination of visit id, client id, caregiver id, device fingerprint, date range, channel, reason code, outcome, and correlation_id
Then matching results are returned within 2 seconds for result sets up to 10,000 entries
And results can be sorted by timestamp (asc/desc) and are paginated
And exporting the current result set produces the same entries and ordering as displayed
Accurate Data Continuity Status
Given sensor readings drop for a defined interval during a visit and are later partially or fully backfilled
When the related audit entries are created post-rebind
Then the data continuity status is computed as one of {continuous, backfilled, gap}
And any gap/backfill durations and their start/end timestamps are recorded precisely
And EVV check-in/out events are linked via correlation_id and visible alongside the audit trail in the export
And EVV timestamps are not altered by data continuity status

Battery Scout

Predicts battery depletion days in advance using usage patterns and last-seen voltages. Flags high-risk devices on the day’s schedule, suggests battery swaps, and adds spare reminders to caregiver prep. Prevents mid-visit sensor shutdowns and the missed vitals that trigger rework or denials.

Requirements

Unified Battery Telemetry Ingestion
"As an operations manager, I want all device battery and usage data to sync reliably into CarePulse so that battery predictions reflect the latest real-world conditions and I can act before failures occur."
Description

Implement ingestion and normalization of battery-related telemetry from IoT sensors and mobile devices, including voltage readings, last-seen timestamps, transmission frequency, and usage events. Support vendor APIs, webhooks, MQTT/BLE gateways, and manual entry fallbacks. Normalize units and sampling rates, de-duplicate records, handle clock skew, and map each reading to the correct device, patient, and upcoming visit. Provide offline caching on mobile with retry, error handling with backoff, and health metrics for data freshness. Ensure data is HIPAA-safe, encrypted in transit/at rest, and constrained to least-necessary fields for prediction.

Acceptance Criteria
Multi-Channel Ingestion (APIs, Webhooks, MQTT/BLE, Manual)
- Given a registered device and valid credentials, When telemetry is received via vendor REST API, Then the request is authenticated and a 2xx acknowledgment is returned and the payload is persisted to the raw ingestion queue within 200 ms.
- Given a signed webhook payload, When the HMAC signature is verified against the shared secret, Then the payload is accepted; When verification fails, Then a 401 is returned and nothing is persisted.
- Given an MQTT message on the authorized topic, When the client presents a valid client certificate, Then the message is accepted, parsed to canonical schema, and queued; otherwise it is rejected with reason=AUTH_FAILED.
- Given the vendor API responds with 429 or 5xx, When polling, Then the client honors Retry-After (if present) and uses exponential backoff (1s doubling to max 5m) with jitter and resumes without data loss.
- Given a caregiver submits a manual battery percentage on mobile, When the value is between 0 and 100 inclusive, Then the record is created with source=manual and validation_status=valid; When outside range, Then the submission is blocked with an inline error.
- Given any accepted source, When normalized, Then a canonical record contains fields {record_id, device_id, metric_type, value, unit, measured_at_original, source} and is enqueued for processing.
Normalization of Units and Sampling Rates
- Given a voltage reading of 2950 mV, When normalized, Then it is stored as value=2.95 and unit=V with precision to 2 decimal places.
- Given a battery_fraction=0.73, When normalized, Then it is stored as value=73 and unit=%.
- Given readings arrive at irregular intervals, When normalized, Then readings are bucketed into 1-minute intervals using last-observation-carried-forward for gaps <= 5 minutes and flagged quality=resampled.
- Given a value/unit combination is invalid (e.g., percent < 0 or > 100), When processing, Then the record is rejected with error_code=UNIT_VALIDATION_ERROR and is not forwarded to downstream consumers.
Record Integrity: De-duplication and Clock Skew Correction
- Given two records with the same record_id from webhook retries, When processed, Then only one canonical record is stored and the duplicate is dropped idempotently.
- Given two readings for the same device where |measured_at_original difference| <= 2 seconds and value+unit are identical, When processed, Then the second is treated as a duplicate and dropped with reason=TIME_WINDOW_DUPLICATE.
- Given measured_at_original differs from server_received_at by > 120 seconds and gateway_offset_ms is provided, When processed, Then measured_at_corrected = measured_at_original + gateway_offset_ms and quality=clock_skew_corrected.
- Given measured_at_original differs from server_received_at by > 10 minutes and no offset is provided, When processed, Then measured_at_corrected is capped to server_received_at and quality=clock_skew_capped.
- Given readings arrive out of order, When persisted, Then the ordered stream for analytics is sorted by measured_at_corrected.
Mapping Readings to Device, Patient, and Upcoming Visit
- Given a canonical record with device_id, When processed, Then it is linked to a registered device; if the device is unknown, Then the record is quarantined with status=unmapped_device and is not used for predictions.
- Given a device with an active patient assignment timeline, When measured_at_corrected falls within an assignment window, Then the reading is linked to that patient; if it straddles a reassignment boundary at time T, Then readings < T map to the previous patient and >= T map to the new patient.
- Given the patient has scheduled visits, When there is a visit with start_time within the next 6 hours, Then visit_id is set to the nearest upcoming visit by start_time; if multiple candidates exist, Then choose the earliest start_time.
- Given any mapping action, When applied, Then an audit log entry is written capturing prior and new linkage, actor=system, and timestamp.
Mobile Offline Caching and Retry with Backoff
- Given the mobile app is offline, When telemetry or manual entries are created, Then they are cached locally encrypted and queued up to 1,000 records or 72 hours of data, whichever limit is reached first.
- Given connectivity is restored, When retrying, Then records are sent in FIFO order with exponential backoff (initial 1s, factor 2, max 15m) and 0–20% jitter until acknowledged.
- Given the server acknowledges a record, When received, Then the record is removed from the local queue; after 5 consecutive 5xx responses the record is marked deferred and a non-blocking UI badge indicates pending sync.
- Given a record is retried after partial upload, When the server detects a duplicate record_id, Then the duplicate is ignored without side effects (idempotent).
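The retry schedule above (initial 1s, factor 2, 15-minute cap, 0–20% jitter) reduces to a small delay function, sketched here with an injectable random source so the jitter is testable; the function name and defaults beyond the stated parameters are assumptions:

```python
import random

def backoff_delay_s(attempt: int, base: float = 1.0, factor: float = 2.0,
                    cap: float = 900.0, max_jitter: float = 0.2,
                    rng=random.random) -> float:
    """Delay before retry number `attempt` (starting at 0): exponential growth
    capped at 15 minutes (900s), plus 0-20% multiplicative jitter."""
    delay = min(base * (factor ** attempt), cap)
    return delay * (1.0 + max_jitter * rng())
```

Jitter spreads out retries from many devices that lost connectivity at the same moment, avoiding a synchronized thundering herd when the network returns.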
Security, HIPAA Minimization, and Encryption
- Given any ingestion channel, When transmitting, Then transport security is TLS 1.2+; MQTT/BLE gateways use mTLS client certificates; REST and webhooks use scoped OAuth2/Bearer tokens.
- Given data is persisted (queues, caches, stores), When at rest, Then it is encrypted with AES-256 and keys are managed via KMS with role-based access controls and audit logging enabled.
- Given an incoming payload contains PHI fields (e.g., name, DOB, address), When normalizing, Then only the whitelisted fields are retained and all non-whitelisted fields are dropped before storage; a redaction_count is logged.
- Given credentials are invalid or expired, When a request is made, Then ingestion is rejected with 401/403 and no payload bytes are stored beyond transient memory.
Data Freshness Health Metrics Exposure
- Given a device with expected_transmission_interval=300s, When no reading is received after 600s, Then freshness_status=stale and last_seen_age_s reflects the gap.
- Given ingestion operates under normal load, When measuring, Then the 95th percentile latency from receive_to_persist is <= 60 seconds.
- Given 24 hours of telemetry, When computing, Then on_time_rate = percentage of expected intervals with >=1 reading and is healthy if >= 90%.
- Given a GET request to /telemetry/health?device_id=..., When authorized, Then the response includes last_seen_at, last_seen_age_s, expected_interval_s, freshness_status, on_time_rate, and ingestion_latency_p95 with HTTP 200.
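The staleness rule implied by the 300s-to-600s example (stale once the gap exceeds twice the expected interval) can be sketched as follows; the 2x multiplier is an inference from that single example, and the field names mirror the health endpoint above:

```python
def freshness(last_seen_at: float, expected_interval_s: float, now: float) -> dict:
    """Compute freshness fields for /telemetry/health. A device is treated as
    stale once its gap exceeds 2x the expected transmission interval
    (assumed from the 300s -> 600s example in the criteria)."""
    age = now - last_seen_at
    return {
        "last_seen_age_s": age,
        "freshness_status": "stale" if age > 2 * expected_interval_s else "fresh",
    }
```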
Depletion Prediction Engine
"As a caregiver, I want accurate predictions of when a patient’s sensor battery will die so that I can plan swaps before my visit and avoid missed vitals."
Description

Deliver a predictive service that estimates days-to-depletion per device using last-seen voltages, historical discharge curves, temperature and usage patterns, and device model characteristics. Output a predicted depletion date with confidence bounds and a health score, updating on new telemetry and at least nightly. Provide model versioning, per-model calibration, fallback heuristics when data is sparse, and configurable horizons (e.g., 1, 3, 7 days). Expose results via API and internal cache with SLAs, include guardrails against stale inputs, and support A/B evaluation for threshold tuning.

Acceptance Criteria
Daily and Event-Driven Prediction Updates
- Given new telemetry for device D is ingested, When processing completes, Then D's prediction is recomputed and written to the internal cache within 5 minutes (p95) and computed_at is updated.
- Given no new telemetry in the last 24 hours, When the nightly job runs, Then predictions for 100% of active devices are recomputed by 02:00 UTC and caches are refreshed.
- Then GET /predictions?device_id=D reflects the latest recomputation within 1 minute of cache write.
- When a recomputation attempt fails, Then the API returns prediction_status='unavailable' with unavailability_reason='compute_error' for device D until the next successful run (no stale prediction is served).
Prediction Output Contract and Data Fields
- Then each API response contains: device_id (string), horizon_days (integer), predicted_depletion_date (ISO-8601 date or null), confidence_lower_date (ISO-8601 date or null), confidence_upper_date (ISO-8601 date or null), confidence_level (float in (0,1], default 0.8), health_score (integer 0–100 or null), model_version (string), prediction_status (enum: predicted|heuristic|unavailable), unavailability_reason (nullable string), computed_at (ISO-8601 timestamp).
- Then confidence_lower_date <= predicted_depletion_date <= confidence_upper_date when prediction_status in {'predicted','heuristic'}; otherwise these fields are null.
- Then health_score is present and in [0,100] when prediction_status in {'predicted','heuristic'}; otherwise null.
- Given a requested horizon_days in {1,3,7}, Then the response uses that horizon; When omitted, Then horizon_days defaults to 7.
- Then all date/timestamp fields are UTC ISO-8601 and pass JSON schema validation; responses are immutable for a given device_id+horizon+computed_at tuple.
Fallbacks for Sparse or Stale Inputs
- Given device D has fewer than 3 voltage readings or last_telemetry_at > 24h ago, When a prediction is produced, Then prediction_status='heuristic' and prediction_method='fallback_v1' is included, and the confidence interval width is at least 2x the median width of model-based predictions for the same device model over the past 7 days.
- Given last_telemetry_at > 72h ago or there is no voltage data, Then prediction_status='unavailable' with unavailability_reason in {'stale_input','insufficient_data'} and all prediction/bounds fields are null.
- Guardrail: No response is served with computed_at older than 24h; in such cases, prediction_status='unavailable' and unavailability_reason='stale_prediction' is returned.
- Given temperature or usage features are missing, Then the model proceeds using available signals without error and records a missing_features array in the response.
Configurable Horizons (1, 3, 7 Days) and Risk Flags
- Given a request with horizons=1,3,7, Then the API returns per_horizon entries for each requested value, each with predicted_depletion_date, confidence bounds, and risk_flag (true iff depletion is predicted within the horizon).
- Given conservative_mode=true, Then risk_flag is based on confidence_upper_date <= now + horizon_days; otherwise it is based on predicted_depletion_date.
- Given no horizons parameter, Then the API returns a single entry for horizon_days=7.
- Then cache keys include device_id, horizon_days, and conservative_mode to avoid cross-horizon contamination; cache hit ratio >= 90% for read traffic under steady state.
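The risk_flag rule can be sketched directly from the criteria as written (conservative mode compares the confidence upper bound against the horizon, otherwise the point prediction); the function signature is an illustrative assumption:

```python
from datetime import date, timedelta

def risk_flag(predicted_depletion: date, confidence_upper: date,
              horizon_days: int, today: date, conservative: bool) -> bool:
    """True iff depletion is predicted within the horizon. In conservative
    mode the flag is based on confidence_upper_date, per the criteria;
    otherwise on predicted_depletion_date."""
    basis = confidence_upper if conservative else predicted_depletion
    return basis <= today + timedelta(days=horizon_days)
```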
API and Cache Performance SLAs
- For cached reads of GET /predictions, P95 latency <= 300 ms and P99 <= 600 ms over rolling 1-hour windows.
- For recomputations triggered by new telemetry, P95 end-to-end time (ingest to cache write) <= 5 minutes; P99 <= 10 minutes.
- Prediction cache TTL is 15 minutes; no response returns a cache entry older than TTL unless prediction_status='unavailable' and unavailability_reason='stale_prediction'.
- Service availability for GET /predictions >= 99.9% monthly.
- Nightly coverage: >= 95% of active devices have prediction_status in {'predicted','heuristic'} after the nightly job completes.
Model Versioning, Rollout, and Calibration Quality
- Every response includes model_version and calibration_id that map to entries in the model registry; absence causes the request to fail validation in non-production and to return prediction_status='unavailable' with unavailability_reason='config_error' in production.
- The serving stack supports side-by-side operation of at least two model versions; per-device routing is determined by experiment_id or a server rule and is observable in logs/metrics.
- Rollout controls allow promoting or rolling back the default model_version within 10 minutes without downtime; all changes are audit-logged with actor and timestamp.
- On a rolling 30-day holdout set: MAE for the 7-day horizon <= 1.5 days and for the 3-day horizon <= 1.0 day.
- Calibration: For nominal 80% intervals, observed coverage is within [72%, 88%]; for nominal 95% intervals, observed coverage is within [90%, 98%].
A/B Evaluation for Threshold Tuning
- The system supports assigning devices to cohorts {'control','test'} with configurable split ratios (e.g., 50/50) and persists assignments for at least 30 days.
- High-risk flag thresholds (per horizon and per device model) are configurable per cohort; changes propagate to serving within 5 minutes and are audit-logged.
- Daily metrics per cohort and horizon are computed and retrievable via API: precision, recall, FPR for risk_flag predicting true depletion within the horizon, and calibration coverage error; requests return within 500 ms (P95).
- An endpoint exists to promote a cohort's thresholds to default; the operation is idempotent and completes within 2 minutes.
High-Risk Scoring & Thresholds
"As a clinical coordinator, I want configurable risk tiers for device batteries so that I can align alerts and actions with patient criticality and reduce false alarms."
Description

Classify devices as low/medium/high risk for the current and upcoming visit windows based on predicted depletion date, confidence, and data freshness. Allow configurable thresholds per device type and care program, including conservative modes for critical patients. Include rules for stale or missing telemetry (e.g., auto-elevate risk). Provide overrides and justifications, with audit of who changed thresholds and when.

Acceptance Criteria
Risk classification for current and upcoming visits using predicted depletion
Given device type "BP Cuff" in care program "CHF" with configured thresholds: High if predicted depletion is on/before end of current visit OR ≤24h before start of next scheduled visit; Medium if >24h and ≤72h before start of next scheduled visit; Low if >72h before start of next scheduled visit And the current visit window is 2025-09-05T14:00Z–15:00Z And the next scheduled visit window is 2025-09-06T14:00Z–15:00Z And the model predicts depletion at 2025-09-06T10:00Z with sufficient data freshness and confidence When the risk engine evaluates the device Then the device is classified High risk And the risk classification includes the visit window used (current/upcoming) and timestamps considered
Threshold configuration by device type and care program with conservative mode
Given default global thresholds exist And for device type "Glucometer" in care program "Critical Care" the standard thresholds are configured as: High ≤24h, Medium 24–72h, Low >72h before next visit start And a conservative mode for the program is configured as: High ≤48h, Medium 48–96h, Low >96h before next visit start And a device of type "Glucometer" is assigned to a patient in the "Critical Care" program And the model predicts depletion 36h before the next visit start When conservative mode is enabled for the program or patient Then the device is classified High risk using the conservative thresholds And when conservative mode is disabled Then the device is classified Medium risk using the standard thresholds And device-type+program thresholds take precedence over global defaults; if missing, global defaults are applied
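The tiering logic in these scenarios reduces to comparing hours-until-depletion against two cut points that shift in conservative mode. A minimal sketch, using the example thresholds (standard 24h/72h, conservative 48h/96h) as placeholders for the configurable values:

```python
def classify_risk(hours_before_next_visit: float, conservative: bool = False) -> str:
    """Map hours between predicted depletion and next visit start to a tier.

    Thresholds mirror the example scenarios; real values are configured
    per device type and care program, falling back to global defaults.
    """
    high_cut, medium_cut = (48, 96) if conservative else (24, 72)
    if hours_before_next_visit <= high_cut:
        return "High"
    if hours_before_next_visit <= medium_cut:
        return "Medium"
    return "Low"
```

With depletion predicted 36h before the next visit, this returns Medium in standard mode and High in conservative mode, matching the Glucometer scenario.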
Auto-elevation on stale or missing telemetry
Given a freshness threshold of 12h and a missing-telemetry threshold of 24h configured for device type "Pulse Oximeter" And the last-seen telemetry for the device is 14h ago And the predicted depletion otherwise maps to Low risk by time thresholds When the risk engine evaluates the device Then the device risk is auto-elevated to at least Medium due to stale telemetry And if last-seen telemetry exceeds 24h (missing), the device risk is High regardless of predicted depletion time And the risk record stores the freshness thresholds evaluated and the reason code "STALE_TELEMETRY" or "MISSING_TELEMETRY"
Confidence-aware risk escalation
Given a confidence threshold of 0.70 configured for prediction reliability And a device’s time-based risk evaluates to Medium using thresholds And the prediction confidence is 0.55 (<0.70) When the risk engine evaluates the device Then the device risk is escalated by one level (to High) due to low confidence, capped at High And if the time-based risk is Low and confidence <0.70, escalate to Medium And if confidence ≥0.70, do not escalate And the risk record stores the applied confidence, threshold, and whether escalation occurred
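The escalation rule is a one-level bump capped at High. A sketch of that mapping (function and level names are illustrative):

```python
LEVELS = ["Low", "Medium", "High"]

def apply_confidence(time_based_risk: str, confidence: float,
                     threshold: float = 0.70) -> str:
    """Escalate the time-based risk one level when prediction
    confidence falls below the configured threshold, capped at High."""
    if confidence >= threshold:
        return time_based_risk
    idx = min(LEVELS.index(time_based_risk) + 1, len(LEVELS) - 1)
    return LEVELS[idx]
```

The risk record would additionally store the applied confidence, the threshold, and whether escalation occurred, per the criteria above.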
Manual risk override with justification and scope
Given a user with role "Supervisor" has permission to override device risk And a device currently classified Medium When the user submits an override to set risk to Low with scope "this device" and duration 72h and justification "New batteries installed" Then the system accepts the override only if a non-empty justification is provided And the displayed risk becomes Low with an indication it is overridden and the expiration timestamp And an audit entry is created capturing user, timestamp, previous risk, new risk, scope, duration, and justification And upon expiration or manual revert by an authorized user, the risk automatically reverts to the computed value And unauthorized users or overrides without justification are rejected with an error
Threshold change auditing and export
Given an admin updates High/Medium/Low thresholds for device type "BP Cuff" in care program "CHF" When the admin saves the change Then an immutable audit entry is recorded with user, timestamp, scope (device type, program), previous values, new values, and change reason (optional) And subsequent risk calculations use the new thresholds; calculations prior to the change remain based on old thresholds And the audit history can be filtered by date range, device type, and program and exported to CSV with all captured fields
Schedule Flagging & Swap Suggestions
"As a caregiver on today’s route, I want high-risk devices highlighted on my schedule with clear swap suggestions so that I can address them efficiently during the visit."
Description

Integrate risk outcomes into the daily schedule and route views by flagging visits linked to high-risk devices with clear, accessible indicators. Surface inline suggestions such as "Swap battery at start of visit" with estimated time impact and required battery type. Provide quick actions to confirm swap done, view device details, and re-check prediction post-swap. Ensure mobile-first UI, offline tolerance, and WCAG-compliant color/contrast with icon + text redundancy.

Acceptance Criteria
High-Risk Visit Flagging in Daily Schedule and Route Views
Given a caregiver opens today's schedule or route view on a mobile device (viewport width 320–414 px), When a visit includes any device with a predicted battery depletion within 72 hours, Then the visit displays a high-risk badge with icon plus text label "Battery at risk", And the badge color and text contrast ratio is >= 4.5:1, And the same badge appears in the route list and the map callout for that visit, And the badge is visible without horizontal scrolling at 320 px width, And the badge is focusable and announced by screen readers with the accessible name "Battery at risk".
Inline Swap Suggestion Content and Accuracy
Given a visit is flagged high-risk, When the caregiver taps the badge or expands the visit row, Then an inline suggestion displays the action "Swap battery at start of visit", And it lists the required battery type(s) and quantity per device, And it shows an estimated time impact in whole minutes (1–10), And all values are populated from device model metadata and configuration, And if multiple devices are linked, the suggestion enumerates each device with its specific battery type and quantity.
Quick Actions — Confirm Swap, View Details, Re-check Prediction
Given the inline suggestion is visible, When the caregiver taps "Confirm swap", Then the system records the event with user ID, visit ID, device ID(s), and timestamp to local storage within 100 ms, And a success toast "Swap recorded" appears within 1 second, And the visit badge updates within 5 seconds based on recalculated risk; if risk falls below threshold, the badge is removed.
Given the inline suggestion is visible, When the caregiver taps "View device details", Then a panel opens showing last-seen voltage, last check-in timestamp, firmware version (if available), and battery type, And all fields are labeled and readable at 320 px width.
Given the inline suggestion is visible and connectivity is available, When the caregiver taps "Re-check prediction", Then a new prediction request is sent and returns within 5 seconds, And the UI updates the risk status accordingly; if connectivity is unavailable, the action is disabled with helper text "Re-check requires connection".
Offline Tolerance for Flags, Suggestions, and Swap Confirmation
Given the app has successfully synced within the last 24 hours, When the device is offline, Then high-risk badges and inline suggestions render from cached data, And "Confirm swap" is available and queues the event locally, And "Re-check prediction" is disabled with explanatory helper text, And upon reconnect, queued swap events sync to the server within 60 seconds and show a success confirmation, And the risk status refreshes automatically within 10 seconds of successful sync.
WCAG Compliance and Touch Target Standards
Given any screen containing risk badges and quick actions, When evaluated against WCAG 2.1 AA, Then text contrast is >= 4.5:1 and icon/graphics contrast is >= 3:1, And risk state is conveyed with icon + text (not color alone), And all interactive elements have a minimum 44x44 dp touch target, are keyboard-focusable in a logical order, and expose accessible names and roles that match visible labels.
Mobile-First Layout and Performance Budget
Given a mid-tier mobile device on an average cellular network, When opening today's schedule containing at least 10 visits with 3+ flagged items, Then First Contentful Paint occurs in <= 2.5 seconds, And rendering badges and inline suggestions adds <= 300 ms to interactive readiness, And cumulative layout shift caused by badge/suggestion rendering is <= 0.1, And additional downloaded UI payload attributable to this feature is <= 50 KB compressed.
Caregiver Prep Spare Reminders
"As a caregiver, I want my prep checklist to include the right number and type of spare batteries so that I don’t run out in the field."
Description

Automatically add spare battery reminders to the caregiver’s prep checklist based on the day’s assigned visits and predicted depletion risk, calculating required quantities and battery types. Sync reminders to mobile, allow print/export, and track completion status. Adjust reminders dynamically if routes change, and avoid duplicate suggestions across clustered visits. Provide admin configuration for default spare counts and vendor-specific battery mappings.

Acceptance Criteria
Auto-Add Spare Battery Reminders for High-Risk Devices
Given a caregiver has assigned visits for the current day And Battery Scout flags one or more devices on those visits as high-risk for depletion during the visit window When the prep checklist is generated or refreshed Then the system creates spare battery reminders for each required battery type And each reminder includes battery type, required quantity, associated visit identifiers, and caregiver identifier And required quantity is calculated according to the active admin default rules for that battery type
De-duplicate and Consolidate Reminders Across Same-Day Visits
Given multiple same-day visits on a caregiver’s route require the same battery type When generating prep reminders Then only one reminder per caregiver per battery type is created And the quantity equals the count of distinct high-risk devices of that type across the route plus the configured default spare count for that type And no duplicate reminders for the same battery type appear on the checklist
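The consolidation rule (one reminder per battery type; quantity = distinct high-risk devices of that type plus the configured default spare count) can be sketched as follows. Input shapes are assumptions for illustration.

```python
def build_reminders(high_risk_devices, default_spares):
    """Consolidate one spare-battery reminder per battery type.

    high_risk_devices: iterable of (device_id, battery_type) pairs for
    the day's route; default_spares: admin-configured spare count per
    battery type. Returns {battery_type: required_quantity}.
    """
    distinct = {}  # battery_type -> set of distinct device IDs
    for device_id, battery_type in high_risk_devices:
        distinct.setdefault(battery_type, set()).add(device_id)
    return {
        btype: len(ids) + default_spares.get(btype, 0)
        for btype, ids in distinct.items()
    }
```

Because device IDs are deduplicated per type, clustered visits to the same device never inflate the quantity or produce duplicate reminders.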
Dynamic Updates on Route Changes
Given a caregiver’s assigned visits are added, removed, or reassigned, or Battery Scout risk flags change When the change is saved Then spare battery reminders are recalculated and updated within 60 seconds And obsolete reminders are removed And new or changed reminders are synced to the caregiver’s mobile app
Sync to Mobile and Print/Export Availability
Given a prep checklist with spare battery reminders exists When the caregiver opens the mobile app while online Then reminders appear within 30 seconds of generation or update And each reminder displays battery type and required quantity And the checklist can be exported to PDF and CSV including caregiver name, date, battery type, quantity, and related visit references
Completion Status Tracking
Given spare battery reminders are present on the prep checklist When a caregiver marks a reminder as complete on mobile or web Then the reminder status changes to Complete with timestamp and user ID And the completion status syncs across devices within 30 seconds And completed reminders remain visible and are read-only
Admin Configuration for Counts and Vendor Mappings
Given an admin with configuration permissions When they set default spare counts per battery type and vendor-to-battery-type mappings Then the system validates and saves the settings (non-negative counts, valid battery type codes) And newly generated reminders apply the updated settings immediately And existing not-yet-completed reminders are recalculated within 5 minutes And changes are audit-logged with admin user and timestamp
Alerting & Escalation
"As an operations manager, I want timely alerts and escalations for imminent battery failures so that my team can intervene before visits are impacted."
Description

Send proactive notifications when a device is predicted to deplete within configurable horizons or during a scheduled visit window. Support in-app, push, and SMS channels with quiet hours, rate limiting, and opt-in/out preferences. Provide escalation to on-call supervisors when high-risk alerts are unacknowledged, with clear context (patient, device, predicted depletion date, next visit). Localize content and keep messages HIPAA-minimal while actionable.

Acceptance Criteria
Configurable Prediction Horizon Alerts
Given the organization-level prediction horizon is set to 72 hours When a device is predicted to deplete within 72 hours based on usage patterns and last-seen voltage Then the system generates one alert incident for that device and associated patient
Given a device is predicted to deplete outside the configured horizon When alert evaluation runs Then no alert is generated
Given the prediction horizon is updated by an admin When the scheduler next evaluates devices Then alerts are recalculated to match the new horizon without duplicating existing incidents
Visit Window Risk Alerts
Given a scheduled visit for the patient overlaps the predicted depletion window When the daily schedule is published or refreshed Then the patient's visit is marked High-Risk in the schedule and an alert is sent to the assigned caregiver before their shift start
Given the caregiver is currently checked in to an active visit When the device is predicted to deplete before the visit end Then an immediate in-app alert is delivered to the caregiver device
Channel Delivery and Preferences
Given a user has opted in to in-app and push but opted out of SMS When an alert is generated for that user Then the alert is delivered via in-app and push only
Given the user's push token is invalid When an alert is sent Then push is skipped with no retry storm and delivery continues via other opted-in channels
Given the user's phone number is unverified When an alert would be sent via SMS Then SMS is not sent and the event is logged
Quiet Hours and Deferred Delivery
Given a recipient's quiet hours are set from 21:00 to 07:00 local time When an alert is generated at 22:15 Then push and SMS are suppressed and the alert is queued for delivery after 07:00, subject to rate limiting
Given quiet hours end at 07:00 When queued alerts are released Then only one notification per incident per channel is sent at or after 07:00
Given in-app notifications are permitted during quiet hours When an alert is generated during quiet hours Then only the in-app badge or inbox entry is updated without audible or visual push/SMS interruptions
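Because the example window (21:00–07:00) spans midnight, a naive start-to-end comparison fails. A sketch of the membership check, using the scenario's window as a default (real windows are per-recipient configuration):

```python
from datetime import time

def in_quiet_hours(local: time, start: time = time(21, 0),
                   end: time = time(7, 0)) -> bool:
    """True when a local time falls inside the quiet-hours window.

    Handles windows that span midnight (start > end) as well as
    same-day windows; the end boundary is exclusive so release
    happens "at or after" the end time, per the scenario above.
    """
    if start <= end:
        return start <= local < end
    return local >= start or local < end
```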
Rate Limiting and Deduplication
Given the rate limit is configured as 1 notification per incident per user per channel per 6 hours When five matching alerts are triggered within 6 hours Then only the first notification per channel is delivered and the others are suppressed
Given notifications are suppressed by rate limiting When delivery outcomes are logged Then each suppression is recorded with reason "rate_limited" and the incident ID
Given the rate limit window elapses When the incident remains active and a new evaluation triggers Then one additional notification per channel may be sent
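The per-(incident, user, channel) limit can be sketched as a small keyed rate limiter. This is an in-memory illustration only; a production version would need shared storage and the suppression logging described above.

```python
from datetime import datetime, timedelta

class RateLimiter:
    """Allow one notification per (incident, user, channel) per window."""

    def __init__(self, window_hours: float = 6):
        self.window = timedelta(hours=window_hours)
        self.last_sent = {}  # (incident_id, user_id, channel) -> datetime

    def try_send(self, incident_id, user_id, channel, now: datetime):
        """Return (allowed, suppression_reason)."""
        key = (incident_id, user_id, channel)
        prev = self.last_sent.get(key)
        if prev is not None and now - prev < self.window:
            return False, "rate_limited"  # logged with incident ID per the spec
        self.last_sent[key] = now
        return True, None
```

Once the window elapses, the next evaluation for a still-active incident is allowed again, matching the third scenario.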
Escalation to On-Call Supervisor on Unacknowledged High-Risk Alerts
Given a high-risk alert is addressed to a caregiver and the escalation delay is configured as 15 minutes When the alert is not acknowledged in-app within 15 minutes of the last delivery Then the on-call supervisor for the patient's team is escalated via their opted-in channels with the alert context
Given the alert is acknowledged by any assigned caregiver When the system checks escalation Then no escalation is sent and any pending escalation is canceled
Given an escalation has been sent and the escalation cooldown is 60 minutes When additional duplicate alerts occur within 60 minutes for the same incident Then no further escalations are sent
Localized, HIPAA-Minimal, Actionable Content
Given the recipient's preferred language is Spanish (es) When the alert is delivered Then the message content uses the Spanish template for the alert type
Given an alert is delivered via push or SMS When the message is rendered Then it includes only: patient alias, device type/model, predicted depletion date/time with timezone, next visit date/time, and a short action phrase; it excludes PHI such as full name, DOB, full address, or detailed clinical notes
Given an alert is viewed in-app by an authorized user When the user taps the notification Then the in-app detail includes full context and the ability to acknowledge the alert, view the next visit, and open the patient/device record
Given localization files are missing for a recipient locale When an alert is sent Then the message falls back to English with correct variable interpolation
Audit Log & Compliance Reporting
"As a compliance officer, I want an audit trail of battery risk decisions and actions so that I can prove we took reasonable steps to prevent missed vitals."
Description

Record prediction inputs, model version, outputs, thresholds applied, alerts sent, user overrides, and swap confirmations with timestamps and actor identity. Integrate with CarePulse’s one-click, audit-ready reports to demonstrate proactive risk management and reduce denial risk due to missed vitals. Provide export to CSV/PDF, retention policies, and integrity checks to ensure reports are defensible during audits.

Acceptance Criteria
Prediction Event Logging Completeness
Given Battery Scout generates a prediction for a device When the prediction is persisted Then the audit log entry includes device_id, org_id, prediction_id, model_version, input_features (names and values), thresholds_applied, outputs (risk_level, predicted_depletion_date, confidence), computation_timestamp (ISO 8601), actor identity, and correlation_id And the entry is immutable with a unique audit_id and created_at timestamp And the entry is queryable via the audit API within 2 seconds of persistence
Alert Creation and Delivery Audit Trail
Given a prediction meets or exceeds the high-risk threshold When an alert is generated and routed Then the audit log captures alert_id, originating prediction_id, recipients, delivery channels, message_template_id with resolved parameters, and created_at timestamp And delivery attempts per channel are recorded with status (Delivered/Failed), attempt timestamps, and error codes if any And alert suppressions due to user preferences or schedule exclusions are logged with suppression_reason and actor identity
User Override Traceability
Given a user chooses to override a prediction or alert (acknowledge, snooze, false positive, escalate) When the override is submitted Then the audit log records override_id, override_type, reason (minimum 10 characters), actor_id and role, timestamp, and linked prediction_id or alert_id And the effect on future alerts (e.g., snooze_until, escalation_target) is recorded And overrides without a reason are rejected with a validation error and no audit log entry is created
Battery Swap Confirmation Auditability
Given a caregiver confirms a battery swap for a device When the confirmation is submitted Then the audit log stores swap_id, device_id, caregiver_id, location (GPS or site_id), timestamp, pre_swap_voltage and post_swap_voltage with data_source (sensor/manual) when available, and links to preceding alert/prediction And the system requires at least one voltage reading (pre or post) to proceed and logs the validation outcome And open high-risk alerts for the device are auto-resolved with resolution="Swap Confirmed" and the resolution is logged
Retention Policy Enforcement and Legal Hold
Given the organization has configured an audit log retention period When an entry reaches end-of-retention Then the system purges the entry and creates a purge_receipt with purged_ids/count, timestamp, actor="system", and a cryptographic hash of the purged content And non-admin attempts to delete or edit audit entries before expiry are blocked and logged as security events And admins can place and remove legal holds scoped by org/device/date range, preventing purge; hold placement/removal is logged with actor and timestamp
Integrity Verification and Tamper Evidence
Given audit entries are append-only and hash-chained When the daily integrity verification job runs Then the system validates the chain and emits an integrity report with status (Pass/Fail), mismatch_count, affected_audit_ids, and timestamp And any detected mismatch triggers a High severity incident alert and is logged with correlation_id And exports include a manifest with root hash and per-entry hashes to enable offline verification
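A hash chain of this kind links each entry's hash to its predecessor's, so altering any entry invalidates recomputation from that point. The sketch below is one plausible construction (SHA-256 over canonical JSON) and not the mandated scheme; function names are illustrative.

```python
import hashlib
import json

def chain_hash(prev_hash: str, entry: dict) -> str:
    """Hash an entry together with its predecessor's hash (append-only chain)."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(entries, recorded_hashes, genesis: str = "0" * 64):
    """Recompute the chain and report (status, mismatched_indices),
    mirroring the daily integrity job's Pass/Fail report."""
    prev, mismatches = genesis, []
    for i, (entry, recorded) in enumerate(zip(entries, recorded_hashes)):
        if chain_hash(prev, entry) != recorded:
            mismatches.append(i)
        prev = recorded
    return ("Pass" if not mismatches else "Fail"), mismatches
```

The export manifest's root hash is then simply the last link of the chain, enabling the offline verification the criteria call for.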
Audit-Ready CSV/PDF Export and Report Integration
Given an authorized user requests an audit export for a date range and organization When the one-click report is generated Then CSV and PDF outputs contain timestamps, actor identity, device identifiers, model_version, inputs, outputs, thresholds, alerts, overrides, swaps, integrity hashes, and retention status And for up to 100k entries the export completes within 60 seconds; larger datasets queue a job that completes within 15 minutes And the request and download events are themselves logged, and the PDF includes page numbers and "Audit-Ready" watermark

Tap Test

A 15‑second pre-visit diagnostic that verifies pairing, signal strength, last reading time, clock sync, and sample value sanity. It clears green when everything is ready or offers step-by-step self-heal actions when it is not, cutting in-room troubleshooting and getting caregivers to care tasks faster.

Requirements

One-Tap Diagnostic Orchestrator (15s SLA)
"As a caregiver about to start a visit, I want a single tap that runs all readiness checks within 15 seconds so that I can confirm everything is ready without delaying care."
Description

Implements a single-tap, time-boxed diagnostic that runs all checks (device pairing, connectivity, clock sync, last reading freshness, and sample value sanity) in parallel with a hard 15-second completion target. Orchestrates asynchronous probes with graceful timeouts, aggregates results into a normalized readiness payload, and exposes a consistent interface to the app shell. Provides progress feedback, cancellability, and resilient retries within the time budget. Supports offline-degraded behavior where only local checks run. Integrates with device SDKs, CarePulse’s session context (scheduled visit, caregiver, patient), and telemetry for performance monitoring. Ensures permissions prompts are pre-fetched and handled without blocking the SLA. Outcome is a deterministic pass/warn/fail signal consumed by UI and compliance logs.

Acceptance Criteria
SLA Time-Boxed Execution and Cancellation
Given a caregiver is in an active CarePulse session and Tap Test is available When they initiate the One-Tap Diagnostic Then orchestration begins within 200ms of the tap And a final outcome is produced in <= 15,000ms measured by a monotonic device clock And no probe continues executing after 15,000ms from initiation And if the caregiver cancels during execution, all probes terminate within 500ms, resources are released, and the run is marked 'cancelled' (not logged as a readiness result)
Parallel Probes, Timeouts, and Retries Within Budget
Given the required checks are pairing, connectivity, clock sync, last reading freshness, and sample value sanity When orchestration starts Then all checks are dispatched concurrently without mutual blocking And each check has a first-attempt timeout of 3,000ms and may retry once only if sufficient time remains to keep total runtime <= 15,000ms And timed-out or exhausted-retry probes record status 'timed_out' or 'failed' with a reason_code And the orchestrator continues to completion regardless of individual probe timeouts or failures
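The dispatch pattern above (concurrent probes, per-attempt timeout, one retry, run to completion regardless of individual failures) can be sketched with asyncio. This is a simplified illustration: it omits the "retry only if budget remains" refinement and cancellation handling, and all names are hypothetical.

```python
import asyncio

async def run_probe(name, make_coro, attempt_timeout: float = 3.0, retries: int = 1):
    """Run one probe with a per-attempt timeout and a single retry."""
    for attempt in range(1 + retries):
        try:
            await asyncio.wait_for(make_coro(), attempt_timeout)
            return name, "pass", attempt + 1
        except asyncio.TimeoutError:
            continue  # retry once, then give up
    return name, "timed_out", 1 + retries

async def orchestrate(probes, budget: float = 15.0):
    """Dispatch all probes concurrently under the overall 15 s budget;
    probes is a mapping of probe name -> zero-arg coroutine factory."""
    tasks = [run_probe(name, factory) for name, factory in probes.items()]
    results = await asyncio.wait_for(asyncio.gather(*tasks), budget)
    return {name: (status, attempts) for name, status, attempts in results}
```

Because `gather` collects results rather than propagating probe timeouts, a single slow probe records `timed_out` without blocking the others.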
Normalized Readiness Payload and Deterministic Outcome Mapping
Given a diagnostic run completes Then the orchestrator returns a JSON payload including: version, session_id, caregiver_id, patient_id, started_at, finished_at, duration_ms, outcome in {pass,warn,fail}, offline_degraded boolean, and checks[] with {id in {pairing,connectivity,clock_sync,last_reading_freshness,sample_value_sanity}, status in {pass,warn,fail,timed_out,skipped}, attempt_count, duration_ms, reason_code, reason_detail} And outcome mapping is deterministic: if any of {pairing, connectivity, clock_sync} has status fail or timed_out => outcome=fail; else if any check has status warn, skipped, or timed_out (non-critical) => outcome=warn; else => outcome=pass And the payload schema is versioned (semver) and remains backward compatible across minor versions
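The deterministic outcome mapping can be expressed directly from the rule above. A minimal sketch, taking the per-check statuses from the payload's `checks[]`:

```python
CRITICAL = {"pairing", "connectivity", "clock_sync"}

def map_outcome(checks: dict) -> str:
    """Map check id -> status to the run's outcome, per the rule above:
    critical fail/timed_out => fail; any warn/skipped/timed_out => warn;
    otherwise pass."""
    for check_id, status in checks.items():
        if check_id in CRITICAL and status in {"fail", "timed_out"}:
            return "fail"
    if any(s in {"warn", "skipped", "timed_out"} for s in checks.values()):
        return "warn"
    return "pass"
```

Keeping this mapping in one pure function makes the determinism requirement trivially testable across schema versions.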
Offline-Degraded Behavior
Given the device has no internet connectivity When the diagnostic is run Then only local checks execute (pairing; last_reading_freshness from local cache; sample_value_sanity if device SDK supports local sample; clock_sync vs device clock) And cloud-dependent substeps are marked 'skipped' with reason_code 'offline' And the run completes in <= 15,000ms and sets offline_degraded=true And the outcome is 'warn' if all critical local checks pass; otherwise 'fail'
Permission Pre-Fetch and Non-Blocking SLA
Given required OS permissions (e.g., Bluetooth, microphone) may be needed for probes When the app is idle before the Tap Test or on first session activation Then the app requests permissions proactively so no OS permission dialog appears during the Tap Test And when the caregiver starts the Tap Test, no permission prompt blocks execution And if mandatory permissions are still missing at start, the run aborts within 2,000ms with outcome 'fail' and reason_code 'permissions_missing' and emits guidance via the progress channel
Telemetry and Performance Monitoring
Given any diagnostic run starts or ends Then telemetry events are emitted: tap_test_start, tap_test_probe_result (per probe), and tap_test_complete with fields {duration_ms, outcome, probe_durations, probe_statuses, retry_counts, offline_degraded, device_model, os_version, app_version, network_type} And 100% of runs record duration_ms and outcome And an error event is emitted for any run approaching or exceeding 15,000ms with {timeout_stage} And dashboards compute p50/p95/p99 for duration_ms and probe durations And an alert triggers if any run exceeds 15,000ms in a rolling 15-minute window
App Shell Interface and Progress Feedback
Given the app shell invokes the orchestrator with a session context and subscribes to progress events When the run executes Then progress events are emitted at least every 1,000ms with {phase in [initializing, pairing, connectivity, clock_sync, last_reading, sample_check, aggregating], percent_complete 0-100, message_key} And self-heal actions are emitted when a check is not ready, with {action_key, steps[]} And a single terminal event is emitted exactly once with the final outcome and payload And the public interface (method signature, event shapes, payload schema version) remains consistent across minor releases
Device Pairing & Sensor Presence Check
"As a caregiver using connected sensors, I want the app to verify that my phone is paired with the required devices so that I don’t enter the room and discover missing connections."
Description

Detects and verifies the presence and pairing status of required sensors (e.g., BLE, NFC, or Wi-Fi devices) for the upcoming visit. Confirms pairing tokens, supported services, battery level, and firmware compatibility against a maintained device support matrix. Maps detected devices to the patient’s care plan requirements and flags missing or mismatched devices. Normalizes vendor-specific responses into standardized states (paired, discoverable, missing, incompatible) and surfaces remediation hints. Writes results to the visit context and caches device identities in the caregiver’s device registry for faster subsequent checks.

Acceptance Criteria
Auto-Detect and Validate Required Sensors for Scheduled Visit
Given a scheduled visit with device requirements defined in the patient's care plan And the caregiver initiates Tap Test within the CarePulse mobile app And the mobile device radios (Bluetooth, NFC, Wi‑Fi) are enabled When the system performs a discovery scan using supported protocols for up to 12 seconds total Then all devices matching required types within range are detected and listed with RSSI and last-seen timestamps And each required device is assigned a state of paired, discoverable, missing, or incompatible And the Tap Test summary shows Ready only if 100% of required devices are paired or discoverable and none are incompatible And the detection step completes within 15 seconds on a baseline-supported mobile device
Pairing Token and Service Verification Against Support Matrix
Given a detected device candidate And the device support matrix includes entries for its vendor, model, and firmware When the app requests pairing tokens/keys and enumerates services (e.g., GATT services/characteristics, NFC NDEF records, Wi‑Fi service descriptors) Then the token validity and required services are validated against the matrix And if any required token is missing or expired, the device is marked incompatible with reason code token_invalid And if any required service is absent, the device is marked incompatible with reason code service_missing And if validations pass, the device is marked paired (if bond exists) or discoverable (if bond not yet established)
Battery Level and Firmware Compatibility Check
Given a detected device that exposes battery level and firmware version When battery percentage and firmware version are read Then battery is compared to the minimum threshold defined for the device type in the support matrix And firmware is compared to the supported version range in the matrix And if battery is below threshold, the device is marked incompatible with reason code battery_low and the measured percentage is recorded And if firmware is out of range, the device is marked incompatible with reason code firmware_unsupported and the installed version is recorded And if both checks pass, the device's prior state (paired or discoverable) is preserved
Care Plan Device Mapping and Mismatch Flagging
Given the patient's care plan lists required device types and any specific device IDs And devices have been detected and validated When mapping detected devices to care plan requirements Then for each required type, at least one compatible detected device is associated; otherwise the requirement is flagged missing And if a specific device ID is required and a different vendor/model or ID is detected, the requirement is flagged mismatched with reason id_mismatch And devices not required by the care plan are ignored in readiness scoring but listed as extra And a consolidated list of missing and mismatched items is produced in the Tap Test results
Normalization of Vendor Responses to Standardized States
Given vendor-specific response codes, advertising flags, characteristics, and error conditions from detected devices When the normalization layer processes these responses Then each device is mapped to exactly one of: paired, discoverable, missing, incompatible And a machine-readable reason code is attached for incompatible, chosen from [token_invalid, service_missing, battery_low, firmware_unsupported, radio_off, signal_weak, unknown] And unknown or unmapped responses default to incompatible with reason unknown And normalization rules are unit-tested with fixtures covering at least 90% of known vendor variations
Remediation Hints Display for Missing or Incompatible Devices
Given one or more devices are missing or incompatible When Tap Test completes Then the caregiver is shown step-by-step remediation hints tailored to each reason code (e.g., enable Bluetooth for radio_off, move closer for signal_weak, charge device for battery_low, update firmware for firmware_unsupported) And each hint includes an actionable step and an expected outcome to verify success And selecting a hint triggers the relevant OS setting or in-app guide where applicable And the caregiver can re-run the pairing/sensor check from the same screen without navigating away
Write Results to Visit Context and Cache Device Identities
Given Tap Test has produced device states and mappings When results are saved Then the visit context record is updated with timestamp, device IDs, vendor/model, state, and any reason codes, linked to the scheduled visit And the caregiver's device registry is updated with confirmed device identities and capabilities for faster subsequent checks And on a subsequent Tap Test for the same patient within 7 days with the same device roster, the discovery phase leverages cached identities and completes in 7 seconds or less And an audit log entry records user, timestamp, and changes to device states
Connectivity & Signal Strength Assessment
"As a caregiver working in areas with spotty coverage, I want a clear signal check so that I know whether syncing and sensor reads will work before I start the visit."
Description

Assesses network and local radio conditions relevant to the visit. Measures internet reachability (API ping/latency), Wi-Fi/cellular availability, and expected stability. For sensors, samples BLE RSSI and connection quality to estimate reliability within the care environment. Applies configurable thresholds to classify results as pass/warn/fail and provides actionable guidance (move closer to router, switch to cellular, reposition near sensor). Operates within the 15-second SLA, avoids excessive battery drain, and records metrics for route-level coverage insights. Degrades gracefully when offline by running only local checks and flagging sync risks.

Acceptance Criteria
Pre-Visit Tap Test connectivity completes within SLA
Given the device is on the Tap Test screen and a visit context is loaded When the Connectivity & Signal Strength Assessment starts Then API reachability is probed with up to 3 pings and the best latency is recorded And Wi‑Fi and cellular availability are detected with current network type identified And overall classification is computed within 15 seconds total wall time (P95 ≤ 15s; hard timeout at 15s) And results are labeled Pass/Warn/Fail according to configured thresholds And the user sees a green ready state only when all checks meet Pass thresholds
BLE sensor RSSI and connection quality classification
Given a target sensor is paired or discoverable for the visit When a BLE scan runs for up to 5 seconds and samples RSSI at least 3 times Then the median RSSI and connection attempt outcomes (≥2 attempts) are captured And BLE reliability is classified: Pass (RSSI ≥ -70 dBm and ≥1 successful connect), Warn (-85 dBm ≤ RSSI < -70 dBm or intermittent connects), Fail (RSSI < -85 dBm or 0/2 connects) And the classification and measurements are returned within the 15-second Tap Test budget
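The three-way BLE classification above is a pure function of median RSSI and connect outcomes. A sketch under the thresholds stated in the criterion (the function name and argument shapes are illustrative):

```python
def classify_ble(median_rssi_dbm: float, successful_connects: int) -> str:
    """Classify BLE reliability from median RSSI and connect attempts.

    Thresholds from the acceptance criterion:
      Fail: RSSI < -85 dBm, or no successful connects
      Pass: RSSI >= -70 dBm with at least one successful connect
      Warn: everything in between (weak-but-usable signal or
            intermittent connects)
    """
    if median_rssi_dbm < -85 or successful_connects == 0:
        return "Fail"
    if median_rssi_dbm >= -70 and successful_connects >= 1:
        return "Pass"
    return "Warn"
```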
Configurable thresholds applied with safe defaults and logging
Given remote config provides thresholds for API latency, packet loss, RSSI, and connect retries When the Tap Test starts Then the latest non-expired config is loaded and applied without app restart And if remote config is unavailable, documented defaults are used (e.g., API Pass ≤ 300 ms; Warn ≤ 800 ms; Fail > 800 ms) And the applied config version and timestamp are recorded with the test results
Offline mode local checks and sync risk flagging
Given the device has no internet reachability When the Connectivity & Signal Strength Assessment runs Then only local checks execute (radio state, current network type, BLE RSSI/quality) And network pings are skipped And the result is marked with a Sync Risk flag and classification excludes API latency And the Tap Test still completes within 15 seconds
Actionable guidance on Warn/Fail outcomes
Given any sub-check returns Warn or Fail When results are displayed Then the user is shown step-by-step guidance mapped to the failing dimension(s), such as:
- Wi‑Fi weak: “Move closer to the router (≤ 3 m) or switch to cellular”
- High API latency: “Toggle Airplane Mode off/on, then retry”
- BLE weak: “Reposition within 1–2 m of the sensor and remove obstacles”
And each guidance item includes a single-tap Retry action that reruns only the impacted checks And applying a fix and tapping Retry updates the classification accordingly
Metrics recorded for route-level coverage insights
Given a Tap Test completes (any outcome) When results are saved Then the following fields are persisted and queued for upload: route/visit ID, timestamp, network type, API latency stats (min/median), ping success count, BLE median RSSI, BLE connect success count, overall classification, Sync Risk flag, config version, and total duration And if offline, the payload is stored locally and uploaded at-least-once when connectivity returns And sensitive fields are minimized and device identifiers are hashed before upload
Battery and resource usage within budget
Given the Tap Test runs on a reference mid-tier device When all connectivity and BLE checks execute Then total energy impact is ≤ 0.5% battery per run and CPU utilization averages < 20% during the 15-second window And BLE scanning does not exceed 5 seconds and at most 2 connection attempts are made by default And no background services remain active after completion
Clock Sync & Last Reading Validation
"As an operations manager, I want Tap Test to validate clock sync and last-reading freshness so that our visit documentation and audit trail remain accurate and compliant."
Description

Validates device clock alignment and data freshness. Compares mobile OS time to trusted sources (server/NTP) to detect drift beyond a configurable threshold, and confirms sensor clocks where supported. Checks the latest available reading timestamp per required device against care plan freshness windows to ensure audit-ready timing. Handles time zones, daylight saving, and offline scenarios by queuing a resync when back online. Provides remediation options (auto time sync, prompt to enable network time) and writes any detected drift and freshness status into the diagnostic results for compliance reporting.

Acceptance Criteria
Online Device Clock Drift Detection and Auto-Remediation
Given the mobile device has internet connectivity and a trusted server/NTP time source is reachable And a drift_threshold_seconds is configured When the Tap Test runs clock validation Then the system computes device_clock_drift_seconds = abs(device_time_utc - server_time_utc) And marks Clock Sync status Pass when device_clock_drift_seconds <= drift_threshold_seconds And marks Clock Sync status Fail when device_clock_drift_seconds > drift_threshold_seconds And when Fail and auto_time_sync is permitted, the app attempts to enable network time and perform one time sync And when auto_time_sync is not permitted, the user is prompted to enable network time And after remediation (if any), the system re-measures drift and updates the status and measured value in the diagnostics And the diagnostic output includes device_clock_drift_seconds and time_source (server|ntp) for compliance
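The drift computation and pass/fail decision above are straightforward; a sketch assuming UTC-aware datetimes (names match the spec's `device_clock_drift_seconds` and `drift_threshold_seconds` fields):

```python
from datetime import datetime

def clock_drift_seconds(device_time_utc: datetime,
                        server_time_utc: datetime) -> float:
    """device_clock_drift_seconds = abs(device_time_utc - server_time_utc)."""
    return abs((device_time_utc - server_time_utc).total_seconds())

def clock_sync_status(drift_seconds: float,
                      drift_threshold_seconds: float) -> str:
    """Pass when drift is within the configured threshold, else Fail."""
    return "Pass" if drift_seconds <= drift_threshold_seconds else "Fail"
```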
Sensor Clock Verification for Supported Devices
Given a connected sensor that exposes a readable device clock And a sensor_clock_drift_threshold_seconds is configured When the Tap Test reads the sensor clock and a trusted time source Then the system computes sensor_clock_drift_seconds = abs(sensor_time_utc - trusted_time_utc) And marks the sensor Clock Sync status Pass when sensor_clock_drift_seconds <= sensor_clock_drift_threshold_seconds And marks the sensor Clock Sync status Fail when sensor_clock_drift_seconds > sensor_clock_drift_threshold_seconds And when Fail and sensor_time_sync_supported, the app offers self-heal actions to sync the sensor time or re-pair the sensor And the diagnostic output records per-sensor: sensor_id, sensor_clock_drift_seconds, sync_supported (true|false), action_taken (if any), and pass_fail
Reading Freshness Validation Against Care Plan Windows
Given the care plan defines required_devices with freshness_window_minutes per device type And a last_reading_timestamp_utc exists for some devices and may be missing for others When the Tap Test evaluates data freshness Then for each required device it computes age_minutes = (now_utc - last_reading_timestamp_utc) in minutes And marks Freshness status Pass when age_minutes <= freshness_window_minutes And marks Freshness status Fail with reason "STALE" when age_minutes > freshness_window_minutes And marks Freshness status Fail with reason "NO_READING" when no last reading exists And the diagnostic output lists for each required device: device_type, device_id (if applicable), last_reading_timestamp_local, last_reading_timestamp_utc, age_minutes, freshness_window_minutes, and pass_fail with reason
Time Zone and Daylight Saving Correctness
Given the mobile device time zone and DST rules are identified And all internal calculations use UTC-normalized timestamps When a last reading or clock comparison spans a time zone change or a DST transition Then age and drift calculations remain correct by using UTC and do not change due to local offset shifts And displayed local timestamps include the correct local offset and DST flag at the event time And no Freshness Fail is generated solely due to a DST hour shift or time zone boundary crossing
Offline Clock Validation with Queued Resync
Given the device is offline and trusted time sources are unreachable And a last_known_server_offset_seconds exists that is no older than max_offset_age_minutes When the Tap Test runs clock validation Then the system estimates device_clock_drift_seconds using last_known_server_offset_seconds and records method = "estimated" And marks the clock validation result as provisional = true in the diagnostics And a resync job is queued with payload including initiated_at_utc, reason = "offline_clock_validation", and required_checks = ["clock","freshness"] And upon network reconnect, the app performs a time resync within 10 seconds and re-runs the required checks And the provisional diagnostic entries are replaced with final results and provisional = false
Diagnostic Results for Compliance Reporting
Given a Tap Test run completes When compiling diagnostic results Then the record includes: tap_test_id, run_started_at_utc, run_completed_at_utc, time_source (server|ntp|estimated), device_clock_drift_seconds, per-sensor sensor_clock_drift_seconds (if available), per-required-device freshness (last_reading_timestamp_utc, last_reading_timestamp_local, age_minutes, window_minutes, pass_fail, reason), timezone_id, dst_in_effect flag at event times, remediation_actions with outcomes, and overall pass_fail And the record is persisted within 2 seconds of completion and retained for at least 24 months And the record is retrievable via compliance reports and audit API by tap_test_id And subsequent Tap Test runs append new records without altering prior records
Configurable Thresholds Enforcement
Given an administrator has configured drift_threshold_seconds and freshness_window_minutes per device type When the Tap Test executes validation Then the system fetches and applies the current published threshold values And any change to thresholds is effective for new Tap Test runs within 60 seconds of publish And the diagnostic record stores the exact threshold values used at run time for traceability
Sample Reading Sanity Check
"As a caregiver, I want a quick sample value check so that I can trust that the device will produce valid readings during the visit."
Description

Performs a lightweight, non-clinical sample read from each required sensor to validate communications and plausible value ranges without storing it as patient data. Executes vendor-recommended dry-run or self-test commands where available; otherwise, captures a transient reading flagged as diagnostic-only and excluded from the clinical record. Applies device-specific plausibility rules and warm-up handling to avoid false failures. Ensures permissions and privacy safeguards, and returns a simple pass/warn/fail per device with remediation hints. Results are aggregated into the overall readiness signal.

Acceptance Criteria
Vendor Self-Test or Diagnostic-Only Read Selection and Non-Persistence
Given a device declares a vendor self-test capability via the device registry When the Tap Test runs Then the app issues the self-test command within 2 seconds of Tap Test start And classifies Pass if the response code equals OK, Warn if INCONCLUSIVE, Fail if ERROR or no response within the per-device timeout And no clinical record entry is created for this device as part of this Tap Test And if the device lacks self-test, the app captures one transient reading flagged diagnostic-only and excludes it from all patient timelines, exports, and clinical reports
Device-Specific Plausibility Rules and Warm-Up Window
Given a diagnostic reading is obtained from a required device When evaluating the reading Then the system loads and applies the device’s configured plausible range and units from the registry And during the configured warm-up window (default 5000 ms if unspecified) values outside range yield Warn with reason code "warming_up" And after the warm-up window, values outside range yield Fail with reason code "implausible_value" And values within range yield Pass And if units cannot be verified or mismatch expected units, classify Fail with reason code "unit_mismatch"
Permissions and Privacy Safeguards for Diagnostic Reads
Given required OS/hardware permissions for a sensor are not granted When the Tap Test starts Then the device is marked Fail with reason code "permission_denied" and a remediation step to grant permission Given permissions are granted When a diagnostic read or self-test occurs Then no PHI or patient identifier is stored or transmitted as part of the diagnostic event And the diagnostic event is logged only with session ID, device model, firmware version, result, reason code, and timestamp And no diagnostic data appears in the patient’s clinical record, vitals history, or clinical exports
Per-Device Result Classification with Remediation Hints
Given a device completes a diagnostic check When displaying its result Then the UI and API include: result (Pass/Warn/Fail), standardized reason code, human-readable message, and at least one actionable remediation hint for Warn/Fail And selecting a remediation hint launches the appropriate guided action (e.g., re-pair flow, open Bluetooth settings, move closer) and records completion of the action And if the result is Pass, no remediation hints are shown
Aggregation into Overall Readiness Signal
Given individual device results are available for all required devices When aggregating Tap Test outcomes Then overall readiness is Green if all devices Pass And overall readiness is Yellow if at least one device is Warn and none Fail And overall readiness is Red if any device is Fail And the aggregated banner displays total devices and counts by Pass/Warn/Fail and updates in real time as device results arrive
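The aggregation rule above is "worst result wins": any Fail makes the banner Red, else any Warn makes it Yellow, else Green. A minimal sketch:

```python
def overall_readiness(device_results: list[str]) -> str:
    """Aggregate per-device Pass/Warn/Fail into a Green/Yellow/Red banner."""
    if any(r == "Fail" for r in device_results):
        return "Red"
    if any(r == "Warn" for r in device_results):
        return "Yellow"
    return "Green"
```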
Performance, Concurrency, and Timeout Behavior
Given up to 6 required devices When the Tap Test runs Then device checks execute concurrently subject to OS limits And each device check has a configurable timeout (default 3 s; min 1 s; max 10 s) And any device exceeding its timeout is marked Fail with reason code "timeout" and a remediation hint to retry/reposition/re-pair And the entire Sample Reading Sanity Check phase completes in 15 seconds or less on supported reference devices under typical conditions And if connectivity is marginal but nonzero, the device is classified Warn with reason code "weak_signal" and a hint to move closer rather than immediate Fail
Guided Self-Heal Workflow
"As a caregiver under time pressure, I want clear steps to fix readiness issues so that I can resolve problems myself without calling support."
Description

Delivers contextual, step-by-step remediation when a check fails or warns. Generates targeted actions such as toggling Bluetooth, re-pairing, moving closer to a sensor, enabling network time, or updating firmware, and automates safe actions where possible. Provides estimated time-to-fix, clear visual affordances, and accessibility-compliant instructions with localization support. Allows retry of individual checks or a full rerun within the session, tracking attempts and outcomes. Offers escalation paths (in-app support contact, knowledge base) and logs self-heal steps for quality and training analysis.

Acceptance Criteria
Self-Heal Trigger for Failed or Warned Checks
Given a caregiver runs Tap Test prior to a visit And any check returns status "Fail" or "Warn" When the Guided Self-Heal Workflow starts Then a prioritized list of remediation steps specific to the failing check is displayed And each step shows a title, brief description, and estimated time-to-fix And visual state indicators clearly convey status and progress (red/amber/green) And the step set includes: toggle Bluetooth, re-pair device, move closer to sensor, enable network time sync, and check for firmware updates And the user can start the first recommended step with a single tap And potentially risky actions require explicit confirmation before proceeding And upon completion, the affected check auto-retries and updates its status And when all checks pass, the workflow shows a green "Ready" state within 2 seconds
Automated Safe Actions Execution
Given device capabilities support automation and required permissions are granted When the user selects an eligible remediation step Then the app executes the action without leaving the app (e.g., toggle Bluetooth, request time sync, initiate re-pair) And Bluetooth toggle completes within 5 seconds, time sync within 10 seconds, and re-pair attempt within 30 seconds And if elevated permission is needed, a native prompt is shown and the step resumes after grant And if automation is unavailable or fails, clear manual instructions are presented with no dead-ends And the action is logged with outcome (success/fail/cancel), start/end timestamps, and non-PII device identifiers
Accessibility and Localization Compliance
Given device accessibility features are enabled (screen reader, font scaling up to 200%, high-contrast) When Guided Self-Heal screens are displayed Then all actionable elements are focusable with a logical order and visible focus indicators And controls and messages have descriptive accessibility labels and hints And color is not the sole status cue; text and icons provide redundancy And text contrast ratio is at least 4.5:1 (3:1 for large text/icons) And content remains functional at 200% text size without overflow or truncation of critical controls And UI copy appears in the device locale for supported languages with correct pluralization and formats And right-to-left locales render properly And missing translations fall back to English and emit a missing-translation log
Retry and Full Rerun Controls with Attempt Tracking
Given at least one check has failed or warned in the current session When the user taps Retry on an individual check Then only that check is re-executed and its new result replaces the prior result without resetting other checks And the attempt counter increments and is displayed (e.g., Attempt 2 of 3) And a minimum 3-second cooldown is enforced between retries When the user selects Rerun All Then all checks re-execute in sequence within the same session And attempts and outcomes per check are timestamped and stored for the session And session data persists until exit or 15 minutes of inactivity
Escalation Pathways and Contextual Support
Given a check remains in Fail after the maximum recommended remediation attempts or the user taps Need help When escalation options are shown Then the app presents Contact Support and Open Knowledge Base And Contact Support pre-fills device model, OS version, app version, check IDs, last 10 self-heal steps, anonymized logs, and time zone And initiating Contact Support succeeds via in-app chat or email within 5 seconds or shows an offline-queued confirmation And Open Knowledge Base deep-links to an article matched to the failing check and error code And returning from support or KB preserves workflow state
Self-Heal Logging, Privacy, and Offline Resilience
Given the Guided Self-Heal Workflow is running When any remediation step starts or completes Then a log entry records session ID, pseudonymous user role ID, step ID, action type, outcome, duration, and error codes And PII/PHI is excluded or redacted per policy and logs are encrypted at rest and in transit And if offline, logs are queued securely and transmitted within 60 seconds of connectivity restoration And an admin can export session logs as CSV for a selected date range from the web console And logs older than the retention policy are purged automatically
Pass/Fail UI, Visit Gate, and Compliance Logging
"As an operations manager, I want clear pass/fail outcomes that can gate visit start and are logged automatically so that we maintain compliance and can prove device readiness during audits."
Description

Presents a concise green/yellow/red readiness banner with detail drill-down and integrates policy-based gating of the Start Visit action. Allows configurable overrides by role with reason capture and timestamp. Persists a signed diagnostic summary (checks run, results, durations, versions, and actor) into the visit record for audit readiness and includes it in CarePulse one-click compliance reports. Supports admin configuration of thresholds, device requirements by visit type, and gating rules. Implements accessible UI (color-blind-safe indicators, haptics) and exposes results via API/webhooks for downstream reporting.

Acceptance Criteria
Readiness Banner and Drill-Down Details
Given a caregiver launches Tap Test on a supported device, When diagnostics complete (<=15 seconds), Then a readiness banner shows the state (Green/Yellow/Red) with text label and icon. Given any user regardless of color vision, When the banner is displayed, Then color palette is color-blind-safe and contrast ratio >= 4.5:1. Given the device supports haptics, When the state is Yellow or Red, Then a haptic alert plays within 500 ms of state update. Given the caregiver taps the banner, When the detail view opens, Then it lists pairing, signal strength, last reading time, clock sync, and sample value check with value, threshold, pass/fail, per-check duration, and Tap Test version. Given at least one check failed, When the detail view is shown, Then context-specific self-heal steps are visible and actionable.
Policy-Based Start Visit Gating
Given gating rules exist for the visit type, When readiness is Red, Then Start Visit is disabled and displays a message naming the failing checks. Given gating rules set Yellow as warn-only, When readiness is Yellow, Then Start Visit is enabled but requires explicit confirmation before proceeding. Given gating rules require remediation for Yellow/Red, When all required checks pass in the same session, Then Start Visit becomes enabled; else remains disabled. Given no gating rules apply, When Tap Test completes, Then Start Visit state is unaffected.
Role-Based Override with Reason Capture
Given the user's role has override permission, When Start Visit is blocked by gating, Then an Override option is visible. Given the user selects Override, When a reason of at least 10 characters is entered, Then Start Visit is enabled and the override is recorded with timestamp, user id, role, reason, and gating rule id. Given an override is recorded, Then the override entry is immutable and visible in the visit timeline and compliance report. Given the user's role lacks override permission, Then no Override option is shown.
Signed Diagnostic Summary in Visit Record and Reports
Given Tap Test completes, Then save a diagnostic summary into the visit record containing checks run, results, values, thresholds, per-check durations, Tap Test/App versions, device id, actor id, start/end timestamps, and readiness state. Given the summary is saved, Then compute and store a server-side digital signature (HMAC-SHA256) over the summary payload; signature verification fails if the payload is modified. Given a one-click compliance report is generated, Then include the diagnostic summary and any override records verbatim. Given a save failure occurs, Then retry up to 3 times with exponential backoff; if all retries fail, surface an error to the user and do not allow Start Visit if gating would block.
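The HMAC-SHA256 signing step can be sketched with Python's standard `hmac` module. Canonicalizing the payload as sorted-key JSON is an assumption here; the spec requires only that verification fail when the payload is modified:

```python
import hashlib
import hmac
import json

def sign_summary(summary: dict, secret: bytes) -> str:
    """HMAC-SHA256 over a canonicalized (sorted-key JSON) summary payload."""
    payload = json.dumps(summary, sort_keys=True,
                         separators=(",", ":")).encode("utf-8")
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_summary(summary: dict, secret: bytes, signature: str) -> bool:
    """Constant-time check; any change to the payload breaks the signature."""
    return hmac.compare_digest(sign_summary(summary, secret), signature)
```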
Admin Configuration for Thresholds, Device Requirements, and Gating Rules
Given an admin edits Tap Test configuration, Then they can set per-visit-type thresholds: min signal strength, max clock drift, max last reading age, sample value bounds, and required devices. Given changes are saved, Then create a new version with editor, timestamp, and change summary; new configuration applies to new Tap Test sessions within 60 seconds. Given invalid values are entered, Then the system prevents save and displays validation messages identifying each field. Given required devices are configured, Then Tap Test evaluates presence/pairing accordingly and maps to Green/Yellow/Red per rules.
API/Webhooks for Tap Test Results
Given Tap Test completes, Then emit a webhook event tap_test.completed within 10 seconds with visit id, caregiver id, readiness state, failing checks, thresholds snapshot, summary signature, and timestamps. Given a consumer calls the REST API with a valid token and scope, Then GET /visits/{id}/tap-test returns the diagnostic summary payload. Given webhook delivery receives a non-2xx response, Then retry with exponential backoff for up to 24 hours, using signed requests and idempotency keys. Given webhooks are disabled in admin settings, Then no webhooks are sent while the API remains available.
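The "exponential backoff for up to 24 hours" delivery policy implies a bounded delay schedule. A sketch — the base delay, growth factor, and per-retry cap are illustrative choices; the spec fixes only the 24-hour ceiling:

```python
def backoff_schedule(base_s: float = 10, factor: float = 2.0,
                     cap_s: float = 3600, max_total_s: float = 86400):
    """Yield webhook retry delays: exponential growth, capped per retry,
    stopping once cumulative waiting would exceed the 24-hour limit."""
    delay, total = base_s, 0.0
    while total + delay <= max_total_s:
        yield delay
        total += delay
        delay = min(delay * factor, cap_s)
```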

Signal Timeline

A live timeline per client that visualizes device heartbeats, gaps, signal strength (RSSI), and firmware versions. Highlights chronic dead zones and trend windows, with placement tips for hubs or extenders. Gives IoT integrators and RNs fast insight into root causes, not just symptoms.

Requirements

Live Heartbeat Ingestion & Timeline Rendering
"As an IoT integrator, I want to view live device heartbeats on a per-client timeline so that I can quickly assess connectivity health and react to emerging issues."
Description

Implement near real-time ingestion and storage of device heartbeat telemetry per client and device, rendering a mobile-first, horizontally scrollable timeline that auto-refreshes without page reloads. Each device appears as a distinct lane with color-coded states (healthy, degraded, offline) and time-aligned markers for heartbeats. Support WebSocket/SSE for live updates, 24–72h time windows, pinch-to-zoom, and quick range presets (Last 2h, 24h, 7d). Integrate with the client profile in CarePulse, respecting existing auth scopes and tenancy. Persist user-selected view settings per user/device. Optimize for low-latency rendering on mid-range mobile devices via virtualized drawing (e.g., canvas/WebGL) and batched updates.

Acceptance Criteria
Live update renders heartbeat marker within 3 seconds
Given a user with access opens a client's Signal Timeline on a supported mobile browser And the client negotiates WebSocket, or falls back to SSE if WebSocket fails When a device heartbeat event with timestamp T is received by the client stream Then a heartbeat marker is rendered on the correct device lane at time T within 3 seconds (p95) without page reload And UI updates are batched to no more than 10 times per second And if the connection drops, the client auto-reconnects within 5 seconds (p95) and resumes from the lastEventId with no gaps or duplicate markers
Scrollable, zoomable timeline with quick range presets
Given the timeline defaults to showing the Last 24h range When the user selects a quick preset (Last 2h, 24h, 7d) Then the visible time span matches the selected preset within ±1 second tolerance and only data within that window are queried When the user performs a pinch-to-zoom gesture Then the time window scales smoothly between 2h and 7d limits and markers reflow without visual overlap beyond 1 marker width And horizontal scrolling pans time continuously without page scroll hijacking And live auto-refresh continues without interrupting the current zoom/scroll position
Per-device lanes with state colors and aligned markers
Given device state thresholds are configured (healthy_threshold_sec, offline_threshold_sec) per device type When the gap between successive heartbeats is computed Then the device lane state is colored Healthy when last gap ≤ healthy_threshold_sec, Degraded when healthy_threshold_sec < gap ≤ offline_threshold_sec, and Offline when gap > offline_threshold_sec And gaps exceeding offline_threshold_sec are visually indicated as gap segments on the lane And each heartbeat marker is time-aligned across device lanes and reveals timestamp, RSSI (dBm), and firmware version on tap/hover
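The lane coloring above is a function of the gap between successive heartbeats and two per-device-type thresholds. A sketch using the spec's threshold names:

```python
def lane_state(gap_seconds: float, healthy_threshold_sec: float,
               offline_threshold_sec: float) -> str:
    """Color a device lane from its last heartbeat gap.

    Healthy:  gap <= healthy_threshold_sec
    Degraded: healthy_threshold_sec < gap <= offline_threshold_sec
    Offline:  gap > offline_threshold_sec
    """
    if gap_seconds <= healthy_threshold_sec:
        return "Healthy"
    if gap_seconds <= offline_threshold_sec:
        return "Degraded"
    return "Offline"
```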
Persist and restore user view preferences per user/device
Given a user adjusts view settings (selected range preset or custom zoom level, horizontal scroll position, visible device lanes/order) for a client When the user reloads the page or returns to the same client's Signal Timeline from another session or device while logged into the same account Then the previously selected view settings are restored for that user and client And a Reset to Defaults action clears the stored settings and restores the default view on next load
Secure, scoped access within CarePulse tenancy and auth
Given a signed-in user with organization and client-scoped permissions When the user requests timeline data or subscribes to live updates for a client Then only data for devices within that client's scope are returned/streamed And attempts to access another tenant's or client's timeline return HTTP 403 and no identifying data are leaked And live channels require valid scoped authorization and are terminated when tokens expire or are revoked
Mobile performance: smooth rendering via virtualization
Given a mid-range device (Android Pixel 4a or equivalent, iPhone SE 2020) and a timeline window of 72h with up to 6 device lanes and 10,000 total heartbeat markers When the user pans or pinch-zooms the timeline Then median frame rate is ≥ 50 FPS and 95th percentile frame time ≤ 32 ms And time-to-first-render of the initial view is ≤ 2 seconds (p95) And main-thread CPU during interactions stays ≤ 60% (p95) and additional memory usage remains ≤ 150 MB And rendering uses a virtualized canvas/WebGL layer with ≤ 200 DOM nodes in the timeline region at any time
Ingestion and storage correctness and idempotency
Given heartbeats may arrive late, out of order, or retried by the device/integrator When heartbeats are ingested for a device (client_id, device_id, timestamp, sequence) Then records are persisted within 2 seconds (p95) of receipt and deduplicated by an idempotency key (device_id+timestamp or sequence) And timeline queries return heartbeats strictly ordered by timestamp with no duplicates, and late arrivals are inserted correctly on subsequent fetch/refresh And multi-tenant isolation is enforced at storage and query layers so no cross-tenant records are returned
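The dedup-and-ordering contract above can be illustrated with an in-memory store; a real implementation would enforce the idempotency key and tenant isolation at the database layer, but the semantics are the same. Class and method names are assumptions:

```python
class HeartbeatStore:
    """In-memory sketch of idempotent, time-ordered heartbeat ingestion."""

    def __init__(self):
        self._rows = {}  # idempotency key -> record

    def ingest(self, client_id: str, device_id: str,
               timestamp: int, rssi: int) -> bool:
        """Persist one heartbeat; duplicates/retries are dropped.

        The idempotency key is (device_id, timestamp), per the criterion.
        Returns False for a duplicate, True for a new record.
        """
        key = (device_id, timestamp)
        if key in self._rows:
            return False
        self._rows[key] = {"client_id": client_id, "device_id": device_id,
                           "timestamp": timestamp, "rssi": rssi}
        return True

    def query(self, client_id: str, device_id: str) -> list[dict]:
        """Return heartbeats strictly ordered by timestamp, scoped to one
        client so late arrivals slot in and tenants stay isolated."""
        rows = [r for r in self._rows.values()
                if r["client_id"] == client_id and r["device_id"] == device_id]
        return sorted(rows, key=lambda r: r["timestamp"])
```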
RSSI Strength Visualization with Threshold Bands
"As an RN care coordinator, I want signal strength visualized with clear thresholds so that I can correlate visit issues with connectivity quality at a glance."
Description

Normalize and display RSSI values along the timeline with a color gradient and labeled threshold bands (Excellent, Good, Fair, Poor) configurable per device class. Provide tooltips showing min/avg/max over selected windows, and smoothing options to reduce noise while preserving drops. Surface brief annotations for antenna type or placement metadata if available. Ensure units (dBm) and sampling cadence are consistent, and handle missing or stale readings gracefully. Expose a settings panel to adjust thresholds at tenant or site level, and store changes for auditability.

Acceptance Criteria
Render normalized RSSI gradient with labeled threshold bands per device class
Given a device stream with RSSI readings in dBm and a configured device class When the Signal Timeline is rendered Then the RSSI series is colored with a continuous gradient mapped to the device class threshold bands And band labels (Excellent, Good, Fair, Poor) are displayed with color chips and numeric dBm ranges And each plotted point/segment is assigned to the correct band based on its value and active thresholds And the y-axis and legend display units as "dBm" And a sampling cadence label reflects the dataset cadence within ±10% using s/min/h units appropriately
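Band assignment can be reduced to a first-match lookup against per-device-class lower bounds. The numeric cutoffs below are illustrative defaults, not product-mandated values:

```python
# Threshold bands expressed as (label, lower bound in dBm), best first.
# These cutoffs are assumptions for illustration; real values are configured
# per device class in the settings panel.
BANDS = [("Excellent", -60), ("Good", -70), ("Fair", -85), ("Poor", float("-inf"))]

def classify_rssi(dbm: float, bands=BANDS) -> str:
    """Assign a reading to the first band whose lower bound it meets."""
    for name, lower in bands:
        if dbm >= lower:
            return name
    return bands[-1][0]
```

The gradient renderer would interpolate color within a band using the same table, so legend chips and plotted segments always agree on the active thresholds.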
Show min/avg/max RSSI in tooltip for a selected window
Given the user drag-selects a time window on the timeline for a device stream When the selection is active Then a tooltip shows Min, Avg, and Max RSSI for only the selected window in dBm And the tooltip indicates whether values reflect Raw or Smoothed data based on the current smoothing setting And the tooltip includes sample count and the exact start and end timestamps in the user’s timezone And statistics update within 150 ms when the selection changes
Smoothing options reduce noise while preserving drops
Given smoothing levels Off, Low, Medium, High are available in the timeline controls When the user switches smoothing levels Then the series updates to use: Off = raw; Low = 3-point moving average; Medium = 5-point moving average; High = 7-point moving average And when a sudden drop ≥6 dB occurs within ≤10 seconds in the raw data Then the smoothed series reflects a drop of at least 80% of the raw magnitude within one sample of the event And the active smoothing level is visibly indicated in the UI
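The level-to-window mapping can be sketched as a centered moving average; note that a plain moving average does not by itself guarantee the 80% drop-preservation rule, so a production implementation would likely need an edge-preserving variant:

```python
def smooth(values, level="Off"):
    """Centered moving average; window size per smoothing level from the criteria."""
    window = {"Off": 1, "Low": 3, "Medium": 5, "High": 7}[level]
    if window == 1:
        return list(values)  # Off = raw series
    half = window // 2
    out = []
    for i in range(len(values)):
        # Shrink the window at series edges instead of padding.
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out
```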
Display antenna type and placement annotations when metadata exists
Given the device has antenna_type and/or placement metadata When the timeline is viewed Then an annotation icon appears next to the device name; on hover, a tooltip shows antenna type and placement (truncated to 120 chars with an option to expand) And if metadata changed at recorded timestamps, markers appear at those times with short labels (e.g., "Antenna swapped") And annotations do not occlude data and can be toggled on/off from the legend
Graceful handling of missing or stale RSSI readings
Given the expected sampling cadence is known When there is a gap longer than 2× the cadence Then the gap is rendered as a transparent break with a dashed connector and a "No signal" hover tooltip And gaps are excluded from min/avg/max calculations for selections spanning the gap And if the most recent reading is older than max(3× cadence, 5 minutes) Then a "Stale" badge appears near the device name with the timestamp of the last reading And rendering proceeds without errors when null or out-of-order timestamps are present
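The gap and staleness rules above are simple threshold checks over timestamps; this sketch assumes epoch-second inputs and a known cadence:

```python
def is_stale(last_reading_ts: float, now: float, cadence_s: float) -> bool:
    """'Stale' when the newest reading is older than max(3x cadence, 5 minutes)."""
    return (now - last_reading_ts) > max(3 * cadence_s, 300)

def find_gaps(timestamps, cadence_s):
    """Return (start, end) breaks longer than 2x the expected cadence.
    Sorting first tolerates out-of-order timestamps, per the criteria."""
    ts = sorted(timestamps)
    return [(a, b) for a, b in zip(ts, ts[1:]) if (b - a) > 2 * cadence_s]
```

Selections spanning a detected gap would subtract these intervals before computing min/avg/max.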
Configure threshold bands at tenant and site level
Given a user with Admin role opens the Signal Timeline settings panel When editing threshold bands for a device class at the tenant level Then inputs accept only numeric values between -120 and 0 dBm and enforce monotonic ordering Excellent > Good > Fair > Poor And saving tenant-level thresholds applies them to all sites within 5 seconds unless a site-level override exists And when editing thresholds at a site, changes affect only that site and override tenant defaults for that device class And the timeline legend and band overlays reflect new thresholds immediately after save without a full page reload And invalid configurations disable Save and display inline error messages
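The input-validation rule (numeric, within [-120, 0] dBm, monotonic Excellent > Good > Fair > Poor) can be expressed as a pure function whose non-empty result disables Save. The error strings are illustrative:

```python
def validate_thresholds(t: dict) -> list:
    """Return inline error messages for a proposed threshold configuration;
    an empty list means the configuration is valid and Save may be enabled."""
    errors = []
    order = ["Excellent", "Good", "Fair", "Poor"]
    for band in order:
        v = t.get(band)
        if not isinstance(v, (int, float)) or not (-120 <= v <= 0):
            errors.append(f"{band}: value must be between -120 and 0 dBm")
    if not errors:
        # Enforce strict monotonic ordering across bands.
        for a, b in zip(order, order[1:]):
            if not t[a] > t[b]:
                errors.append(f"{a} must be greater than {b}")
    return errors
```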
Audit log of threshold changes
Given any threshold configuration change is saved When the save completes Then an audit record is created capturing timestamp, user ID, scope (tenant/site), site ID if applicable, device class, previous thresholds, new thresholds, and an optional reason And audit records are immutable and versioned; restoring a previous version writes a new record and applies those thresholds And the audit log is viewable in the settings panel with filters by date range, user, scope, site, and device class And audit timestamps display in the tenant’s timezone with ISO 8601 format on hover
Gap Detection & Outage Highlighting
"As an operations manager, I want gaps in device communication highlighted so that I can proactively address outages and maintain on-time, compliant visits."
Description

Detect and mark intervals where expected heartbeats are missing beyond a configurable tolerance, highlighting gaps directly on the timeline with start/end timestamps and duration badges. Compute time since last heartbeat and show an "Offline since" banner for prolonged outages. Support device-specific heartbeat cadences, ignore scheduled maintenance windows, and suppress duplicate noise during known network incidents. Provide a summary widget aggregating total downtime and mean time to recovery over the selected range. Expose simple API endpoints to fetch gap events for downstream reporting.

Acceptance Criteria
Gap Event Detection by Configurable Tolerance
- Given a device with expected_heartbeat_interval (EHI) and tolerance_missed_intervals (TMI), when elapsed time since the last heartbeat exceeds EHI * TMI, then create a gap event with start_time = last_heartbeat_time + EHI and status = "open".
- When a subsequent heartbeat arrives, then set end_time = that heartbeat timestamp, status = "closed", and duration_seconds = end_time - start_time.
- Given a device-specific cadence override exists, then use the device override; else use device-type default; else use organization default; cadence changes take effect within 5 minutes and do not retroactively alter closed gaps.
- Heartbeats within ±5% of EHI are not counted as misses (jitter tolerance).
- Do not produce overlapping or duplicate gap events; merge adjacent gaps separated by less than one EHI into a single gap.
- Event creation is idempotent under reprocessing; the same logical gap is not duplicated.
- All stored timestamps are UTC ISO 8601 with millisecond precision.
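The open/close logic can be sketched over a sorted heartbeat list. Jitter tolerance and adjacent-gap merging are omitted for brevity, and the event shape is an illustrative assumption:

```python
def detect_gaps(heartbeats, ehi, tmi, now):
    """Derive gap events from sorted heartbeat timestamps (epoch seconds).
    A gap opens when the inter-heartbeat interval exceeds EHI * TMI and
    closes at the next heartbeat; a trailing silence yields an open gap."""
    gaps = []
    for prev, nxt in zip(heartbeats, heartbeats[1:]):
        if (nxt - prev) > ehi * tmi:
            gaps.append({"start": prev + ehi, "end": nxt, "status": "closed",
                         "duration_seconds": nxt - (prev + ehi)})
    last = heartbeats[-1]
    if (now - last) > ehi * tmi:
        gaps.append({"start": last + ehi, "end": None, "status": "open"})
    return gaps
```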
Timeline Highlighting with Timestamps and Duration Badges
- Given a gap event intersecting the selected date range, when the timeline renders, then display a red highlight from start_time to end_time (or to now if open).
- Show start and end timestamps on hover and a duration badge in hh:mm:ss; for open gaps, show a live-updating duration.
- Gaps that intersect the selected range are visually clipped to the range bounds without altering stored event times.
- Adjacent or overlapping gaps are presented as a single contiguous highlight with a single duration badge.
- Timeline labels use the viewer's timezone; tooltips include both local time and UTC.
Offline Since Banner for Prolonged Outages
- Given an open gap and a configured offline_banner_threshold_seconds, when duration_seconds >= threshold, then display an "Offline since {start_time_local} ({elapsed})" banner on the client detail view.
- Update the elapsed duration at 60-second intervals; format as mm:ss for durations < 1 hour and hh:mm:ss for >= 1 hour.
- When a heartbeat closes the gap, then hide the banner within 10 seconds.
- Do not display the banner for devices inside scheduled maintenance or during globally silenced incident windows.
- When no open gap exists, display "Last heartbeat {elapsed}" in the device header with ≤1-second resolution up to 60 seconds, then minute granularity.
Excluding Scheduled Maintenance Windows from Gap Logic
- Given a maintenance window [mw_start, mw_end] for a device or client, when evaluating heartbeats, then do not open gap events fully contained within the window.
- If a potential gap overlaps a maintenance window partially, then clip the gap to exclude the maintenance period (start at mw_end if it began before; end at mw_start if it ended after); if the remaining segment is ≤ EHI, do not create a gap.
- Maintenance-covered time is excluded from total downtime and MTTR calculations.
- The timeline does not render gap highlights within maintenance windows.
- The API does not return gap segments fully covered by maintenance; partially overlapping gaps are returned with clipped start/end.
Noise Suppression During Known Network Incidents
- Given an active network incident window [i_start, i_end], when multiple devices miss heartbeats, then for each device create at most one gap covering the intersection of [i_start, i_end] with the device's actual outage and tag cause = "network_incident".
- Suppressed micro-gaps within the incident window do not generate additional events or alerts.
- Summary metrics count incident-covered downtime once per device; duplicate micro-gaps are not counted.
- When an incident window ends and the device remains offline, then keep the gap open beyond i_end and close it upon recovery.
Downtime and MTTR Summary Widget over Selected Range
- Given a selected date range, when gaps are computed with maintenance exclusions and incident suppression, then display:
  - total_downtime = sum of durations of all gap events intersecting the range (open gaps clipped at range end), excluding maintenance-covered time.
  - gap_count = number of unique gap events intersecting the range after suppression/merging.
  - mttr = average duration of closed gaps that started within the range; if none, display "N/A".
- Values update within 1 second when the date range or filters (client, device) change.
- Zero-state: total_downtime = 0, gap_count = 0, mttr = "N/A" when no gaps exist.
- Metrics respect active filters and exclude maintenance time by design.
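The summary arithmetic can be sketched directly from those definitions. This assumes maintenance exclusion and incident suppression already ran upstream, and uses an illustrative gap shape with epoch-second times:

```python
def summarize(gaps, range_start, range_end):
    """Compute total_downtime, gap_count, and mttr for gaps intersecting the
    selected range; open gaps (end is None) are clipped at the range end."""
    total = 0
    count = 0
    mttr_samples = []
    for g in gaps:
        start = g["start"]
        end = g["end"] if g["end"] is not None else range_end
        if end <= range_start or start >= range_end:
            continue  # no intersection with the selected range
        count += 1
        total += min(end, range_end) - max(start, range_start)
        # MTTR averages closed gaps that *started* within the range.
        if g["end"] is not None and range_start <= start < range_end:
            mttr_samples.append(g["end"] - start)
    mttr = sum(mttr_samples) / len(mttr_samples) if mttr_samples else "N/A"
    return {"total_downtime": total, "gap_count": count, "mttr": mttr}
```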
Gap Events API for Downstream Reporting
- Given a valid request GET /v1/gaps?clientId={id}&from={ISO_UTC}&to={ISO_UTC}[&deviceId={id}][&limit][&cursor], when authorized, then respond 200 with:
  - items: [{id, clientId, deviceId, start, end|null, durationSeconds, status: "open"|"closed", cause: "normal"|"network_incident", maintenanceExcludedSeconds}]
  - page: {limit, nextCursor}
- Only gaps that intersect [from, to] are returned; start/end are full event times (not clipped); durationSeconds excludes maintenance-covered periods.
- Stable ordering by start desc then id; response time ≤ 500 ms for ≤ 5,000 gaps.
- 400 for invalid parameters, 401 for missing/invalid auth, 404 for unknown clientId.
- Contract tests verify schema and application of maintenance exclusion and incident suppression rules.
Firmware Version Overlay & Anomaly Flags
"As an IoT integrator, I want firmware versions overlaid on the timeline so that I can spot outdated software that may be causing instability."
Description

Overlay firmware versions as inline labels and change markers on the timeline, indicating when a device upgraded or downgraded. Cross-check against a firmware catalog to flag deprecated or vulnerable versions and annotate known issues. Allow filtering the timeline by firmware version and provide quick links to device details for remote update actions where available. Ensure version metadata is cached and displayed even if signal data is sparse. Include audit logs for version state changes.

Acceptance Criteria
Inline Version Labels & Change Markers on Timeline
Given a client's Signal Timeline is opened for a date range containing at least one device When the timeline finishes loading Then each device lane displays the current firmware version as an inline label aligned to the correct time segment And version change markers appear at the exact server-recorded change timestamps with a precision of ±5 seconds And marker tooltips show the previous and new version (e.g., “Upgraded 1.2.3 → 1.3.0”) and the timestamp in the user’s timezone
Flag Deprecated/Vulnerable Versions from Firmware Catalog
Given the firmware catalog contains entries marked as Current, Deprecated, or Vulnerable with optional known-issue notes When a device on the timeline is running a Deprecated or Vulnerable version Then a visible flag appears adjacent to the version label using distinct color/shape per status And the tooltip includes the catalog status and any known-issue note And the flag state reflects the latest catalog on page load or manual refresh without requiring app restart
Filter Timeline by Firmware Version
Given the user opens the firmware filter on the Signal Timeline When the user selects one or more firmware versions and applies the filter Then only timeline segments and events for devices running the selected versions remain fully opaque; others are hidden or dimmed And a results count and active-filter pill are shown And clearing the filter restores the full timeline within 1 second
Quick Link to Device Details and Remote Update
Given a version label or status flag is visible on a device lane When the user taps the label/flag Then the device details view opens showing device ID, model, current version, and available update target(s) And if remote update is supported for that device, a primary action to initiate update is enabled; otherwise the action is disabled with a reason tooltip And the deep link encodes device and version context so the details view reflects the same device without additional selection
Display Version Metadata During Sparse Signal Periods
Given a device has sparse or missing heartbeats in the selected time window but has known last-seen firmware metadata When the timeline is rendered Then the version label uses cached metadata to display the last known version for the appropriate time span with a visual indicator for inferred data And no placeholder “Unknown” is shown unless no firmware metadata exists for the device in the last 30 days And when new heartbeats arrive with version info, the timeline reconciles and updates labels within 5 seconds
Audit Logging for Firmware State Changes
Given a firmware version change is detected from telemetry or an admin-triggered update When the change is processed by the platform Then an immutable audit log entry is created capturing device ID, previous version, new version, detected timestamp (UTC), source (telemetry/admin/API), actor (if any), and correlation ID And audit entries are queryable by device and date range and exportable as CSV And timeline markers link to their corresponding audit log entry via ID
Differentiate Upgrades vs Downgrades with Accurate Annotations
Given a device undergoes a firmware version change within the selected range When the new version is numerically lower than the previous version per semantic versioning rules Then the timeline marker is styled as a downgrade (distinct icon/color and “Downgraded” verb) And when the new version is higher, the marker is styled as an upgrade with “Upgraded” verb And pre-release/build metadata (e.g., -beta, +build) is correctly parsed and reflected in the tooltip without misclassifying the direction of change
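Direction classification follows semantic versioning precedence: build metadata (`+...`) is ignored, and a pre-release ranks below its corresponding release. The sketch below simplifies full semver by comparing the pre-release tag as a single string tie-breaker rather than identifier by identifier:

```python
def change_direction(prev: str, new: str) -> str:
    """Classify a firmware change as Upgraded/Downgraded/Unchanged.
    Simplified semver: numeric core compared field-by-field; build metadata
    stripped; releases outrank pre-releases of the same core."""
    def parse(v):
        v = v.split("+", 1)[0]            # strip build metadata (+build)
        core, _, pre = v.partition("-")   # split off pre-release (-beta)
        nums = tuple(int(x) for x in core.split("."))
        # Second field: release (1) sorts above pre-release (0).
        return (nums, 1 if not pre else 0, pre)
    a, b = parse(prev), parse(new)
    if b > a:
        return "Upgraded"
    if b < a:
        return "Downgraded"
    return "Unchanged"
```

A production parser should implement the full semver pre-release comparison rules (e.g., `beta.2` vs `beta.10`) rather than this lexical tie-break.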
Dead Zone Detection & Trend Windows
"As a field RN, I want chronic dead zones highlighted with trends so that I can plan mitigations or adjust visit workflows before issues recur."
Description

Analyze historical RSSI and gap patterns to identify chronic low-signal areas and recurring outage windows for each client location. Surface these as shaded trend windows on the timeline with confidence scores, recurrence patterns (e.g., weekdays 6–9 PM), and contributing factors (low RSSI, high packet loss). Provide a drill-down view summarizing last 7/30 days with heatmaps and statistics (percent time below threshold, longest outage). Support geotagged devices and room-level tags when available; otherwise infer from device assignment and visit schedules. Allow export of a brief trend summary for stakeholders.

Acceptance Criteria
Chronic Dead Zone Detection (RSSI-Based)
- Given ≥14 days of telemetry, the system identifies a chronic dead zone if RSSI < -80 dBm for ≥20% of sampled time within a consistent 30-minute clock window on ≥3 distinct days.
- RSSI threshold and window size are configurable; defaults are -80 dBm and 30 minutes.
- Identified dead zones are rendered as shaded windows labeled "Chronic low signal" on the client’s timeline.
- Each window shows a confidence score (0–100) derived from support and consistency; default visibility requires confidence ≥60.
- Window start/end reflects modal bounds with ±5 minutes tolerance.
Recurring Outage Window Identification
- A recurring outage window is detected when packet loss > 5% or device offline gaps ≥5 minutes occur in the same clock window on ≥3 days within the last 30 days.
- Recurrence is summarized as human-readable patterns (e.g., "Weekdays 18:00–21:00", "Daily 02:00–03:00"); timing precision ±5 minutes.
- Each window displays confidence (0–100) and support count (days/occurrences); default visibility requires confidence ≥60.
- Windows are computed in the client’s local timezone and respect DST transitions.
- Windows with total affected duration < 30 minutes over 30 days are suppressed to reduce false positives.
Timeline Visualization and Interaction
- Shaded trend windows (dead zones and outages) render on the Signal Timeline without obscuring event markers; overlapping windows stack with distinct shading and borders.
- Hover/tap tooltip reveals: window type, recurrence label, start–end time, confidence score, support count, contributing factors (top two with percentages), and affected devices count.
- Color coding follows severity: low-signal windows appear amber and outage windows red; contrast meets WCAG AA (≥4.5:1) in light and dark themes.
- Clicking a window opens the drill-down view pre-filtered to the window’s time range and location (if applicable).
Contributing Factors Attribution
- For each detected window, the system computes contributing factors across metrics: low RSSI (< -80 dBm), high packet loss (> 5%), device offline (no heartbeat ≥5 minutes), and power loss (device power status off).
- The top two factors by weighted contribution are displayed with percentage shares totaling ≤100%.
- If no factor contributes ≥10%, the window lists the factor as "Unknown".
- Factor attribution completes within 2 seconds for a single client query and remains consistent between timeline and drill-down views.
Drill-Down Heatmaps and 7/30-Day Statistics
- The drill-down view provides heatmaps for the last 7 days and last 30 days with axes: hour of day (0–23) × day of week; color encodes percent time RSSI below threshold or device offline.
- Summary includes: percent time below threshold, percent time offline, longest outage duration, mean and median RSSI, 95th percentile packet loss, count of affected devices, and impacted rooms/geotags.
- Metrics are computed at 1-minute granularity or finer; toggling between 7 and 30 days updates visualizations within 500 ms.
- Values in drill-down match the timeline selection filters (device/location/window) within ±1 event tolerance.
Location Handling: Geotags, Room Tags, and Inference
- When geotags or room tags exist, trend windows and heatmaps group results by location and display room names or lat/long rounded to 5 decimals.
- When absent, the system infers location from device assignment and visit schedules; inferred labels include a confidence score (0–100) and an "(inferred)" suffix.
- Conflicts (device appearing in multiple rooms within 60 minutes) resolve via majority-time rule; on ties, no room is assigned and the item is labeled "Unassigned".
- Location filters in the drill-down update counts, heatmaps, and statistics consistently within 500 ms.
Export Brief Trend Summary
- From the timeline or drill-down, selecting Export generates a brief trend summary (PDF and CSV) for the current client and applied filters within 5 seconds.
- The export includes: client name, date range, timezone, top 3 recurring windows with recurrence labels, confidence scores, contributing factors, 7/30-day percent time below threshold, longest outage, and counts of affected devices/locations.
- Exports are audit-ready: include report version, generated timestamp, and unique report ID; files are UTF-8 (CSV) and ≤1 MB each.
- Exported values match on-screen figures within rounding tolerance (±1%).
Placement Recommendations for Hubs/Extenders
"As an IoT integrator, I want actionable placement tips for hubs and extenders so that I can resolve root causes faster and reduce repeat site visits."
Description

Generate context-aware placement tips when chronic weak signal or frequent gaps are detected, recommending hub/extender relocation or antenna adjustments. Combine RSSI gradients, device density, and building metadata (if provided) to suggest better placements and expected impact (e.g., +8–12 dBm). Provide step-by-step guidance, validation checks (retest RSSI), and a feedback loop for users to confirm outcomes to refine future recommendations. Clearly indicate assumptions and limits of recommendations to avoid overconfidence.

Acceptance Criteria
Chronic Weak Signal Triggers Placement Tips
Given a client’s Signal Timeline contains at least 72 hours of device telemetry And at least 2 devices under the client show median RSSI ≤ -85 dBm for ≥ 30% of samples in the last 24 hours Or the client has ≥ 5 connectivity gaps longer than 2 minutes across devices in the last 24 hours When the user opens the Signal Timeline or the background analyzer runs hourly Then the system generates a single consolidated placement recommendation within 5 seconds And the recommendation identifies affected zones and device count with last-seen timestamps And no recommendation is generated if thresholds are not met And no more than one active recommendation exists per location cluster
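The trigger condition can be sketched as a pure predicate. The criterion's "median RSSI ≤ -85 dBm for ≥ 30% of samples" is read here as "at least 30% of a device's last-24h samples at or below -85 dBm"; that reading, the input shapes, and the helper name are assumptions:

```python
def weak_signal_trigger(device_samples: dict, gap_count: int) -> bool:
    """True when >=2 devices are chronically weak (>=30% of last-24h samples
    at or below -85 dBm) or >=5 gaps longer than 2 minutes occurred across
    devices. `device_samples` maps device_id -> list of RSSI readings (dBm);
    `gap_count` counts qualifying gaps in the last 24 hours."""
    weak = sum(
        1 for samples in device_samples.values()
        if samples and sum(s <= -85 for s in samples) / len(samples) >= 0.30
    )
    return weak >= 2 or gap_count >= 5
```

When the predicate is false, no recommendation is generated; when true, the analyzer would consolidate affected devices into a single recommendation per location cluster.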
Context-Aware Recommendation Uses Multi-Factor Inputs
Given building metadata (e.g., floor count, square footage, construction materials) may or may not be available When generating a placement recommendation Then the algorithm uses at least two of the following inputs: RSSI gradient across locations, device density, building metadata (if present) And it outputs a proposed new hub/extender position within a 3 m radius and/or antenna azimuth within ±15° And it estimates expected median RSSI improvement as a range (e.g., +8 to +12 dBm) with a confidence score between 0.0 and 1.0 And if building metadata is absent, the recommendation explicitly lists assumptions and reduces confidence by at least 0.1
Actionable Step-by-Step Placement Guidance
Given a placement recommendation is displayed to the user When the user expands recommendation details Then the UI shows a numbered 3–7 step procedure including required tools, estimated time (in minutes), safety notes, and a rollback step And steps reference a floor plan or placement map overlay indicating the suggested location/orientation And the guide includes a Start Validation action to begin post-change checks
Post-Change RSSI Validation and Auto-Recheck
Given the user applies the recommendation and taps Start Validation or the system detects a hub/extender relocation greater than 3 m When validation begins Then the system collects telemetry from affected devices for 15 minutes (configurable) And it computes success if median RSSI improves by ≥ 8 dBm and connectivity gap rate decreases by ≥ 50% versus the prior 24-hour baseline And it marks the recommendation as Pass, Fail, or Partial within 2 minutes of completing data collection And for Fail or Partial, the system proposes up to 2 alternate placements ranked by confidence
User Feedback Loop to Confirm Outcomes
Given a validation result is available When the user is prompted for outcome Then the user can select Worked, Partially Worked, or Didn’t Work and optionally enter a 140–500 character note And submission requires no more than 2 taps/clicks And the system logs the outcome and updates future recommendation confidence via online learning And the user can dismiss with a reason (e.g., Not Enough Time), which suppresses prompts for 24 hours
Transparent Assumptions and Confidence Boundaries
Given a recommendation is shown When data sufficiency is below thresholds (history < 24 hours or number of devices < 3) Then an Assumptions & Limits banner is displayed listing missing data and model caveats with confidence ≤ 0.6 And expected impact is shown as a range with a 90% interval; no single-point estimate is presented And Apply Recommendation remains disabled until the user acknowledges the banner
Auditability and Compliance Reporting
Given any recommendation lifecycle event occurs (generated, viewed, applied, validated, feedback recorded) When the event is saved Then the system stores timestamp, user ID, hashed device IDs, firmware version, algorithm version, thresholds used, and outcome And events appear in Signal Timeline > Recommendation History within 10 seconds And the client-level audit report exports these fields to CSV and PDF on demand
Role-based Filters, Zoom Controls, and Shareable Snapshot
"As an RN or integrator, I want role-appropriate filters, easy zooming, and a shareable snapshot so that I can tailor my view and communicate findings quickly."
Description

Deliver tailored default views and filters for IoT Integrators vs RNs (e.g., integrators see device lanes and technical metrics by default; RNs see simplified health states). Provide intuitive pinch-to-zoom, pan, and time-range presets, plus quick filters (device, room, firmware, state). Enable one-click generation of a shareable snapshot (PNG/PDF) of the current timeline view with captioned context (time range, filters, client), adhering to CarePulse access controls and excluding PHI beyond device identifiers permitted by role. Store recently used filter sets per user for rapid reuse.

Acceptance Criteria
Default Role-Based Timeline Views
- Given a user with role "IoT Integrator" opens a client’s Signal Timeline for the first time in the last 90 days, When the view loads, Then device lanes (per device), heartbeat markers, RSSI graph, and firmware badges are visible by default, and health-state badges are collapsed.
- Given a user with role "RN" opens the same timeline for the first time in the last 90 days, When the view loads, Then a simplified health-state view (OK/Warning/Gap) is visible by default with device technical metrics hidden behind a toggle.
- Given a user changes their role and reloads the timeline, When the page loads, Then the default view matches the new role’s defaults.
- Given any role has manually toggled visibility of technical overlays during a session, When the user clicks "Reset to Role Defaults", Then the timeline returns to the role’s default configuration within 300 ms.
Quick Filters for Device, Room, Firmware, State
- Given a loaded timeline, When the user applies a filter by device, Then only matching device lanes render and an Active Filters chip shows the device count.
- Given multiple filters across categories (device, room, firmware, state), When applied, Then the filter logic is AND across categories and OR within a category (e.g., room=A OR room=B) and the result count updates in <300 ms.
- Given a firmware filter, When the user enters a semantic version range (e.g., ">=1.4.0 <1.6.0"), Then only events/devices with firmware in that range display.
- Given a state filter results in no matches, When applied, Then the UI displays "No events in selected range" and a Clear Filters action within 300 ms.
- Given filters are cleared, When the user taps Clear Filters, Then the timeline returns to the role-based default view.
- Performance: applying any single filter executes and renders the result in ≤500 ms for up to 200 devices and 10,000 events on a mid-tier mobile device.
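The AND-across-categories / OR-within-a-category rule reduces to a single predicate per device. The dictionary shapes here are illustrative, not the product data model:

```python
def matches(device: dict, filters: dict) -> bool:
    """AND across filter categories, OR within a category.
    `device` maps category -> value (e.g., {"room": "A"});
    `filters` maps category -> set of allowed values; empty/absent
    categories are unconstrained."""
    return all(device.get(cat) in allowed
               for cat, allowed in filters.items() if allowed)
```

Membership in a set expresses the within-category OR (room=A OR room=B), and `all(...)` expresses the cross-category AND.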
Pinch-to-Zoom, Pan, and Time Presets
- Given a touch-capable device, When the user performs a two-finger pinch on the timeline, Then the time axis zooms centered on the gesture focal point with a minimum granularity of 1 minute and a maximum window of 30 days.
- Given any device, When the user pans horizontally (one-finger drag on touch or click-drag on pointer), Then lanes remain vertically aligned with ≤1 px drift and no vertical scroll is introduced.
- Given time presets (1h, 6h, 24h, 7d, Custom), When a preset is selected, Then the visible range updates to that duration relative to the client’s timezone and start/end timestamps display in the client’s local time format.
- Performance: 95th percentile interaction latency for zoom and pan is ≤100 ms per frame on a mid-tier mobile device.
Shareable Snapshot (PNG/PDF) with Context Caption
- Given a user with access to a client’s timeline, When the user taps "Share Snapshot", Then options to export PNG and PDF are shown with a preview thumbnail.
- When the user confirms export, Then the file generates within 3 seconds at 2x device pixel ratio (PNG) or vector content (PDF), capturing exactly the current viewport (time range, zoom, visible lanes, overlays) and excluding off-screen content.
- The snapshot caption includes: client name/ID, time range (ISO 8601 start–end with timezone), active filters, user name/ID, generation timestamp, and CarePulse app version.
- Access control: export is blocked for users without client access with an error message; successful exports are logged with user ID, client ID, time range, and filter hash.
- Failure handling: if generation fails, Then no file is downloaded, an error toast is shown, and an audit log entry records the failure reason.
PHI Redaction and Role-Based Field Whitelisting
- Given an RN role viewing the timeline or generating a snapshot, Then no PHI (patient name, address, DOB, free-text notes) is rendered; only device aliases and allowed identifiers per RN policy appear.
- Given an IoT Integrator role, Then device serials, RSSI, firmware, and heartbeat details may render, but no PHI is displayed in the UI or snapshot.
- The snapshot renderer uses a role-based whitelist of fields; automated tests verify that disallowed fields are absent by asserting a zero-match for PHI tokens in the rendered output.
- Attempting to export while a disallowed overlay is toggled results in automatic exclusion of disallowed elements from the snapshot without altering the on-screen view.
Recently Used Filter Sets Persisted per User
- Given a user applies any combination of filters and a time preset or custom range, When they apply or modify filters, Then the system records the resulting filter set as a recent item (deduplicated by parameters).
- The system stores up to 10 recent filter sets per user per client, sorted by most recent use, persisted across sessions and devices.
- Given a stored recent set, When the user selects it from "Recent Filter Sets", Then the timeline applies the exact filters and time range in ≤500 ms and the Active Filters chips reflect the selection.
- Given a user deletes a recent set, When they confirm deletion, Then it is removed from their list immediately without affecting other users.

Hot Swap

Guided device replacement that transfers pairing, calibration, and documentation to a backup sensor in under a minute. Auto-updates visit notes and chain-of-custody logs, and notifies stakeholders of the swap. Keeps care moving without IT tickets while preserving compliance.

Requirements

Quick Swap Wizard UX
"As a caregiver, I want a guided, step-by-step swap flow so that I can quickly replace a sensor without IT help and keep care on schedule."
Description

A mobile-first, step-by-step wizard that guides caregivers through replacing a failing or expired sensor with a backup in under a minute. The flow includes scanning the backup device via QR/NFC/BLE, confirming the associated patient and active visit, authenticating the caregiver (PIN/biometric), and presenting real-time checks (connectivity, battery, firmware, compatibility). It provides clear progress indicators, inline tips, and recoverable error handling (retry scan, manual entry, cancel/rollback). The wizard operates offline by caching the swap event and deferring server updates until connectivity resumes, with conflict resolution rules to prevent duplicate or out-of-order swaps. Accessibility, localization, and role-based access are supported. Telemetry captures completion time and failure reasons to continuously improve guidance and reliability.

Acceptance Criteria
Happy Path: Swap via QR/NFC/BLE in Under 60 Seconds
Given a caregiver is in an active visit for the correct patient and a failing sensor is detected And a provisioned backup sensor is available When the caregiver launches the Quick Swap Wizard and scans the backup via QR, NFC, or BLE discovery Then the wizard validates the backup device is eligible (owned by agency, not already assigned) within 5 seconds And displays a step-by-step progress indicator throughout And transfers pairing, calibration, and documentation to the backup device And confirms the patient and active visit before finalizing And updates visit notes and chain-of-custody logs automatically And notifies configured stakeholders of the swap And shows a success confirmation And the total elapsed time from wizard start to success is 60 seconds or less on a standard connection
Authorized Caregiver Authentication and Access Control
Given a caregiver initiates a swap for an active visit When the wizard requests authentication Then biometric authentication is accepted if enabled; otherwise a PIN is required And the PIN entry is masked and enforces the configured policy (e.g., 4–6 digits) And three consecutive failed PIN attempts trigger a 60-second lockout with clear messaging And only users with Device Swap permission and assigned to the patient’s visit can proceed And unauthorized attempts are blocked with a "Not authorized to swap device for this visit" message And all authentication attempts (success/failure) are timestamped and recorded in the audit log
Real-Time Device Health and Compatibility Checks
Given a backup device has been scanned When the wizard runs pre-swap checks Then connectivity to the device is established within 5 seconds or a specific connectivity error is shown And battery level is verified to be >= 20%; otherwise the swap is blocked with guidance to charge or select another device And firmware version meets the minimum supported version; if not, the user is prompted to update or the swap is blocked And device model and calibration profile are verified compatible with the patient’s plan of care And any failing check presents a clear, inline tip with a single-tap resolution path
Recoverable Errors: Retry, Manual Entry, and Rollback
Given a scan attempt fails or reads an invalid code When the caregiver taps Retry Then the scanner restarts within 1 second and resumes listening When the caregiver selects Manual Entry Then the wizard accepts a 12-character device ID with checksum validation and blocks submission on invalid input with inline error messaging When the caregiver taps Cancel before confirmation Then all changes are rolled back, the original device remains active, and no partial updates are sent to the server And a non-intrusive message confirms the rollback was successful
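The manual-entry validation above specifies a 12-character device ID with a checksum but not the checksum scheme. The sketch below uses a hypothetical mod-36 check character over an 11-character base-36 body purely to illustrate "blocks submission on invalid input"; the real scheme would come from the device vendor.

```python
import string

ALPHABET = string.digits + string.ascii_uppercase  # base-36 (hypothetical scheme)

def check_char(body: str) -> str:
    """Compute a position-weighted mod-36 check character (illustrative only)."""
    total = sum(ALPHABET.index(c) * (i + 1) for i, c in enumerate(body))
    return ALPHABET[total % 36]

def is_valid_device_id(device_id: str) -> bool:
    """Accept exactly 12 characters: an 11-character body plus its check character."""
    if len(device_id) != 12 or any(c not in ALPHABET for c in device_id):
        return False
    return device_id[-1] == check_char(device_id[:-1])
```

A single-character typo changes the check result, so the wizard can reject it inline before any server round trip.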
Offline Operation with Deferred, Conflict-Safe Sync
Given the device has no internet connectivity When the caregiver completes the swap wizard Then the swap event (user ID, visit ID, old/new device IDs, timestamps) is persisted locally with status Pending Sync And visit notes and chain-of-custody are updated locally to reflect the swap When connectivity is restored Then the pending swap is synchronized to the server within 30 seconds And synchronization uses an idempotency key to prevent duplicate swaps And if the server acknowledges an existing newer swap for the same visit or device, the local event is marked Conflicted and the caregiver is prompted to keep server state or retry
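The offline criterion above hinges on an idempotency key making retries safe. One common way to get such a key, assumed here for illustration, is a hash over the identifying fields of the swap; the `SwapServer` class is a toy stand-in for the real backend, not CarePulse's API.

```python
import hashlib

def idempotency_key(user_id: str, visit_id: str, old_dev: str, new_dev: str) -> str:
    """Derive a stable key so retries of the same swap are recognized server-side."""
    raw = "|".join([user_id, visit_id, old_dev, new_dev])
    return hashlib.sha256(raw.encode()).hexdigest()

class SwapServer:
    """Toy server: applies a given swap at most once per idempotency key."""
    def __init__(self):
        self.applied = {}  # key -> swap event

    def apply_swap(self, key: str, event: dict) -> str:
        if key in self.applied:
            return "duplicate-ignored"  # retry of an already-applied swap
        self.applied[key] = event
        return "applied"

def sync_pending(server: SwapServer, pending: list) -> list:
    """Replay locally cached swap events once connectivity returns."""
    results = []
    for event in pending:
        key = idempotency_key(event["user"], event["visit"], event["old"], event["new"])
        results.append(server.apply_swap(key, event))
    return results
```

Because the key is derived from the event itself, replaying the same queue twice (e.g., after an app restart mid-sync) cannot create a second swap record.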
Accessibility and Localization Compliance
Given the caregiver uses system accessibility features When navigating the Quick Swap Wizard Then all actionable elements have descriptive accessibility labels and logical focus order And text supports Dynamic Type up to 200% without truncation or loss of functionality And color contrast for text and interactive elements is at least 4.5:1 And all non-text status changes are conveyed with accessible notifications And the wizard is localized in supported languages (e.g., English and Spanish), including dates and numbers formatted per locale
Telemetry: Duration and Failure Reason Capture
Given any Quick Swap Wizard session starts When steps are completed, succeeded, or failed Then telemetry records start_time, end_time, duration_ms, user_id, visit_id, old/new device IDs, completion_status, failure_reason_code (from a controlled list), and step_at_failure And telemetry events are stored locally when offline and uploaded within 5 minutes of connectivity or app close And 95% of successful swaps have duration_ms <= 60000 recorded in telemetry And no sensitive PII (beyond user_id and device identifiers) is captured in telemetry payloads
Pairing & Patient Association Transfer
"As a caregiver, I want the new sensor to automatically take over the patient’s session so that monitoring continues without data loss or extra steps."
Description

Securely transfers active device pairing and patient/visit association from the old sensor to the backup sensor, ensuring a seamless data stream with no duplicate or missing records. The process handles BLE pairing, key exchange, and device registry updates, then gracefully unpairs or revokes the old sensor while bringing the new sensor online. It timestamps the swap event, stops the old data feed, starts the new feed, and guarantees idempotency if a retry occurs. APIs update the server-side device-to-patient mapping, and the mobile client persists state for recovery after app restarts or network loss. All changes are auditable and respect role permissions.

Acceptance Criteria
BLE Pairing and Secure Key Exchange Under 60 Seconds
Given an authenticated user with a permitted role and an active visit with an online old sensor And a powered backup sensor within BLE range When the user initiates Hot Swap to the backup sensor Then the app discovers the backup sensor within 10 seconds And completes BLE secure bonding and key exchange successfully And verifies device identity against the device registry And completes the pairing and handshake in under 60 seconds end-to-end And encrypts subsequent traffic using the negotiated session keys
Zero-Gap, Zero-Duplicate Data Stream Cutover
Given an active continuous data feed from the old sensor When the swap is confirmed Then the old feed is stopped within 2 seconds And the new feed starts within 5 seconds of the stop And the combined data timeline has no gaps and no duplicates as validated by sequence IDs And the maximum timestamp drift between the last old record and the first new record is less than 2 seconds And downstream consumers receive a single, ordered stream for the visit
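The "no gaps, no duplicates, ordered by sequence IDs" check above can be validated mechanically. A minimal sketch, assuming records carry a monotonically increasing `seq` and a `ts` in seconds (the record shape is an assumption for the example):

```python
def validate_cutover(old_records: list, new_records: list) -> dict:
    """Check the merged old+new stream for gaps and duplicates via sequence IDs."""
    seqs = [r["seq"] for r in old_records + new_records]
    # A contiguous, duplicate-free stream is exactly min..min+len-1.
    expected = list(range(min(seqs), min(seqs) + len(seqs)))
    return {
        "no_duplicates": len(seqs) == len(set(seqs)),
        "no_gaps": sorted(seqs) == expected,
        # Timestamp drift between last old record and first new record (seconds).
        "drift_s": new_records[0]["ts"] - old_records[-1]["ts"],
    }
```

Downstream consumers can run this at the swap boundary and reject the cutover if `drift_s` reaches the 2-second limit or either flag fails.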
Idempotent Swap with Safe Retries
Given a swap operation has been initiated with a unique correlation ID When the same swap is retried due to timeout or app restart within 5 minutes Then exactly one swap event is recorded in the audit log And the device-to-patient mapping reflects the backup sensor only once And no duplicate notifications or registry updates are emitted And the operation completes without error if the original already succeeded
Atomic Server Mapping Update with Audit Trail
Given a valid swap confirmation from the client When the server processes the request Then the device-to-patient and device-to-visit mappings are updated atomically And the update is durable and visible in APIs within 10 seconds And an immutable audit record is written containing actor ID, ISO 8601 timestamp with timezone, old and new device IDs, patient and visit IDs, and reason code And a chain-of-custody entry is appended within 5 seconds
RBAC Enforcement for Swap Initiation
Given a user without swap permissions attempts the operation When they initiate Hot Swap Then the request is rejected with 403 Forbidden And no changes are made to pairing, feeds, or mappings And an audit record of the denial is recorded Given a user with swap permissions When they initiate Hot Swap Then the operation proceeds subject to other acceptance criteria
Client Persistence and Offline Recovery
Given the app crashes or is closed during a swap When it restarts within 15 minutes Then it restores the in-progress swap state from local storage And resumes at the correct step without requiring the user to start over And no duplicate swap events are created Given network connectivity is lost during the swap When connectivity is restored Then pending API calls are retried with the same correlation ID And the final server state converges to a single successful swap within 30 seconds of restoration
Old Sensor Deactivation and Registry Revocation
Given a successful swap to the backup sensor When deactivation is triggered for the old sensor Then the mobile app unpairs from the old sensor within 5 seconds And the server marks the old sensor revoked for the visit immediately And any data received from the old sensor after revocation is rejected and not persisted And the mobile app prevents reconnection to the old sensor for the duration of the visit
Calibration State Migration & Verification
"As a caregiver, I want calibration to carry over or be quickly re-established so that readings remain accurate and compliant after the swap."
Description

Migrates applicable calibration parameters from the old sensor to the backup sensor and verifies accuracy before resuming monitoring. If direct transfer is not compatible (e.g., different model/firmware), the system runs a 30–60 second guided auto-calibration and validates baseline readings against acceptable ranges. The workflow records calibration method, parameters, and validation results, and enforces a fail-safe that blocks completion if accuracy cannot be verified. Cross-model mapping rules and firmware compatibility matrices are maintained server-side and cached client-side for offline operation.
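The direct-transfer-vs-auto-calibration decision described above can be sketched as a lookup against the cached compatibility matrix. The matrix shape (model pair → minimum firmware tuple) and function name are assumptions for illustration:

```python
def choose_calibration_method(old_model, new_model, new_fw, cached_matrix):
    """Decide direct migration vs guided auto-calibration from a cached matrix.

    cached_matrix maps (old_model, new_model) -> minimum firmware (as a tuple)
    required for direct transfer. A missing entry, insufficient firmware, or no
    cached dataset at all falls back to guided auto-calibration.
    """
    if cached_matrix is None:
        # Offline with no cached dataset: default to guided auto-calibration.
        return "auto-calibration"
    min_fw = cached_matrix.get((old_model, new_model))
    if min_fw is None or new_fw < min_fw:
        return "auto-calibration"
    return "direct-migration"
```

Representing firmware as version tuples (e.g., `(2, 4, 1)`) keeps the comparison correct where a string comparison would not (`"2.10" < "2.9"` lexically).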

Acceptance Criteria
Compatible Direct Migration and Verification Pass
Given the old and backup sensors are marked compatible by the firmware/mapping matrix and the old sensor has a valid calibration state When the caregiver initiates Hot Swap Then the system transfers the calibration parameters to the backup sensor And applies any cross-model parameter mapping rules And runs the verification routine against device-specific baseline thresholds And resumes monitoring only after verification passes And writes a calibration log entry including method=direct-migration, parameters transferred, validation metrics, result=pass, timestamps, user ID, and old/new sensor IDs
Incompatible Models Trigger Guided Auto-Calibration
Given the sensors are not compatible for direct transfer per the matrix When the caregiver initiates Hot Swap Then the system launches the guided auto-calibration wizard And completes the calibration in 30–60 seconds under nominal conditions And validates baseline readings within acceptable ranges for the device And resumes monitoring upon validation pass And records method=auto-calibration, parameters used, validation metrics, result=pass in the calibration log
Verification Failure Blocks Completion
Given direct migration or auto-calibration completes but validation does not meet thresholds When the system evaluates accuracy Then the workflow blocks completion and prevents monitoring from resuming And displays an actionable error with retry and manual calibration options And sends a failure notification to designated stakeholders And records method, parameters, validation metrics, result=fail, and reason codes in the calibration log
Offline Operation Using Cached Rules
Given the mobile device has no network connectivity and a cached compatibility/mapping dataset is available When the caregiver initiates Hot Swap Then the system uses the cached dataset to decide direct transfer vs auto-calibration And proceeds without network calls And queues all calibration logs and telemetry for sync when connectivity returns And if no cached dataset is available, defaults to guided auto-calibration and logs the offline decision
Firmware Compatibility Enforcement
Given the backup sensor firmware does not meet the minimum version required for calibration migration per the matrix When Hot Swap is initiated Then the system disallows direct parameter transfer And routes to guided auto-calibration And displays a non-blocking recommendation to update firmware after the visit And records the matrix version and rule ID that drove the decision in the calibration log
Audit Trail, Notes, and Chain-of-Custody Update
Given calibration verification passes (via migration or auto-calibration) When the swap completes Then the system updates visit notes with calibration method, parameter summary, and validation result And updates chain-of-custody with old/new sensor IDs, timestamps, and actor And stores an immutable, timestamped calibration log entry linked to the visit and device records
Compatibility & Safety Gate
"As an operations manager, I want pre-swap safety checks so that caregivers only deploy compliant devices and avoid regulatory or clinical risks."
Description

Evaluates swap eligibility before execution by checking device model compatibility, firmware version, battery level thresholds, maintenance/sterility status, program-specific requirements, and policy constraints (e.g., payer/regional rules). Hard blocks are enforced for noncompliant swaps with clear reasons and next-step guidance, while soft warnings are surfaced for non-critical issues. The system recommends compliant backup devices from available inventory if the selected device fails checks. All gate decisions are logged for auditability and can be configured by administrators.

Acceptance Criteria
Hard Block on Incompatible Model or Firmware
Given a primary device flagged for hot swap and a candidate backup device has been scanned, When the Compatibility & Safety Gate evaluates model pairing and minimum firmware requirements for the active program, Then the swap is hard-blocked if the model pairing is not in the allowed mapping or the candidate firmware is below the minimum, And the UI displays a blocking banner listing each failed check by name (e.g., Model Pairing, Firmware Version), the actual vs required values, and next-step guidance including at least one compliant action (e.g., update firmware, select compatible model), And the Swap action is disabled and no override control is available, And the decision is logged with timestamp, user ID, visit/patient ID, primary/candidate device IDs, model and firmware values, program ID, rule set version, failed check IDs, and outcome=Blocked, And the evaluation completes in 1.5 seconds or less for 95% of attempts.
Battery Threshold Enforcement (Hard Block vs Soft Warning)
Given program-specific battery thresholds are configured with a hard block at X% and a soft warning at Y% (where X < Y), And a candidate backup device is scanned for swap, When the gate reads the candidate battery level, Then if level < X%, the swap is hard-blocked with reason "Battery below hard threshold" and next steps (charge/choose another device), And if X% ≤ level < Y%, a soft warning is displayed with the exact level and threshold, the user can proceed only after explicit acknowledgment (checkbox or confirm), And if level ≥ Y%, no battery warning is shown, And all outcomes (hard block, warning acknowledged, no warning) are logged with the thresholds used and user acknowledgment state, And threshold values are read from the active rule set and displayed in the UI.
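The three-way battery outcome above (hard block below X, acknowledged warning between X and Y, pass at or above Y) maps directly to a small classifier. A sketch, with the return shape chosen for the example rather than taken from CarePulse:

```python
def battery_gate(level: float, hard_pct: float, soft_pct: float) -> dict:
    """Classify a candidate device's battery level against program thresholds.

    hard_pct (X) and soft_pct (Y) come from the active rule set, with X < Y.
    """
    assert hard_pct < soft_pct, "rule set must define X < Y"
    if level < hard_pct:
        return {"outcome": "hard-block",
                "reason": "Battery below hard threshold",
                "next_steps": ["charge device", "choose another device"]}
    if level < soft_pct:
        return {"outcome": "soft-warning",
                "requires_ack": True,
                "reason": f"Battery {level:.0f}% below warning threshold {soft_pct:.0f}%"}
    return {"outcome": "pass"}
```

The thresholds are parameters, matching the criterion that values are read from the active rule set rather than hard-coded.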
Maintenance, Sterility, and Calibration Validity Checks
Given the system has last maintenance date, sterility status, and calibration validity for the candidate device, When the gate evaluates the candidate against the program’s rule set, Then if any required item is expired or marked failed (maintenance overdue, sterility breach, calibration invalid), the swap is hard-blocked with individual failure reasons and dated evidence, And if any item is within the configurable early-warning window (e.g., expires within N days from today as defined in the rule set), a soft warning is shown and proceed requires user acknowledgment, And all results include the specific dates, data sources, and rule identifiers in the audit log, And the UI presents next-step guidance tailored to the failed item (e.g., "Replace sterile pack", "Send for calibration").
Program, Payer, and Regional Policy Constraint Evaluation
Given a visit is associated to a program with linked payer/regional policy rules and versions, And a candidate device is scanned for swap, When the gate evaluates policy constraints (e.g., allowed device classes, required sensors, encryption flags), Then if any mandatory policy rule fails, the swap is hard-blocked listing each violated policy by ID and title with a link to the policy detail and human-readable explanation, And if only optional guidance rules fail, a soft warning is displayed that allows proceed with user acknowledgment, And the audit log records the policy rule IDs, versions, evaluation results (pass/fail/optional), and the final decision, And the decision trace is viewable in the swap details screen.
Compliant Backup Device Recommendations from Inventory
Given the initial candidate fails the gate (or the user requests recommendations), When the system queries the organization’s available inventory with location context, Then it returns a ranked list of compliant devices that pass all hard checks for the active program, sorted by compliance score, proximity, battery level, and last maintenance recency, And each recommendation displays key attributes (model, firmware, battery %, maintenance date) and the reasons it is recommended, And selecting a recommended device pre-populates the candidate and re-runs the gate, which must pass if the data has not changed, And if no compliant devices are available, the UI states "No compliant devices found" and presents at least one actionable next step (adjust filters, request delivery), And recommendation results render within 2 seconds for 95% of queries, and the query/selection is logged.
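The four-key ranking above (compliance score, proximity, battery, maintenance recency) is a straightforward composite sort. A minimal sketch, assuming each inventory record carries the fields shown; the field names are illustrative:

```python
def rank_backups(devices: list) -> list:
    """Rank compliant inventory: compliance score desc, proximity asc,
    battery desc, then most recently maintained first."""
    compliant = [d for d in devices if d["passes_hard_checks"]]
    return sorted(
        compliant,
        key=lambda d: (-d["compliance_score"],   # higher score first
                       d["distance_km"],          # nearer first
                       -d["battery_pct"],         # fuller battery first
                       d["maintained_days_ago"])) # more recent maintenance first
```

Devices failing any hard check are filtered out before ranking, so the list only ever recommends swap candidates that would pass the gate.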
Soft Warning Presentation and Acknowledgment Capture
Given the gate detects one or more soft-warning conditions, When the swap flow displays warnings, Then warnings are grouped, clearly labeled as non-blocking, and show the specific values and thresholds involved, And proceed is disabled until the user explicitly acknowledges each warning via a single consolidated control or per-warning checkboxes (as configured), And the acknowledgment text includes the exact warning titles and a timestamp is captured, And the audit log records the warnings presented, the user’s acknowledgment, and the final decision outcome, And removing the warning condition on re-evaluation removes the requirement for acknowledgment.
Decision Logging, Export, and Configurability Versioning
Given any gate evaluation (pass, soft-warning proceed, or hard block) occurs, When the evaluation completes, Then an immutable log entry is created capturing: timestamp, user ID and role, visit/patient ID, primary and candidate device IDs, all checks evaluated with pass/fail and measured values, thresholds used, active rule set ID and version, policy IDs/versions, decision outcome, reasons shown, user acknowledgments, and any recommended device selections, And admins can retrieve logs via an audit report by filters (date range, program, user, device, outcome) and export to CSV or JSON, And gate rule sets are configurable by authorized admins with versioning and effective dates; publishing a new version affects subsequent evaluations only and all config changes are themselves audit-logged with who/when/what, And unauthorized users cannot modify rule sets (attempts are denied and logged).
Chain-of-Custody Auto-Logging
"As a compliance officer, I want a complete, tamper-evident swap record so that audits can be passed without manual reconstruction."
Description

Automatically generates an immutable chain-of-custody record for each swap, capturing old/new device IDs, patient and caregiver identifiers, timestamps, location (if permitted), reason codes, calibration checksum, and pre/post reading snapshots. The event is digitally signed, tamper-evident, and stored in the compliance data store, with retention policies applied. The log is instantly available in one-click, audit-ready reports and exportable via API and standardized formats. Optional caregiver and supervisor e-signatures can be required based on policy.

Acceptance Criteria
Auto-log creation on successful swap
Given a caregiver completes a Hot Swap from old device ID X to new device ID Y for patient P during an active visit When the swap confirmation is submitted Then the system creates exactly one chain-of-custody event with a unique event_id And the event includes: old_device_id=X, new_device_id=Y, patient_id=P, caregiver_id=C, swap_initiated_ts, swap_confirmed_ts, reason_code ∈ [Damaged, Battery, CalibrationDrift, Lost, Other], calibration_checksum (SHA-256 hex, length=64), pre_reading_snapshot{value, unit, ts}, post_reading_snapshot{value, unit, ts} And (if location collection is permitted) latitude, longitude, and accuracy_m are recorded; otherwise location fields are null and location_reason is set And if reason_code=Other then reason_notes length is between 1 and 250 characters And the event is persisted to the compliance data store within 5 seconds and a success acknowledgment is displayed to the user
Tamper-evident digital signature and immutability
Given a chain-of-custody event is created When the event is stored Then the system computes a SHA-256 hash of the canonical event payload and signs it using ECDSA P-256 with a platform-managed private key And the stored record contains signature, public_key_kid, and payload_hash fields And any attempt to modify a stored event via UI or API is rejected (HTTP 409) and no changes are persisted And a verification endpoint validates the signature and returns signature_valid=true for the stored payload and signature_valid=false if any field is altered
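The hash half of the tamper-evidence criterion above can be shown concretely (the ECDSA signing step is omitted here; in practice the private key would live in a platform KMS). Canonical serialization — sorted keys, no incidental whitespace — is what makes the hash stable across clients:

```python
import hashlib
import json

def payload_hash(event: dict) -> str:
    """SHA-256 over a canonical (sorted-key, compact) JSON serialization."""
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify_payload(event: dict, stored_hash: str) -> bool:
    """True only if the event is byte-for-byte what was originally hashed."""
    return payload_hash(event) == stored_hash
```

Altering any field — even reordering would not matter, but changing a value does — produces a different digest, which is what lets the verification endpoint return `signature_valid=false` on tampering.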
Location capture respects permissions
Given agency policy permits location capture and the caregiver device has granted OS-level location permission When a swap is logged Then location fields lat, lon, accuracy_m are recorded with accuracy_m ≤ 50 and a location_ts within 2 seconds of swap_confirmed_ts Given agency policy forbids location capture or the caregiver device denies OS-level permission When a swap is logged Then location fields are null and location_reason ∈ [PolicyDisabled, PermissionDenied] And no location coordinates are stored in any field or metadata for the event
Retention policy enforcement in compliance store
Given the retention policy for chain-of-custody logs is N years (configurable) When an event is stored Then the record is tagged with retention_expiry_ts = created_ts + N years and legal_hold=false by default And the record cannot be deleted via UI or API before retention_expiry_ts And after retention_expiry_ts and if legal_hold=false, the record is purged within 30 days and a deletion tombstone with event_id, payload_hash, deleted_ts, and reason=RetentionExpiry is written And if legal_hold=true at any time, the record is not purged until the hold is removed
One-click audit-ready report availability
Given a user opens Compliance Reports for a patient visit containing a swap When the user clicks "Chain-of-Custody Report" Then a report renders within 3 seconds and includes for each swap: event_id, patient_id, caregiver_id, old_device_id, new_device_id, swap_initiated_ts, swap_confirmed_ts, reason_code, calibration_checksum, pre/post reading snapshots, location (if present), and signature verification status And timestamps are ISO 8601 in the agency time zone And the report can be downloaded as PDF with identical content and as CSV (RFC 4180) with all fields present
API export in standardized formats
Given an authenticated client with scope=compliance.read When it requests GET /api/v1/chain-of-custody?patient_id=P&from=ISO8601&to=ISO8601 Then the API responds with HTTP 200 within 2 seconds for up to 1000 records and provides pagination via next_cursor And the JSON response conforms to the published JSON Schema version (schema_version present) and includes signature, payload_hash, and signature_verification_status per record And when format=csv is specified, the API returns an RFC 4180-compliant CSV with a header row and the same fields
Conditional e-signatures by policy
Given agency policy requires caregiver e-signature and optional supervisor e-signature for swap events When a swap is logged Then the caregiver is prompted to e-sign and the event stores signer_id, signature_ts, and signature_method And if supervisor signature is required, a task is created and notifications are sent to the supervisor And the event signature_status is one of [Complete, PendingCaregiver, PendingSupervisor] and is reflected in reports and API And events with required signatures missing remain in a non-complete state and are flagged in compliance dashboards
Visit Note Auto-Update & Annotation
"As a caregiver, I want the visit note to auto-update with the swap details so that my documentation stays accurate without extra typing."
Description

Updates the active visit note in real time with a structured swap annotation that includes device details, timestamps, calibration status, and any care impact notes. It adjusts charts to indicate the swap point, reconciles any brief data gaps, and links to the chain-of-custody entry. The update synchronizes to the EMR/export targets and respects documentation templates by payer/program. Tasks can be auto-generated to verify post-swap readings or perform follow-up checks.

Acceptance Criteria
Real-Time Structured Swap Annotation on Active Visit Note
Given an active visit with a primary device (Device A) paired and a backup device (Device B) available When the caregiver completes a Hot Swap Then within 5 seconds the active visit note is updated with a structured Swap Annotation block And the annotation includes: old_device_id, new_device_id, device_models, firmware_versions, swap_initiator_user_id, swap_reason, local_timestamp_start, local_timestamp_complete, utc_timestamp_start, utc_timestamp_complete, calibration_status (pass/fail/not_required with values), and optional care_impact_note And the annotation is appended immutably (no overwrites), time-ordered in the note history And the update renders on mobile and web without manual refresh
Chart Visualization Marks Swap Point and Segments Data Streams
Given the visit note includes time-series charts sourced from the device When a Hot Swap completes Then a vertical Swap marker appears at utc_timestamp_complete on all charts within 5 seconds And pre-swap data is labeled with Device A and post-swap data with Device B in legends and tooltips And derived metrics (e.g., rolling averages) are recomputed per segment and do not blend across the swap boundary And the swap marker is clickable to reveal annotation details
Data Gap Reconciliation During Swap Window
Given continuous telemetry prior to swap When a swap introduces a data gap between device disconnect and reconnect Then if the gap duration is ≤ 30 seconds, the system auto-reconciles per signal policy and flags the period as auto_reconciled in the note And if the gap duration is > 30 seconds, the note displays a Data Gap entry with start/end timestamps, duration, and cause = device_swap_pending_pairing And telemetry timestamps remain monotonic and timezone-consistent with no duplicate samples at the boundary
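The 30-second branching rule above reduces to a small classifier. A sketch, with the returned dict shape chosen for illustration:

```python
GAP_AUTO_RECONCILE_MAX_S = 30  # per the criterion above

def classify_gap(disconnect_ts: float, reconnect_ts: float) -> dict:
    """Classify the telemetry gap introduced by a device swap."""
    duration = reconnect_ts - disconnect_ts
    if duration <= GAP_AUTO_RECONCILE_MAX_S:
        # Short gap: auto-reconcile per signal policy and flag the period.
        return {"status": "auto_reconciled", "duration_s": duration}
    # Long gap: surface an explicit Data Gap entry in the visit note.
    return {"status": "data_gap",
            "duration_s": duration,
            "start": disconnect_ts,
            "end": reconnect_ts,
            "cause": "device_swap_pending_pairing"}
```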
Chain-of-Custody Linkage from Swap Annotation
Given a chain-of-custody record is generated for the device handoff When the Swap Annotation is saved Then the annotation includes chain_of_custody_id and a deep link to the exact record And following the link opens the record showing Device A → Device B transfer and calibration details And if connectivity delays record confirmation, a pending reference is stored and the link is backfilled within 60 seconds of connectivity restoration with status updated in the note
Template-Aware EMR/Export Synchronization
Given the patient’s payer/program selects a documentation template When the Swap Annotation is saved Then outbound payloads conform to the selected template’s allowed fields and redactions And updates are delivered to all configured EMR/export targets within 2 minutes or queued offline with exponential backoff and visible sync status And for templates disallowing free text, the system transmits coded swap metadata and a reference to a compliant attachment/snapshot instead of the free-text care_impact_note And the EMR/export records reference the visit note ID and swap timestamps for traceability
Auto-Generated Post-Swap Verification Tasks
Given the swap completes and calibration_status is recorded When post-swap workflows are evaluated Then tasks are auto-created per protocol: Verify post-swap readings (2 readings, 5 minutes apart), and Calibration check within 15 minutes if calibration_required = true And tasks are assigned to the correct role (caregiver or supervisor), include due times, and appear in the assignee’s task list within 10 seconds And completing or cancelling a task records outcome, user_id, timestamp, and links back to the Swap Annotation And payer/program templates can suppress or alter these tasks according to configured rules
Stakeholder Real-Time Notifications
"As a supervisor, I want timely alerts when a device is swapped so that I can confirm continuity of care and intervene if needed."
Description

Sends role-based, templated notifications about the swap to relevant stakeholders (e.g., supervisor, family, compliance) via push, SMS, or email, including deep links to the event and visit context. Notifications respect user preferences, quiet hours, and privacy settings, and they are queued for delivery with retries if offline. Throttling and deduplication prevent alert fatigue, and all deliveries are logged with status and receipt confirmations where available.

Acceptance Criteria
Role-Based Templated Notification with Deep Link
Given a hot swap event is confirmed for visit V And stakeholder roles (e.g., supervisor, family, compliance) are mapped for V And role- and channel-specific templates are configured and active When the system generates notifications for the event Then only recipients with relevant roles for V are targeted And each notification uses the correct template for the recipient’s role and channel And the message includes a deep link that opens the Hot Swap event for V with visit context in CarePulse (app if installed, otherwise web) And the deep link requires authentication and least-privilege access; unauthorized users see access denied with no PHI And SMS content excludes disallowed PHI per policy and includes a “View in CarePulse” link
Notification Delivery Latency SLO
Given normal system load and available provider connectivity When a hot swap event is confirmed Then ≥95% of first-attempt notifications are dispatched to the delivery provider within 5 seconds of event confirmation And ≥95% are reported as delivered by provider within 10 seconds for push and SMS, and within 60 seconds for email And dispatch and delivery timestamps are captured for every notification to validate the SLO
Quiet Hours and PHI Redaction
Given a recipient has quiet hours configured from H1 to H2 And a hot swap event occurs during that interval When a non-urgent notification is generated Then the notification is suppressed and queued for delivery at the next allowed time And the queued notification retains event context and is not duplicated if a later real-time send occurs outside quiet hours And notifications marked compliance-critical bypass quiet hours per policy And all SMS/email content applies the recipient’s privacy settings with required PHI redaction
User Preferences, Opt-Out, and Channel Fallback
Given a recipient has channel preferences with priority order (e.g., push > SMS > email) And the recipient has opted out of certain channels When a notification is sent for a hot swap event Then delivery is attempted via the highest-priority allowed channel only And opted-out channels are never used And if the highest-priority channel fails (e.g., app uninstalled, provider error), the next allowed channel is attempted within 30 seconds And all preference applications and fallbacks are logged with reason codes
Queueing, Retries, and Offline Delivery
Given the notification cannot be dispatched due to service or network outage When the system queues the notification Then retries occur with exponential backoff (1m, 5m, 15m, 30m, 60m, 60m) up to 6 attempts or a TTL of 2 hours, whichever comes first And retries stop immediately upon confirmed delivery via any channel And if TTL expires without delivery, the attempt is marked Failed and an escalation notification to the supervisor is enqueued And the queue is durable across restarts and preserves per-recipient ordering
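The retry schedule above can be sketched as a small helper that derives the fire times and applies the TTL cap. This is an illustrative sketch, not CarePulse code; the function name and minute-based units are assumptions.

```python
# Retry schedule from the acceptance criteria: 1m, 5m, 15m, 30m, 60m, 60m,
# capped by a 2-hour TTL, whichever limit is reached first.
BACKOFF_MINUTES = [1, 5, 15, 30, 60, 60]
TTL_MINUTES = 120

def retry_offsets(backoff=BACKOFF_MINUTES, ttl=TTL_MINUTES):
    """Return the minutes-after-enqueue at which each retry fires.

    Attempts whose cumulative delay would exceed the TTL are dropped; the
    caller would then mark the notification Failed and enqueue the
    supervisor escalation, per the criteria above.
    """
    offsets, elapsed = [], 0
    for delay in backoff:
        elapsed += delay
        if elapsed > ttl:
            break
        offsets.append(elapsed)
    return offsets
```

Note that with this schedule the sixth attempt would land at 171 minutes, so the 2-hour TTL is the binding limit and only five retries actually fire.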
Throttling and Deduplication of Swap Alerts
Given multiple hot swap events occur for the same visit within a short period When notifications are generated Then at most one notification per recipient per visit is sent within a 5-minute window And subsequent events within the window are summarized into a single update at window close with a count of swaps and latest status And exact duplicates (same event ID, recipient, channel) are never re-sent And manual resend requests bypass dedup only with a unique reason code captured in logs
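A minimal sketch of the per-recipient, per-visit throttle window described above (class and method names are assumptions; a production version would also persist the window state and emit the summary at window close):

```python
import time

WINDOW_SECONDS = 300  # 5-minute throttle window per recipient per visit

class SwapAlertThrottle:
    """At most one immediate alert per (recipient, visit) per window;
    later events inside the window are counted for a summary update."""

    def __init__(self, now=time.time):
        self.now = now
        self._windows = {}  # (recipient, visit) -> (window_start, extra_count)

    def should_send_now(self, recipient_id, visit_id):
        key = (recipient_id, visit_id)
        t = self.now()
        entry = self._windows.get(key)
        if entry is None or t - entry[0] >= WINDOW_SECONDS:
            self._windows[key] = (t, 0)
            return True   # first event in a fresh window: send immediately
        start, count = entry
        self._windows[key] = (start, count + 1)
        return False      # suppressed; summarized at window close

    def summary_count(self, recipient_id, visit_id):
        entry = self._windows.get((recipient_id, visit_id))
        return entry[1] if entry else 0
```

Exact-duplicate suppression (same event ID, recipient, channel) would sit in front of this window as a separate idempotency check.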
Delivery Logging and Receipt Confirmations
Given notifications have been dispatched When an auditor or authorized user views the notification log Then 100% of attempts include: event ID, visit ID, recipient ID, role, channel, template ID, timestamps (queued, dispatched, provider accepted, delivered, opened where available), outcome, retry count, dedup/throttle actions, and actor And push read/open receipts are captured when supported; SMS delivery receipts when supported; email open tracking when policy permits And logs are exportable within 24 hours and retained for 7 years And access to logs enforces least privilege and redacts PHI for unauthorized viewers

PairLock

Client-locked pairing using QR codes and geofenced checks to prevent cross‑patient binds. Warns on mismatches, quarantines unknown devices, and enforces unpair workflows with reason codes. Protects data integrity and makes audits straightforward.

Requirements

Secure QR Client-Locked Pairing
"As a caregiver, I want to pair devices to the correct client by scanning a QR code so that I can avoid mistakes and start my visit quickly."
Description

Implements a secure, client-specific QR-based pairing flow that binds caregiver apps to approved devices/sensors only for the intended client. QR codes encode an ephemeral, signed token (client ID, visit ID, scope, expiration) with no PHI, verified on-device before establishing the association over BLE/NFC/Wi‑Fi. Tokens are single-use and time-bound to prevent reuse. The flow prevents manual identifier entry, guides caregivers through a simple scan-and-confirm UI, and writes an immutable, tamper-evident pairing record containing token metadata, device fingerprint, caregiver ID, and timestamps. Integrates with CarePulse scheduling and client profiles to fetch valid tokens and ensures pairing is tied to the active visit context.
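The token lifecycle above can be sketched as follows. The spec calls for public-key signature verification; this sketch stands in an HMAC over a shared key purely for illustration, and every name here (field names, `mint_pairing_token`, reason strings) is an assumption.

```python
import base64, hashlib, hmac, json, time

def mint_pairing_token(key, client_id, visit_id, token_id, ttl_s=300):
    # Payload carries only client ID, visit ID, scope, expiration, and a
    # token ID -- no PHI, per the requirement above.
    body = json.dumps({"client_id": client_id, "visit_id": visit_id,
                       "scope": "device-pairing", "exp": time.time() + ttl_s,
                       "token_id": token_id}).encode()
    sig = hmac.new(key, body, hashlib.sha256).hexdigest().encode()
    return base64.urlsafe_b64encode(body + b"." + sig)

def verify_pairing_token(token_b64, key, expected_client, expected_visit, used_ids):
    """Validate a scanned token; returns (ok, reason). Signature is checked
    before any field is trusted, and token IDs are strictly single-use."""
    try:
        raw = base64.urlsafe_b64decode(token_b64)
        body, sig = raw.rsplit(b".", 1)
    except Exception:
        return False, "malformed"
    expected_sig = hmac.new(key, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected_sig):
        return False, "invalid_signature"
    claims = json.loads(body)
    if claims.get("exp", 0) <= time.time():
        return False, "expired"
    if claims.get("scope") != "device-pairing":
        return False, "wrong_scope"
    if claims.get("client_id") != expected_client or claims.get("visit_id") != expected_visit:
        return False, "client_or_visit_mismatch"
    if claims["token_id"] in used_ids:
        return False, "already_used"
    used_ids.add(claims["token_id"])  # mark single-use only after all checks pass
    return True, "ok"
```

Marking the token used only after every other check passes matches the criterion that invalid or expired tokens are never consumed.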

Acceptance Criteria
Successful Pairing via Secure QR during Active Visit
Given a caregiver is checked into an active visit for client C with visit ID V and is within the configured geofence radius When the caregiver taps "Pair Device", scans a QR code token for client C and visit V, and confirms Then the app verifies the token signature against the current public key, validates token fields (client ID=C, visit ID=V, scope=device-pairing, expiration > current time), and confirms the QR payload contains no PHI And manual identifier entry is unavailable/disabled throughout the flow And the app establishes the device association over the available transport (BLE/NFC/Wi‑Fi) within 10 seconds of confirmation And the token is marked single-use and cannot be reused And a confirmation screen displays client C, visit V, and the device fingerprint And an immutable pairing record is written containing token ID, client ID, visit ID, scope, expiration, device fingerprint, caregiver ID, and timestamps (scan and pair) And the pairing is visible in the active visit context
Token Invalid, Expired, or Reused — Pairing Blocked
Given a caregiver attempts to pair during an active visit When the scanned QR token is expired OR the signature is invalid OR the token ID has already been used Then the app blocks pairing and shows an explicit error reason (Expired, Invalid Signature, or Already Used) And no device association is created And invalid/expired tokens are not marked as used; already-used tokens remain used And an audit event is recorded with token ID (if present), caregiver ID, timestamp, failure reason, and client/visit context And the caregiver is offered options to rescan or contact support
Cross-Client or Out-of-Geofence Scan — Pairing Prevented
Given a caregiver is in an active visit for client B with visit ID VB When the caregiver scans a token for a different client A or the device is outside the configured geofence Then the app blocks pairing and displays a mismatch warning specifying the detected client/geofence issue And no pairing record is created; instead a failed-attempt audit event is logged with caregiver ID, location, scanned token client ID, visit context, and timestamp And the UI provides actions to switch to the correct visit (if available) or dismiss
Manual Identifier Entry Disabled
Given the caregiver is in the device pairing flow When the caregiver attempts to type or paste a device MAC/ID or bypass the scan step Then the app prevents manual entry and paste actions and keeps the scan step mandatory And the only permitted initiation methods are camera QR scan or NFC tap-to-scan And without a valid scanned token, the "Confirm Pair" action remains disabled And an accessibility-friendly scan alternative is available (e.g., high-contrast, torch, voice guidance)
Immutable Tamper-Evident Pairing Record Created
Given a successful pairing has completed When an auditor or admin retrieves the pairing record via UI or API Then the record contains token metadata (token ID, client ID, visit ID, scope, expiration), device fingerprint, caregiver ID, and timestamps (scan, pair) And the record includes a cryptographic hash and previous-hash reference for tamper evidence And any attempt to modify the record via API is rejected with 403; corrections require an append-only superseding record with a reason code and linkage to the original And exported audit reports show the original and any superseding records with full lineage
Unknown Device Quarantine and Enforced Unpair with Reason Codes
Given a client's profile has an approved device list and quarantine policy enabled When a caregiver attempts to pair a device not on the approved list Then the app places the device in quarantine state, blocks activation, and notifies the designated supervisor/admin And an audit event records device fingerprint, caregiver ID, client/visit context, and quarantine reason And only a user with Supervisor or higher role can approve activation by entering a required reason code and confirming And when unpairing any device, the app requires selecting a reason code from a configured list, captures caregiver ID and timestamp, and writes an immutable unpair record And a quarantined or unpaired device cannot automatically re-pair without a new valid token and required approvals
Geofence Validation & Visit-Window Enforcement
"As an operations manager, I want PairLock to validate location and timing when pairing so that cross-patient binds and early or late misuse are prevented."
Description

Validates each pairing attempt against configurable client geofences and scheduled visit windows to prevent cross-patient binds and misuse. Uses on-device GPS (with mock-location detection), network fallback, and accuracy thresholds to confirm the caregiver is within the allowed radius and within the appointment time (with grace periods). Policy-driven outcomes allow block, warn, or supervisor override. Captures precise lat/long, accuracy, and timestamp in the pairing record. Integrates with route sync and scheduling so validations align with live routes and planned visits; supports per-client radius configuration and agency-wide defaults.
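The radius-and-accuracy check described above might look like the following sketch (a haversine distance against the geofence center, with a per-source accuracy gate; the threshold values shown are illustrative agency defaults, not figures from this spec):

```python
import math

def distance_meters(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula (Earth radius ~6371 km)."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_check(fix, fence_center, radius_m, gps_max_acc_m=25, net_max_acc_m=100):
    """Apply the accuracy threshold for the fix's source, then the radius test.

    `fix` is (lat, lon, accuracy_m, source). Whether an OutsideGeofence
    result blocks, warns, or requires an override is policy-driven and
    handled by the caller.
    """
    lat, lon, acc, source = fix
    max_acc = gps_max_acc_m if source == "gps" else net_max_acc_m
    if acc > max_acc:
        return "BlockedAccuracy"
    d = distance_meters(lat, lon, *fence_center)
    return "InsideGeofence" if d <= radius_m else "OutsideGeofence"
```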

Acceptance Criteria
Successful pairing within geofence and scheduled visit window
Given a caregiver initiates PairLock for client C with an active appointment A today And the device location is available via GPS with accuracy <= configuredGpsAccuracyMeters or via network with accuracy <= configuredNetworkAccuracyMeters And the caregiver's location distance to client C geofence center <= geofenceRadiusMeters And the current time is within A.startTime to A.endTime in the client's time zone When the caregiver confirms pairing Then the system approves the pairing without warnings And the pairing record stores latitude, longitude, accuracyMeters, locationSource, timestampUTC, appointmentId, clientId, geofenceRadiusMetersUsed, policyVersion, and outcome = Approved
Block pairing when outside geofence under Block policy
Given agency policy for out-of-geofence attempts is Block And a caregiver initiates PairLock for client C with an active appointment A And device location accuracy meets configured thresholds And the calculated distance to client C geofence center > geofenceRadiusMeters When the caregiver attempts to pair Then the system blocks pairing with an error indicating Outside geofence And no pairing link is established And an audit event is recorded with latitude, longitude, accuracyMeters, timestampUTC, distanceMeters, clientId, appointmentId, policyVersion, and outcome = Blocked
Warn and allow pairing within configured grace period
Given agency policy for in-grace-window attempts is WarnAllow And a caregiver initiates PairLock for client C with appointment A today And device location accuracy meets configured thresholds and distance to geofence center <= geofenceRadiusMeters And current time is within [A.startTime - preVisitGraceMinutes, A.startTime) or within (A.endTime, A.endTime + postVisitGraceMinutes] in the client's time zone When the caregiver acknowledges the warning prompt Then the system allows pairing And the pairing record includes warningType = GraceWindow, userAcknowledged = true, acknowledgmentTimestampUTC, and all standard location fields; outcome = ApprovedWithWarning
Supervisor override for outside visit window beyond grace
Given agency policy for out-of-window-beyond-grace attempts is SupervisorOverride And a caregiver initiates PairLock for client C And device location accuracy meets thresholds and distance to geofence center <= geofenceRadiusMeters And no current appointment exists in the client's time zone within the configured grace windows When a supervisor authenticates successfully and enters an override reason code Then the system approves pairing under override And the record includes overrideByUserId, overrideReasonCode, overrideTimestampUTC, supervisorAuthMethod, and all standard location fields; outcome = Overridden
Enforce accuracy thresholds with network fallback
Given a caregiver initiates PairLock for client C And GPS fix accuracy > configuredGpsAccuracyMeters And the system attempts network-based location and accuracy > configuredNetworkAccuracyMeters And this process continues until maxWaitSeconds elapse When no location source meets accuracy thresholds within maxWaitSeconds Then the system prevents pairing and displays Location accuracy insufficient And the attempt is logged with lastKnownLatitude, lastKnownLongitude, gpsAccuracyMeters, networkAccuracyMeters, sourcesTried[], attemptsCount, elapsedMilliseconds, and outcome = BlockedAccuracy
Detect and block mock-location usage
Given the device reports mock-location enabled or spoofing indicators are detected by the app When the caregiver attempts to pair Then the system blocks pairing, flags the session for review, and notifies the designated supervisor per notification policy And the record includes mockLocationDetected = true, spoofIndicators[], timestampUTC, deviceId, userId, and outcome = BlockedMockLocation
Validate against live routes and rescheduled appointments
Given the caregiver's route and schedule have been synced within syncIntervalMinutes And appointment A for client C has updated start/end times When the caregiver attempts to pair Then validation uses the latest A.startTime/A.endTime and geofenceRadiusMeters where client-specific radius overrides agencyDefaultRadiusMeters when set And the pairing decision reflects the updated schedule And the record stores scheduleVersionId, scheduleSyncedAtUTC, radiusSource = Client or AgencyDefault, and outcome consistent with validation
Unknown Device Quarantine & Approval Queue
"As an admin, I want unrecognized devices to be quarantined until approved so that only trusted hardware is used with clients."
Description

Automatically quarantines unrecognized devices on detection, blocking pairing until an admin approves. Builds a device fingerprint from MAC/serial/BLE IDs and classifies by sensor type. Presents an admin approval queue with device details, attempted client, caregiver, time, and geolocation. Supports allowlists/denylists and auto-approval rules by vendor/model. Sends notifications to admins for pending approvals and logs quarantine reasons and outcomes. Approved devices can be assigned to a client or the organization with effective dates; denied devices are blocked with rationale recorded in compliance logs.

Acceptance Criteria
Auto-Quarantine on First Detection
Given an unrecognized device (not in org allowlist or approved devices) attempts to pair or transmit within an org geofence When the system detects the device's MAC, serial, and BLE IDs Then the device is placed in quarantine within 5 seconds And pairing is blocked and a warning is shown to the caregiver in-app within 2 seconds of the block And a device fingerprint is computed from MAC, serial, and BLE IDs and stored with quarantine reason "Unrecognized device" And the attempted client ID, caregiver ID, UTC timestamp, and geolocation are captured And an approval queue item is created and visible to org admins
Fingerprint Generation and Uniqueness
Given two detection events from the same device with identical MAC, serial, and BLE IDs When fingerprints are generated Then the fingerprints are identical and map to the same quarantine record (no duplicate queue items for that device within 10 minutes) Given a validation dataset of 10,000 devices with varied MAC/serial/BLE combinations When fingerprints are generated Then all fingerprints are unique within the dataset (0 collisions) Given a detection where one identifier is missing When a fingerprint is generated Then canonicalization uses placeholders for missing fields and preserves uniqueness within the dataset
Sensor Type Classification
Given a quarantined device with vendor and model metadata or BLE advertisement data When classification runs Then a sensor type is assigned from the catalog if a vendor/model match exists And if no catalog match exists, sensor type is set to "Unknown" And the detected vendor and model values are preserved even when type is "Unknown" And the resulting sensor type is displayed in the approval queue item
Admin Approval Queue Content and Actions
Given an org admin opens the approval queue Then each item displays: device fingerprint, vendor, model, sensor type, attempted client, caregiver, UTC timestamp, geolocation (lat/long and geofence name), and quarantine reason And actions Approve and Deny are available on Pending items When Approve is selected Then the admin must choose assignment to a specific client or the organization and set an effective start date (default now) and optional end date And if effective start is now or in the past, the device exits quarantine immediately and may pair only according to the selected assignment and PairLock constraints And if effective start is in the future, the device remains blocked until that start date/time When Deny is selected Then the admin must select a reason code from a configurable list and may add optional notes And the device fingerprint is added to the org denylist and the queue item is marked Resolved
Allowlists, Denylists, and Auto-Approval Rules
Given an auto-approval rule for vendor/model A When a matching device is detected within an org geofence Then the device is auto-approved and assigned to the organization with effective start = detection time And the approval queue records the item as Resolved with decision "Auto-Approved" and the triggering rule ID Given a denylist rule for vendor/model B When a matching device is detected Then the device is blocked within 2 seconds with reason "Denied by policy" And no manual approval is allowed; the queue item is marked Resolved with decision "Denied by policy" And denylist rules take precedence over allowlist/auto-approval rules when both match Given a per-fingerprint allowlist entry When the matching device is detected Then the device is auto-approved even if the vendor/model has no allow rule
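The precedence rules above reduce to a small decision function, sketched here (decision labels and parameter names are assumptions):

```python
def evaluate_device_policy(fingerprint, vendor_model,
                           denylist_vm, auto_approve_vm, allowlist_fp):
    """Decision order per the criteria: denylist rules win over allow and
    auto-approval rules; a per-fingerprint allowlist entry approves even
    when the vendor/model has no allow rule; anything unmatched is
    quarantined for manual review."""
    if vendor_model in denylist_vm:
        return "DeniedByPolicy"
    if fingerprint in allowlist_fp or vendor_model in auto_approve_vm:
        return "AutoApproved"
    return "Quarantined"
```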
Admin Notifications for Pending Approvals
Given admin notification preferences are enabled When a new quarantine item requiring manual review is created Then all on-call org admins receive an in-app notification immediately and an email within 2 minutes And a device does not generate more than one notification per admin within a 30-minute window (deduplication) And tapping/clicking the notification opens the specific approval item And auto-approved or policy-denied items do not trigger manual-approval notifications
Compliance Logging and Outcomes
Given any quarantine, approval, denial, auto-approval, or block event occurs When the event is committed Then a compliance log entry is recorded containing: event type, device fingerprint, vendor, model, sensor type, caregiver ID, attempted client ID, UTC timestamp, geolocation, rule ID (if any), admin actor (if any), decision, and reason code/message And log entries are immutable; any correction creates a new append-only entry linked to the original And exporting logs to CSV for a selected date range returns entries that match the on-screen log for that range exactly (0 discrepancy) And the number of Resolved items in the approval queue for a date range equals the number of corresponding decision log entries for the same range
Mismatch Detection & Real-time Alerts
"As a supervisor, I want to be alerted immediately when a pairing mismatch occurs so that I can intervene and maintain data integrity."
Description

Detects and responds to pairing mismatches including client-device mismatches, out-of-geofence attempts, reused or expired tokens, or caregiver not assigned to the visit. Provides immediate in-app warnings with clear resolution steps, enforces policy-based blocks, and issues real-time notifications (push/email) to supervisors. Automatically creates incident records with metadata (who, when, where, device fingerprint, visit context) and requires acknowledgment. Includes alert throttling and escalation rules for repeated violations. Integrates with CarePulse notifications, compliance dashboards, and audit logging.

Acceptance Criteria
Client-Device Mismatch at Pairing
Given a caregiver is in an active visit for Client B and scans a device QR that is registered to Client A When the caregiver attempts to pair Then the app displays a blocking banner "Client-device mismatch" within 2 seconds and shows step-by-step resolution (unpair, switch visit, or rescan correct QR) And pairing is blocked until the mismatch is resolved per agency policy And an incident record is auto-created with caregiverId, expectedClientId, attemptedClientId, visitId, timestamp (UTC), GPS lat/lon/accuracy, deviceFingerprint, appVersion, and policySnapshot And a push notification is sent to assigned supervisor(s) within 10 seconds and an email within 60 seconds And the incident requires supervisor acknowledgment before the same device can be paired to the client again within the next 15 minutes if enforceAcknowledgment=true
Out-of-Geofence Pairing Attempt
Given the device has a location fix with accuracy ≤ 30 meters and the caregiver is ≥ 20 meters outside the configured geofence radius for the scheduled visit When the caregiver attempts to pair or start the visit Then the app shows a blocking warning "Out of geofence" within 2 seconds And if blockOutOfGeofence=true, pairing/start is blocked; otherwise the caregiver must select an override reason code and add an optional note to proceed And location (lat, lon, accuracy), geofenceId, and reason (blocked/overridden) are stored with the incident record And push and email notifications are sent to supervisor(s) within 10 and 60 seconds respectively
Reused or Expired Token Handling
Given a QR/token is scanned that is expired or previously redeemed When the scan is processed Then the app blocks pairing and displays "Token expired or already used" within 2 seconds with steps to request a new token And the token is invalidated (cannot be redeemed again) And the device attempting to bind via the invalid token is quarantined and cannot be paired to any client until a supervisor releases it via the Unpair/Release workflow with a reason code And an incident is created with tokenId, reason (expired/reused), caregiverId, clientId (if known), visitId (if any), timestamp (UTC), location, and deviceFingerprint And supervisor(s) receive push within 10 seconds and email within 60 seconds
Caregiver Not Assigned to Visit
Given a caregiver is not assigned to the scheduled visit for the selected client in the current time window When the caregiver attempts to pair a device or start the visit Then the app blocks the action and displays "Not assigned to this visit" with clear resolution steps (contact scheduler or request temporary assignment) And if policy allowTemporaryAssignment=true, an approval request is sent to the supervisor; until approved, the action remains blocked And an incident is created with assignmentStatus=unassigned and full metadata; supervisor(s) are notified via push/email within 10/60 seconds And the compliance dashboard shows the incident within 1 minute of creation
Alert Throttling and Escalation
Given multiple violations of the same type occur from the same caregiver or device within a 15-minute window When the Nth violation occurs (default N=3, configurable 1–5) Then the system throttles duplicate push/email to at most one notification every 5 minutes per violation type and source And subsequent events are aggregated into a digest notification sent every 15 minutes while violations continue And an escalation alert is sent to the on-call manager when thresholdExceeded=true (e.g., ≥3 violations within 30 minutes), with severity upgraded And the compliance dashboard aggregates counts while the audit log retains every individual event with unique IDs
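The sliding-window escalation threshold can be sketched like this (class name and timestamp-based API are assumptions; defaults mirror the ≥3-violations-in-30-minutes example above):

```python
from collections import defaultdict, deque

class ViolationEscalator:
    """Count violations per (source, type) in a sliding window and flag
    escalation once the threshold is reached."""

    def __init__(self, threshold=3, window_s=1800):
        self.threshold = threshold
        self.window_s = window_s
        self._events = defaultdict(deque)  # (source, vtype) -> timestamps

    def record(self, source, vtype, now):
        """Record one violation at time `now` (seconds); returns True when
        the window count reaches the threshold, i.e. when the escalation
        alert to the on-call manager should be sent."""
        q = self._events[(source, vtype)]
        q.append(now)
        while q and now - q[0] > self.window_s:
            q.popleft()  # age out events older than the window
        return len(q) >= self.threshold
```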
Incident Record Completeness and Acknowledgment
Given any mismatch-related alert is generated When the incident record is saved Then it contains: incidentId, incidentType, reason, caregiverId, clientId, visitId, deviceFingerprint, deviceOS, appVersion, timestamp (UTC), GPS lat/lon/accuracy, geofenceId (if applicable), policySnapshot, networkStatus, notificationStatus, and acknowledgmentStatus And a supervisor must acknowledge the incident before it is marked Resolved; reminders are sent at 2, 8, and 20 hours until acknowledgment or closure by policy exception And acknowledgment captures userId, timestamp, resolutionCode (from list), and optional note; all changes are audit-logged and immutable
Notifications, Dashboard, and Audit Integration
Given mismatch events occur When viewing the CarePulse notification center Then alerts display correct severity, read/unread state, and deep link to the incident detail And When filtering the compliance dashboard by date range, client, caregiver, or violation type Then incident counts and lists match the audit log totals for the same filters within ±1 minute of UTC timestamps And exporting incidents to CSV includes all required metadata fields and completes within 15 seconds for up to 5,000 incidents And the audit log exposes an immutable entry per event with request/response hashes and actor IDs, retrievable via API within 30 seconds for up to 10,000 events
Enforced Unpair Workflow with Reason Codes & Audit Trail
"As a compliance officer, I want unpair actions to require reason codes and be fully logged so that audits are straightforward and defensible."
Description

Provides a guided unpairing process requiring selection of standardized reason codes (e.g., device replacement, client discharge, correction of error) with optional notes, photos, and supervisor approval for high-risk categories. Enforces role-based permissions to restrict who can unpair and under what conditions. Captures full context (user, timestamps, geolocation, device fingerprint, associated visit) and records to an immutable audit log. Prevents silent or accidental unpairing and syncs all unpair events into compliance reports and client/device histories.

Acceptance Criteria
Role-Based Unpair Permissions Enforcement
Given a user is authenticated and viewing a client’s paired device details And the user lacks the Unpair permission for that client/device per RBAC policy When the user attempts to initiate the Unpair workflow Then the Unpair action is not shown or is disabled And any direct API attempt returns 403 Forbidden with code UNPAIR_FORBIDDEN And the UI displays "You do not have permission to unpair this device" And no change occurs to the pairing state And an access-denied event is logged with userId, role, clientId, deviceId, timestamp Given a user has the Unpair permission and all preconditions are met When the user clicks Unpair Then the Unpair Wizard opens at Step 1 (Reason Selection)
Guided Unpair With Reason Codes, Notes, and Media
Given the Unpair Wizard is open at Step 1 When the user has not selected a reason code Then the Continue button remains disabled Given standardized reason codes are loaded [Device Replacement, Client Discharge, Correction of Error, Reassignment within Client, Other] When the user selects Other Then a Notes field becomes required with a minimum of 10 characters Given optional evidence upload is available When the user attaches photos Then only JPG/PNG up to 3 files and 5 MB each are accepted; others are rejected with an error message Given the user proceeds to the Review step When they click Confirm Unpair Then a confirmation modal requires explicit confirmation (checkbox + Confirm button) and shows client, device, reason summary And if the user cancels, no changes are made Given the user confirms When the request is valid and any required approvals are satisfied Then the device is unpaired And the unpair event includes reasonCode, notes, media references And the UI shows a success state within 2 seconds Given the user attempts to back out after making changes When they navigate away Then a prompt warns of unsaved unpair changes to prevent accidental unpair initiation
High-Risk Unpairs Require Supervisor Approval
Given the user selects a high-risk reason [Client Discharge, Correction of Error] When proceeding to submit Then the system requires supervisor approval before executing unpair Given an approval request is generated When routed Then only users with role=Supervisor (or higher) and in the same organization scope can approve/deny And the approver must provide an approval decision and optional comment Given the approval request remains pending When 15 minutes elapse without decision Then the unpair request expires and is marked TIMED_OUT and no unpair occurs Given the supervisor approves When the decision is recorded Then the unpair executes immediately And the audit trail captures approverId, decision, comment, timestamps Given the supervisor denies When the decision is recorded Then the unpair is blocked And the requester is notified with denial reason
Client Match & Geofence Validation With Blocking Rules
Given the device is currently paired to Client A When a user initiates unpair from Client B’s profile Then the system blocks the workflow with message "Device not paired to this client" And returns error code CLIENT_MISMATCH Given the user is outside the client’s geofenced service area When initiating unpair Then the system warns "Outside client geofence" And if reason is high-risk, supervisor approval is required regardless of user role And if reason is not high-risk, the user may proceed only if they provide a note (min 10 characters) explaining the location exception Given location services are denied or unavailable When initiating unpair Then the system requires the user to either enable location or obtain supervisor approval to proceed
Full Context Capture for Unpair Event
Given an unpair action completes (success or failure) When recording the event Then the system captures: userId, userRole, clientId, deviceId/serial, visitId (if within scheduled window ±60 minutes), appVersion, deviceFingerprint, networkState (online/offline), geolocation (lat, long, accuracy), geofenceStatus (inside/outside/unknown), timestamps (started, approved, completed), reasonCode, notes, mediaRefs, approval metadata (if any), result (success/denied/timeout/error), errorCode (if any) Given geolocation is captured When accuracy > 100 meters Then geofenceStatus is set to unknown and flagged in the audit record Given the event is stored When retrieved Then all fields are present and non-null where required (userId, clientId, deviceId, reasonCode, timestamps.started, timestamps.completed, result)
Immutable Audit Log and Append-Only Corrections
Given an unpair event is written to the audit log When any user attempts to edit or delete it Then the system prevents modification of the original record And returns 405 Method Not Allowed in the API and shows "Audit records are immutable" in UI Given a correction is needed When submitted Then a new audit record is appended with type=CORRECTION referencing the original eventId and includes who/when/why And both records are linked bidirectionally Given audit export is generated When computing integrity Then each record includes a content hash and sequence number to provide tamper evidence And any mismatch between stored hash and recomputed hash is flagged in the export summary
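The content-hash-plus-previous-hash scheme above is a classic hash chain; a minimal sketch (function names and JSON canonicalization are assumptions):

```python
import hashlib, json

GENESIS = "0" * 64

def append_audit(chain, record):
    """Append a record whose SHA-256 content hash is linked to the
    previous entry's hash, giving the tamper evidence required above.
    `chain` is an append-only list of dicts."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(record, sort_keys=True)
    h = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev, "hash": h, "seq": len(chain)})

def verify_chain(chain):
    """Recompute every hash; any edit to a stored record, or any broken
    link, makes verification fail -- the mismatch an export would flag."""
    prev = GENESIS
    for entry in chain:
        body = json.dumps(entry["record"], sort_keys=True)
        if entry["prev_hash"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

A CORRECTION entry in this model is just another appended record referencing the original's event ID; nothing upstream is ever rewritten.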
Reporting Sync to Compliance and Histories (Online/Offline)
Given an unpair completes while online When 5 minutes elapse Then the event appears in the Compliance > Unpair Events report, the client timeline, and the device history And it is included in the next downloadable audit CSV export Given an unpair completes while offline When connectivity is restored Then the event syncs within 5 minutes And the synced record preserves the original timestamps and offline flag Given the compliance report is filtered by date range, client, device, and reason When filters are applied Then the unpair event is returned when it matches all selected filters Given a scheduled weekly compliance export When the export runs Then all unpair events from the period are included with full context fields and approval metadata
Audit-Ready Pairing Reports & Exports
"As an agency owner, I want audit-ready pairing reports so that I can quickly satisfy payor and regulator requests."
Description

Generates configurable, one-click reports that summarize pairing and unpairing activity by client, caregiver, device, and date range, including geofence validations, visit linkage, violations, quarantine outcomes, and reason codes. Produces exportable CSV and PDF with digital signatures, timestamps, and integrity checksums. Supports secure, expiring share links and fine-grained filters. Integrates with CarePulse’s reporting module, enabling agencies to satisfy auditor and payor requests quickly with clear, traceable evidence.

Acceptance Criteria
One-Click PDF Report for Auditor Requests
Given an authorized agency user in Reporting > PairLock > Pairing Reports with at least one pairing/unpairing event in the selected date range When they click "Generate PDF" with filters for client(s), caregiver(s), device(s), and date range Then a paginated, non-editable PDF is generated within 10 seconds for ≤10,000 events or within 60 seconds for ≤100,000 events And the PDF header includes agency name, report title, generated-by user, UTC timestamp, and selected filter parameters And each event row includes event type (pair/unpair), event ID, client ID/name, caregiver ID/name, device ID/serial, event timestamp (UTC and local agency timezone), geofence validation result (Pass/Fail/Skipped) with lat/long and distance from perimeter in meters, linked visit ID (or "None"), violation code(s) if any, quarantine status/outcome, and unpair reason code (if applicable) And the PDF footer displays page X of Y and the report SHA-256 checksum And the PDF is digitally signed (X.509) with a CarePulse certificate; signature status is "Valid" in common PDF readers
CSV Export with Integrity Artifacts
Given an authorized user views pairing/unpairing results with active filters When they select "Export CSV" Then a UTF-8, RFC 4180–compliant CSV downloads with one header row and one data row per matching event And columns mirror the PDF schema; timestamps are ISO 8601 in UTC; distances are meters with two decimals; booleans are TRUE/FALSE And a separate .sha256 checksum file is provided and matches the CSV's SHA-256 hash And values with commas/quotes are properly quoted; delimiter is comma; line endings are CRLF And exports >100,000 rows are streamed and complete within 2 minutes or return a retryable job link with status polling
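To make the export contract concrete, here is a minimal sketch of producing an RFC 4180–style CSV (CRLF line endings, comma delimiter, fields quoted only when needed) together with the SHA-256 digest for the companion .sha256 artifact. Function and column names are illustrative, not the CarePulse API.

```python
import csv
import hashlib
import io

def export_csv_with_checksum(rows, header):
    """Write an RFC 4180-style CSV (CRLF endings, minimal quoting) and
    return (csv_bytes, sha256_hex) for the companion .sha256 file."""
    buf = io.StringIO()
    writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL, lineterminator="\r\n")
    writer.writerow(header)
    for row in rows:
        writer.writerow(row)
    data = buf.getvalue().encode("utf-8")  # UTF-8 per the criterion
    return data, hashlib.sha256(data).hexdigest()

# Hypothetical event row: a value containing a comma and quotes gets
# quoted with doubled quote characters, as RFC 4180 requires.
data, digest = export_csv_with_checksum(
    [["pair", "evt-1", 'Client "A", Inc.', "TRUE", "12.50"]],
    ["event_type", "event_id", "client_name", "geofence_pass", "distance_m"],
)
```

Verifying the download then reduces to recomputing the SHA-256 of the CSV bytes and comparing it to the .sha256 file's contents.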
Fine-Grained Filters and Result Parity
Given filters for client, caregiver, device, date range, event type, violation status, quarantine status, geofence outcome, and visit linkage When any combination of filters is applied Then the on-screen table, PDF, and CSV include exactly and only events satisfying all selected filters And date range supports absolute and relative presets (Today, Last 7 days, Custom) and is inclusive of start and exclusive of end at millisecond precision And an empty result set renders a valid PDF stating "0 events" and a CSV with only the header row And clearing filters resets to default (Last 7 days; all entities) and refreshed counts render within 2 seconds on typical 4G networks
Secure Expiring Share Links
Given a generated report artifact (PDF and/or CSV) When the user creates a share link with an expiry between 15 minutes and 30 days Then a single-purpose, unguessable URL is created with a time-limited token and no PHI in the URL And the link grants read-only access to the specified artifact(s) only; attempts after expiry or revocation return HTTP 410 with no body And accesses are logged with timestamp, IP, user agent, and user (if authenticated) and are viewable in the report's access log And the owner can revoke the link; revocation propagates globally within 60 seconds And downloaded artifacts via share links have identical SHA-256 hashes to the originals
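A minimal sketch of the share-link mechanics above, assuming a server-side record per link: an unguessable token (no PHI in the URL), a bounded TTL, and revocation that yields the same HTTP 410 as expiry. Class and field names are illustrative.

```python
import secrets
import time

class ShareLink:
    """Illustrative single-purpose expiring share link: unguessable
    token, 15-minute-to-30-day TTL, and owner revocation."""
    def __init__(self, artifact_id, ttl_seconds):
        # Enforce the expiry window from the criterion: 15 min .. 30 days
        assert 15 * 60 <= ttl_seconds <= 30 * 24 * 3600
        self.artifact_id = artifact_id
        self.token = secrets.token_urlsafe(32)  # 256 bits; no PHI in URL
        self.expires_at = time.time() + ttl_seconds
        self.revoked = False

    def access(self, now=None):
        """Return an HTTP-style status: 200 while valid, 410 after
        expiry or revocation (per the criterion, with no body)."""
        now = time.time() if now is None else now
        if self.revoked or now >= self.expires_at:
            return 410
        return 200
```

Each access attempt would additionally be appended to the report's access log (timestamp, IP, user agent, user), which this sketch omits.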
Audit Traceability and Cross-References
Given any event included in a report When an auditor requests verification Then the report displays an immutable event ID, linked visit ID (or None), caregiver and client identifiers, and device identifier And deep-links (role-gated) to the visit, caregiver, client, and device records are available to authorized users And geofence validations display perimeter ID/name, method (GPS/Wi‑Fi/Cell), measured distance in meters, and decision outcome And violations and quarantines list violation code(s), quarantine resolution (auto-release/manual override), resolution timestamp, and actor And unpair events require a reason code from a controlled list; optional free-text (≤256 chars) is included in exports when present
Reporting Module Integration and Permissions
Given a user with Reporting > PairLock permissions When they navigate to Reporting > Pairing Activity Then the "Audit-Ready Pairing Reports" tile is visible and opens within 2 seconds on broadband and 5 seconds on 4G And users without permission cannot see the tile and receive HTTP 403 on direct endpoint access And users can save, update, and reuse report configurations (filters/columns); defaults are user-scoped with optional team sharing And all generate/export/share actions are audit-logged with user, action, timestamp, and artifact identifiers
Offline Pairing with Deferred Validation & Reconciliation
"As a caregiver in low-connectivity areas, I want to pair devices offline with confidence that checks will run later so that I can continue my visit without delays."
Description

Enables caregivers to perform QR-based pairing while offline by capturing the token, device fingerprints, and local GPS, marking the association as provisional. Defers geofence, policy, and token checks until connectivity resumes, then reconciles automatically. If validation fails, triggers incident workflows, notifies supervisors, and optionally auto-unpairs per policy. Clearly indicates provisional vs. confirmed states in the UI and can block visit documentation submission until confirmation. Handles clock drift by requiring network time sync during reconciliation and preserves a robust local audit trail for offline periods.

Acceptance Criteria
Offline QR Scan Creates Provisional Pairing
Given a caregiver is authenticated in CarePulse and the device has no network connectivity When the caregiver scans a client QR code in PairLock Then the app captures and stores locally the QR token, device fingerprint(s), caregiver ID, client ID, GPS coordinates, and a timestamp within 5 seconds And the GPS sample has accuracy ≤ 50 meters; if not achieved, the scan is blocked and a retry prompt is shown And a provisional pairing record is created, encrypted at rest, and displayed in the UI with a clear “Provisional” badge And the caregiver cannot create a second pairing for another client until the provisional pairing is resolved (confirmed, canceled, or failed) And the provisional record persists across app restarts and device reboots
Automatic Reconciliation After Connectivity Restored
Given a provisional pairing exists on the device And network connectivity is restored When the app detects connectivity and successfully syncs network time Then the system validates the QR token (authentic, unexpired, matches client), policy rules (caregiver-client pairing allowed, device allowed), and geofence (within configured radius of client service location) And on success marks the pairing as Confirmed, updates the UI state within 1 second, and records a reconciliation event with server timestamp And the reconciliation process starts within 30 seconds of connectivity detection (foreground or background) And the local provisional record transitions to confirmed without user intervention
Validation Failure Triggers Incident and Auto-Unpair
Given a provisional pairing is undergoing reconciliation When any validation check fails (token mismatch/expired, geofence violation, policy violation, device not recognized) Then an incident record is created with failure reason, caregiver, client, device fingerprint, GPS, and timestamps And the caregiver and assigned supervisor receive an in-app alert immediately and an email notification within 60 seconds And if org policy “auto-unpair on validation failure” is enabled, the device is quarantined for that client and the pairing is auto-unpaired And the caregiver is required to acknowledge the incident banner before proceeding, and an unpair reason code is captured if a manual unpair occurs And the UI reflects a “Provisional - Failed” state until resolved, and all actions are audit-logged
Submission Blocked While Pairing Is Provisional
Given a caregiver is documenting a visit for a client with a provisional pairing When the caregiver attempts to submit the visit documentation Then submission is blocked with a clear message stating that pairing confirmation is required, with a deep link to the PairLock screen And once the pairing becomes Confirmed, the submit action is enabled without needing to re-enter data And if org policy “allow provisional submission” is enabled, the caregiver can submit but the record is flagged and routed for supervisor review And all block/allow decisions are persisted in the audit log with timestamps and policy version
Network Time Sync and Clock Drift Handling
Given a device has a provisional pairing and local time drift relative to network time When reconciliation begins Then the app performs a network time sync and records the drift value And if absolute drift > 2 minutes, reconciliation uses network time for validation windows and timestamps And if network time cannot be obtained within 30 seconds, the pairing remains Provisional and the caregiver is informed to retry; no confirmation occurs And the drift value and time sync outcome are written to the audit log and included in any incident created
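The drift rule above can be sketched as follows, assuming the time-sync handshake has already produced a signed offset between device and server clocks. The key invariants from the criteria: the original capture time is never modified, and network-corrected time is used for validation only when absolute drift exceeds 2 minutes. Names are illustrative.

```python
def reconcile_timestamp(local_capture_utc_ms, device_server_offset_ms,
                        max_drift_ms=2 * 60 * 1000):
    """Apply the 2-minute drift rule: flag a clock anomaly and use the
    network-corrected time for validation windows when drift is large,
    while preserving the original capture time for audit."""
    drift_ms = device_server_offset_ms
    clock_anomaly = abs(drift_ms) > max_drift_ms
    canonical_ms = (local_capture_utc_ms + drift_ms) if clock_anomaly \
                   else local_capture_utc_ms
    return {
        "original_capture_ms": local_capture_utc_ms,  # never rewritten
        "canonical_ms": canonical_ms,
        "drift_ms": drift_ms,
        "clock_anomaly": clock_anomaly,
    }
```

If the time sync itself fails within the 30-second window, reconciliation would abort and the pairing would stay Provisional, per the criterion.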
Offline Audit Trail Creation and Server Verification
Given any pairing-related action occurs while offline (scan, cancel, unpair, acknowledge) When the action is performed Then an append-only, hash-chained local audit entry is created containing action type, user, client, device fingerprint, QR token hash, GPS, and timestamps And upon reconnection, all pending audit entries are uploaded within 60 seconds And the server verifies hash-chain integrity and sequence; on any integrity failure, an incident is created and the affected pairing is flagged And uploaded audit entries appear in the PairLock Audit report, filterable by client, caregiver, device, and time range
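The append-only, hash-chained local audit trail can be illustrated with a simple SHA-256 chain: each entry's hash covers its payload plus the previous entry's hash, so tampering with or reordering any entry breaks server-side verification. Serialization and field choices here are assumptions, not the CarePulse format.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_audit_entry(chain, entry):
    """Append an entry whose hash chains to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(entry, sort_keys=True)  # canonical serialization
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"entry": entry, "prev": prev_hash, "hash": digest})
    return chain

def verify_chain(chain):
    """Server-side integrity check: recompute every link in sequence."""
    prev = GENESIS
    for link in chain:
        payload = json.dumps(link["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if link["prev"] != prev or link["hash"] != expected:
            return False
        prev = link["hash"]
    return True
```

On any verification failure the server would create an incident and flag the affected pairing, as the criterion specifies.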

Offline Buffer

Locally caches readings when connectivity hiccups occur, then backfills them to the chart with exact timestamps and provenance once online. Shows queued/posted status so caregivers know nothing was lost. Maintains clinical completeness and EVV accuracy in dead zones.

Requirements

On-device Encrypted Buffer
"As a caregiver working in dead zones, I want my visit data to save securely on my phone when offline so that nothing is lost and I can continue care without interruption."
Description

Locally stores all visit events (check-in/out, vitals readings, route pings, voice note attachments, sensor packets) when connectivity is unavailable or unstable. Uses an encrypted write-ahead log with per-record metadata (UTC timestamp, device clock offset, caregiver ID, visit ID, source type, geolocation sample, integrity hash). Writes are atomic and resilient to app restarts and OS kills. Integrates with CarePulse data models so records can be replayed exactly once to the server. Ensures continuity of documentation, preserves EVV-critical fields, and isolates PHI at rest with platform keychain/keystore.

Acceptance Criteria
Crash-Safe Atomic Writes During Offline Capture
Given the device has no connectivity or unstable connectivity And a visit is in progress capturing events of supported types When the app is force-closed or the OS kills the process during a write Then the write-ahead log persists either a fully committed record or no record for that write (no partial records) And after relaunch the buffer recovers without manual user action And previously committed records remain intact and in original order And no duplicate records are present
Per-Record Metadata Completeness and Format
Given an offline-captured event of each supported type (check-in, check-out, vitals reading, route ping, voice note attachment, sensor packet) When the event is written to the buffer Then the record contains: UTC timestamp, device clock offset, caregiver ID, visit ID, source type, geolocation sample or explicit reason code, integrity hash, and unique record ID And all fields conform to expected formats and value ranges And missing or invalid fields cause the write to be rejected with an explicit error, leaving the buffer unchanged
Exactly-Once Replay After Connectivity Restoration
Given N records are buffered while offline And the server supports idempotent ingest using the record ID and integrity hash When connectivity is restored Then records are uploaded in original capture order And each record is accepted exactly once by the server even under retry conditions And upon acknowledgment each record is removed from the buffer and not retried And if acknowledgment is not received, the client retries with the same idempotency identifiers until success without creating duplicates on the server
Encryption at Rest with Platform Keystore/Keychain
Given records are buffered on-device When inspecting on-disk artifacts outside the app sandbox Then no PHI appears in plaintext And WAL and attachment blobs are encrypted at rest using keys stored in the platform keystore/keychain And decryption is only possible within the app process when the device is unlocked And on app sign-out or uninstall, encryption keys are destroyed and buffered data becomes irrecoverable And buffered files are excluded from cloud backups according to platform guidelines
EVV-Critical Field Preservation on Backfill
Given check-in and check-out events are captured offline When they are replayed to the server Then the server receives the original UTC timestamps and geolocation samples captured at event time And EVV-required fields (caregiver ID, visit ID, timestamp, location sample or explicit reason code) are present and unchanged from capture And audit logs reflect provenance using the buffered record's metadata
Resume Behavior Under Flapping Connectivity
Given connectivity repeatedly alternates between offline and online states during a visit When events are captured and multiple partial replay attempts occur Then no events are lost or duplicated in the buffer or on the server And replay always resumes from the last acknowledged record without reordering And buffer write throughput remains unaffected by replay activity
Data Model Compatibility and Error Handling
Given buffered records map to CarePulse domain models When records are replayed Then each record validates against the API schema and is accepted without lossy transformation of metadata And records rejected by the server are left in the buffer with a precise error code and retry policy (e.g., do-not-retry vs retryable) And a rejected record does not block subsequent records from being replayed, and is tracked for later resolution
Backfill Sync Orchestrator
"As an operations manager, I want offline data to auto-sync accurately when staff regain signal so that charts and compliance reports stay complete without manual effort."
Description

Detects network restoration and backfills buffered records to the server in order, preserving original timestamps and provenance. Implements retry with exponential backoff, batching by visit, and idempotent upserts using deterministic record IDs. Validates server acknowledgement before purging local entries. Handles partial failures, resumes interrupted uploads, and provides hooks to update UI counters. Aligns with CarePulse APIs and emits events for charts and schedules to reconcile as data arrives.

Acceptance Criteria
Network Restoration Auto-Sync Trigger
Given the device has one or more buffered records and no active sync, When three consecutive connectivity health checks to the CarePulse API succeed within 10 seconds, Then the orchestrator starts backfill within 3 seconds and emits a sync_started event including visit_ids and buffered_count. Given there are zero buffered records, When connectivity is restored, Then no sync is initiated and no sync_* events are emitted for that device.
Visit-Scoped Batching
Given buffered records exist for multiple visits, When backfill begins, Then records are grouped strictly by visit_id and uploaded visit-by-visit with no cross-visit batches. Given a single visit has more than 100 buffered records, When uploading, Then the orchestrator splits them into sub-batches of at most 100 records each while preserving per-visit order across sub-batches. Given API payload size must be ≤1 MB, When preparing a sub-batch, Then the orchestrator ensures total payload size ≤1 MB by reducing the sub-batch until the limit is met.
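The sub-batching rule above (at most 100 records per sub-batch, total payload ≤1 MB, per-visit order preserved) can be sketched like this. The `size_of` callable stands in for real payload serialization, which this sketch does not model.

```python
def split_into_subbatches(records, max_records=100, max_bytes=1_000_000,
                          size_of=lambda r: len(str(r))):
    """Split one visit's records into ordered sub-batches, closing the
    current batch when either the record-count or byte limit would be
    exceeded. Order within and across sub-batches is preserved."""
    batches, current, current_bytes = [], [], 0
    for rec in records:
        rec_bytes = size_of(rec)
        if current and (len(current) >= max_records
                        or current_bytes + rec_bytes > max_bytes):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(rec)
        current_bytes += rec_bytes
    if current:
        batches.append(current)
    return batches
```

Visit-scoping itself would be handled one level up: group buffered records by visit_id first, then sub-batch each group independently so no batch mixes visits.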
Ordered Backfill Preserving Timestamps & Provenance
Given buffered records include captured_at and provenance {source, device_id}, When posted, Then each payload includes the original captured_at and provenance values exactly as captured. Given N records captured for a visit with capture order S by captured_at, When backfilled, Then the server receives them in non-decreasing captured_at order and the resulting chart order equals S. Given EVV check-in/out timestamps were captured offline, When backfill completes, Then EVV timestamps in server reports equal the original values with 0 seconds drift.
Idempotent Upserts with Deterministic Record IDs
Given each record has a deterministic record_id derived from visit_id, type, captured_at, device_id, and payload_digest, When the same record is uploaded multiple times, Then the server stores at most one record and returns 200 with the same record_id and idempotent=true. Given duplicate POSTs occur due to retries, When viewing the chart after sync, Then the number of entries equals the count of unique record_ids uploaded (no duplicates). Given an existing record_id already exists server-side, When upserting, Then captured_at and provenance remain unchanged on the server.
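A deterministic record_id over the fields named in the criterion might look like the sketch below: the same capture always hashes to the same ID, so server-side upserts stay idempotent under retries. The field order and separator are illustrative choices, not a documented CarePulse scheme.

```python
import hashlib

def deterministic_record_id(visit_id, record_type, captured_at, device_id,
                            payload_digest):
    """Derive a stable record_id from visit_id, type, captured_at,
    device_id, and payload_digest so retries reuse the same ID."""
    material = "|".join([visit_id, record_type, captured_at, device_id,
                         payload_digest])
    return hashlib.sha256(material.encode("utf-8")).hexdigest()
```

Because the ID is a pure function of the capture, a duplicate POST maps to the existing server record rather than creating a new chart entry.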
Retry Strategy with Exponential Backoff & Persistence
Given an upload attempt fails with a transient error (network error or HTTP 5xx), When retrying, Then delays follow exponential backoff with jitter: base 1s, multiplier 2, jitter ±20%, capped at 60s, up to 6 attempts per record before marking failed. Given the app is terminated during backoff, When relaunched, Then the orchestrator restores retry state from persisted storage and schedules the next attempt within 5 seconds. Given a non-retriable error (HTTP 400, 401, 403, 404, 409, 422), Then the record is marked failed immediately with error_code and error_message stored and excluded from further retries.
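The backoff parameters in the criterion (base 1 s, multiplier 2, jitter ±20%, 60 s cap, 6 attempts) translate into a delay schedule like the sketch below. The injectable `rng` is a testing convenience, not part of any real API.

```python
import random

def backoff_delays(attempts=6, base=1.0, multiplier=2.0,
                   jitter=0.2, cap=60.0, rng=random.random):
    """Exponential backoff schedule with symmetric jitter: each delay is
    base * multiplier**attempt, scaled by a factor uniform in
    [1-jitter, 1+jitter], and clamped to the cap."""
    delays = []
    for attempt in range(attempts):
        nominal = min(base * (multiplier ** attempt), cap)
        factor = 1.0 + jitter * (2.0 * rng() - 1.0)  # uniform in ±jitter
        delays.append(min(nominal * factor, cap))
    return delays
```

Persisting the attempt count alongside each record is what lets the orchestrator resume the schedule at the right point after an app relaunch.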
Partial Failures and Interrupted Upload Resume
Given a sub-batch of M records returns acknowledgements for N < M records, When processing results, Then only the N acknowledged records are purged locally and the remaining M−N are queued for retry with incremented attempt_count. Given sync is interrupted mid-request due to app closure or connectivity loss, When resumed, Then no acknowledged record is re-sent and the next request starts from the first unacknowledged record. Given different failure reasons occur per record, When inspecting the local queue, Then each failed record includes last_attempt_at, attempt_count, and last_error_code for auditability.
UI Hooks, Counters, and Reconciliation Events
Given there are buffered records, When sync state changes (started, paused, resumed, completed, failed), Then UI hooks are invoked within 500 ms with updated queued_count, posted_count, and failed_count per visit. Given charts and schedules subscribe to reconciliation, When a record is acknowledged by the server, Then a record_upserted event with visit_id, record_id, captured_at, and source is emitted and subscribed UIs render updates within 1 second. Given all queued records for a visit are acknowledged, When the final ack is processed, Then the UI shows queued_count = 0 for that visit and any offline badge is cleared.
Queue Status & Activity Indicators
"As a caregiver, I want clear status of what’s queued and what’s posted so that I know my documentation is safe and when it’s fully submitted."
Description

Provides real-time offline/online indicators and per-item states (Queued, Syncing, Posted, Error) within visits, charts, and the global activity tray. Shows counts and time since last sync, allows manual retry on failures, and links to error details. Displays a non-blocking banner during offline mode and a confirmation toast when backlog is cleared, reassuring caregivers that entries are preserved. Accessible, low-latency UI that works in constrained devices and aligns with CarePulse design system.

Acceptance Criteria
Offline Mode Banner
Given the device loses internet connectivity for more than 1 second When the app detects the offline state Then a non-blocking banner labeled "Working offline" appears within 1 second and the current screen remains fully interactive And the banner persists until connectivity is restored and remains stable for at least 3 seconds And the banner uses the CarePulse design system banner component and offline icon And a polite live-region announcement communicates the offline status to assistive technologies
Per-Item State Transitions
Given a user records an entry while offline When the entry is saved locally Then the item displays a single state chip labeled "Queued" with the capture timestamp When connectivity is restored and sync begins Then the item state transitions to "Syncing" within 500 ms When the server confirms receipt Then the item state transitions to "Posted" and the original capture timestamp is retained When the server returns a non-2xx response or validation error Then the item state transitions to "Error" with an error icon And state is persisted across app restarts And only one state chip is visible at any time
Global Activity Tray Counts & Last Sync
Given there are items across visits and charts with states Queued, Syncing, Posted, and Error When the global activity tray is opened Then it shows an aggregate count per state that equals the sum of items across all visible contexts And counts update within 1 second of any underlying state change And the tray displays "Time since last successful sync" in mm:ss and updates every 60 seconds When at least one item successfully posts Then the "last successful sync" timer resets to 00:00 When sync attempts fail without any success Then the "last successful sync" timer does not reset
Manual Retry & Error Details
Given an item is in the "Error" state When the caregiver taps Retry while online Then the item transitions to "Syncing" within 500 ms and a new attempt is made with the same payload ID When the retry succeeds Then the item transitions to "Posted" and is removed from the error list When the caregiver taps the Error details link Then a details panel opens showing error code, HTTP status (if applicable), timestamp, endpoint, and correlation ID, all selectable and copyable When the device is offline Then the Retry control is disabled and a tooltip/message indicates "Connect to retry"
Backlog Cleared Confirmation & Data Integrity
Given the queue size is greater than 0 When all queued and syncing items have successfully posted and there are no errors remaining Then a confirmation toast "All entries synced" appears within 1 second and auto-dismisses after 4 seconds And the toast includes the number of items synced in this cycle And a polite live-region announcement communicates the confirmation And in the visit/chart views, posted entries display their original capture timestamps and provenance (Voice, Sensor, Manual) and are ordered chronologically by capture time
Accessibility Compliance
Given the offline banner, state chips, tray counts, retry button, error link, and toast are present Then each has an accessible name/role/state and is reachable via keyboard and switch control And status changes (offline banner shown/hidden, item state changes, confirmation toast) are announced via ARIA live regions without stealing keyboard focus And color contrast for all text and interactive elements meets WCAG 2.1 AA (>= 4.5:1 for normal text) And focus order follows visual order and returns to the invoking control when panels close And all indicators have non-color affordances (icons/text) to convey meaning
Performance on Constrained Devices
Given a low-end device profile (Android Go-class, 2 GB RAM, 1.8 GHz quad-core) and a queue of 100 items When item states change or the activity tray opens Then tap-to-visual state update latency is <= 200 ms at p95 and tray open/refresh is <= 300 ms at p95 And UI thread frame time is <= 16 ms at p95 during indicator updates And offline/online banner appears or dismisses within 1 second of connectivity state change And the indicator subsystem peak additional memory usage is <= 25 MB during bulk sync
EVV Integrity Controls
"As a compliance officer, I want EVV data preserved and verifiable from offline sessions so that audits pass and payers trust our records."
Description

Captures EVV-required artifacts while offline, including accurate UTC timestamps, geolocation snapshots where available, caregiver identity, and device identifiers. Applies monotonic clock safeguards and corrects for drift on sync by reconciling with server time, while preserving original capture time for audit. Signs records with a local integrity hash to detect tampering and validates on server. Ensures audit-ready EVV trails and consistent compliance reporting even without connectivity.

Acceptance Criteria
Offline EVV Capture: UTC Timestamp, Identity, Device ID
Given the caregiver is authenticated and the device is offline When the caregiver records EVV events (visit start, check-in/out, task completion) Then each event is persisted locally within 100 ms of capture with fields: original_capture_time_utc (ISO 8601, ms precision), caregiver_id, device_id, event_type, tz_offset_minutes, app_version, monotonic_sequence (per-visit incrementing) And events are write-acknowledged to durable storage before the UI shows success And all persisted events survive app kill/restart and device reboot
Monotonic Ordering and Clock Drift Reconciliation on Sync
Given pending offline events with original_capture_time_utc and monotonic_sequence exist And a server with authoritative UTC time is reachable When connectivity is restored and sync starts Then the client measures device_server_offset via a time-sync handshake And sends events ordered by monotonic_sequence And the server computes canonical_time_utc using original_capture_time_utc adjusted by device_server_offset And if |canonical_time_utc − original_capture_time_utc| > 120 seconds, set clock_anomaly=true and record drift_ms And the ledger ordering on server follows monotonic_sequence regardless of timestamp drift And original_capture_time_utc is preserved unmodified for audit
Geolocation Snapshot Capture and Fallback Codes
Given location permission is granted and location services are enabled When an EVV event is captured offline Then the app attempts a one-shot location fix up to 5 seconds And if a fix is obtained, store latitude, longitude, accuracy_meters, provider, location_capture_time_utc And if no fix in 5 seconds, store reason_code="NO_FIX_TIMEOUT" and proceed without blocking And if permission denied, store reason_code="PERMISSION_DENIED"; if services off, reason_code="PROVIDER_OFF" And mark location_valid=true only when accuracy_meters ≤ 100; otherwise store location_valid=false and reason_code="LOW_ACCURACY" And on sync, server stores location fields/reason_code exactly as captured for audit
Local Integrity Hashing and Server-Side Validation
Given a device-bound signing key exists in secure storage When an EVV event is persisted offline Then compute and store integrity_hash = HMAC-SHA256 over [original_capture_time_utc, caregiver_id, device_id, event_type, monotonic_sequence, location payload if present] And prevent edits to hashed fields after creation When the event is synced Then the server verifies integrity_hash against the registered device key And if verification fails, return status="Rejected: Integrity Check Failed" and do not include the event in reports And the client marks the item as Error and blocks resubmission until resolved
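The HMAC-SHA256 integrity hash from the criterion can be sketched as below, covering the listed fields in a fixed order. Canonical JSON serialization of the field values is an assumption here; the real scheme would pin its own byte layout.

```python
import hashlib
import hmac
import json

FIELDS = ["original_capture_time_utc", "caregiver_id", "device_id",
          "event_type", "monotonic_sequence", "location"]

def compute_integrity_hash(device_key: bytes, event: dict) -> str:
    """HMAC-SHA256 over the criterion's field list in fixed order,
    keyed by the device-bound signing key."""
    material = json.dumps([event.get(f) for f in FIELDS],
                          sort_keys=True, separators=(",", ":"))
    return hmac.new(device_key, material.encode(), hashlib.sha256).hexdigest()

def verify_integrity(device_key: bytes, event: dict, claimed: str) -> bool:
    """Server-side check against the registered device key, using a
    constant-time comparison."""
    return hmac.compare_digest(compute_integrity_hash(device_key, event),
                               claimed)
```

Any post-capture edit to a hashed field (or a wrong device key) fails verification, which maps to the "Rejected: Integrity Check Failed" status above.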
EVV Audit Trail and Reporting Consistency Post-Sync
Given offline-captured events have been synced and accepted by the server When an admin generates an EVV report for a date range including those visits Then each event row contains: visit_id, caregiver_id, device_id, original_capture_time_utc, canonical_time_utc, server_received_at, drift_ms (0 if none), clock_anomaly flag, location (or reason_code), integrity_hash, verification_result And total visit duration computed from canonical_time_utc matches event intervals within ±1 second And the report validates against the EVV report schema with 0 errors And at least a 10% sample of offline events in the report shows both original_capture_time_utc and canonical_time_utc preserved
UI Queue Status: Queued, Posted, and Error States
Given the device is offline When the caregiver records EVV events Then the UI displays status="Queued" for the visit within 1 second and shows a queued count And queued items persist through app kill/restart and device reboot When connectivity is restored and server returns 200 with verification_result=Pass Then the UI updates status to "Posted" and shows posted_at (server time) And if any item fails validation (e.g., integrity check), the UI shows status="Error" with a human-readable reason and a retry option And the client retries failed syncs up to 5 times with exponential backoff without creating duplicates
Conflict Resolution & Deduplication
"As a caregiver, I want the app to handle duplicates automatically and only ask me when needed so that my chart stays clean without extra work."
Description

Prevents duplicate or conflicting entries during backfill by using idempotency keys, sequence numbers, and server-side merge strategies. Detects overlaps (e.g., duplicate vitals or check-ins) and applies configurable policies (merge, last-write-wins, or prompt caregiver). Presents lightweight review prompts only when human input is required, minimizing disruption. Guarantees that charts and visit timelines remain accurate after reconnection.

Acceptance Criteria
Duplicate Vital Reading Backfill on Reconnect
Given a caregiver records the same vital reading offline more than once with the same idempotency key or matching timestamp+value within the configured tolerance When the device reconnects and backfill runs Then the server deduplicates using idempotency and deduplication rules, creating a single canonical record And the patient chart contains exactly one vital entry for that timestamp And the client marks all submitted instances as Posted, labeling non-canonical copies as deduplicated And an audit entry is created for each discarded duplicate including reason "duplicate" and source "offline buffer"
Idempotency Key Prevents Double-Post
Given an entry with idempotency key K was previously accepted by the server And network retries cause the same entry K to be submitted again When the server receives the duplicate submission Then the server returns the existing resource without creating a new record And the client shows a single chart entry and a single Posted item for K And no additional notifications or prompts are shown for the duplicate retry
Sequence Ordering Preserved with Gaps
Given buffered events are assigned sequence numbers 101, 102, and 104 with original event timestamps When backfill occurs after reconnection Then the server applies events in ascending sequence order and preserves original event timestamps And the server flags the missing 103 as a gap in the audit trail without reordering or duplicating events And the resulting visit timeline displays events in chronological order with no artificial overlaps introduced by late arrivals
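The "apply in sequence order, flag gaps, never reorder" behavior can be illustrated with a small helper. Events are modeled here as (sequence_number, payload) pairs; the shape is illustrative only.

```python
def apply_in_sequence(events):
    """Sort buffered events by ascending sequence number and collect any
    missing sequence numbers as gaps for the audit trail, without
    reordering relative capture order or duplicating events."""
    ordered = sorted(events, key=lambda e: e[0])
    gaps = []
    for (prev_seq, _), (seq, _) in zip(ordered, ordered[1:]):
        if seq != prev_seq + 1:
            gaps.extend(range(prev_seq + 1, seq))  # e.g. 103 between 102 and 104
    return ordered, gaps
```

For the criterion's example (101, 102, 104), the server applies all three in order and records 103 as a gap rather than stalling or reordering.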
Overlapping EVV Events Resolved by Policy
Given two EVV check-in/check-out events overlap for the same visit after reconnection And a conflict resolution policy is configured When reconciliation executes Then if policy = last-write-wins, the latest by server-received timestamp becomes canonical and the earlier overlapping event is superseded And if policy = merge, overlapping intervals are merged into a single interval with combined provenance And the resulting visit window has no overlaps and remains EVV-compliant And superseded or merged events are logged with the applied policy and reason
Caregiver Merge Prompt for Conflicting Notes
Given two offline note entries conflict on the same field and policy = prompt-caregiver When the device reconnects and reconciliation runs Then the caregiver sees a single lightweight prompt showing both versions and options: Keep A, Keep B, or Merge And choosing an option resolves the conflict and posts the canonical note without reopening the visit And no prompt is shown if the conflict was auto-resolved by policy And if dismissed, the item remains in Needs Review and does not block other postings
Audit Trail and Provenance After Conflict Resolution
Given any conflict (duplicate or overlap) is auto- or manually resolved during backfill When the chart is viewed or an audit report is exported Then each affected entry displays provenance including source (offline/online), device ID, user ID, original and server-received timestamps, applied policy, and resolution action And the audit log records before/after values with links to idempotency keys and sequence numbers And each entry has a stable server ID and version for future idempotent updates
Queued/Posted Status Reflects Conflict Outcomes
Given the offline queue contains items including duplicates and conflicts When backfill and reconciliation complete Then each item transitions to Posted, Discarded-Duplicate, or Needs Review according to outcome And the UI counts and badges match the actual number of items in each state And no item remains Pending beyond one retry cycle without an error message and a retry action And tapping any outcome opens the related chart entry or review screen
Storage Quota & Retention Management
"As a caregiver, I want the app to manage offline storage safely so that my device doesn’t run out of space and my data remains intact."
Description

Enforces local storage limits for buffered PHI, with compression for large payloads (audio, images) and automatic purge of items after confirmed server receipt. Provides early warnings when nearing quota, offers guidance to free space, and preserves the most recent and compliance-critical entries first. Operates within mobile OS background and disk constraints to avoid data loss and app instability.

Acceptance Criteria
Quota Threshold Warnings and Hard Cap Enforcement
Given the app storage quota is set to Q=200 MB and current buffered usage is below 80% of Q When usage first crosses 80% (>=160 MB) Then display a non-blocking in-app warning banner within 2 seconds showing percentage used and remaining MB, with a "Free up space" CTA
Given the app storage quota is Q=200 MB When usage first crosses 90% (>=180 MB) Then display a persistent notification and an in-app sticky warning until usage drops below 85%, and log an analytics event "quota_warning_90"
Given the app storage quota is Q=200 MB When usage reaches or exceeds 100% (>=200 MB) and a non-critical payload arrives Then reject buffering of the non-critical payload within 1 second with a visible error "Storage limit reached" and do not crash or ANR
Given the app storage quota is Q=200 MB When usage reaches or exceeds 100% (>=200 MB) and a compliance-critical payload arrives Then accept the payload by evicting lowest-priority items per eviction policy to remain <= Q, and record the eviction in an audit log with itemIds and freed bytes
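The threshold bands above reduce to a small pure function. A minimal sketch (the `quota_band` name and band labels are illustrative, not part of the spec):

```python
def quota_band(used_mb: float, quota_mb: float = 200.0) -> str:
    """Map buffered-storage usage to the warning band defined in the criteria.

    Bands are checked highest-first so crossing 100% wins over 90% and 80%.
    """
    pct = used_mb / quota_mb
    if pct >= 1.0:
        return "hard_cap"   # reject non-critical payloads; evict to admit critical ones
    if pct >= 0.90:
        return "warn_90"    # persistent notification + sticky in-app warning
    if pct >= 0.80:
        return "warn_80"    # non-blocking banner with "Free up space" CTA
    return "ok"
```

Checking bands highest-first keeps the "first crosses" semantics unambiguous when usage jumps past more than one threshold between samples.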
Post-Receipt Auto-Purge with Integrity Check
Given a buffered item has been uploaded When the server responds 2xx with a receipt containing itemId and SHA-256 checksum matching the local checksum Then purge the local copy within 5 seconds, update the item status to "Purged", and decrement local storage usage accordingly
Given a buffered item has been uploaded When the server responds with non-2xx or checksum mismatch Then do not purge the local copy and schedule retry with exponential backoff starting at 30 seconds, capped at 10 minutes
Given items have been marked for purge but the app is backgrounded or restarted When the app regains execution Then complete pending purges within 10 seconds without duplicating uploads and preserve an audit trail of purged itemIds and timestamps
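The retry schedule ("exponential backoff starting at 30 seconds, capped at 10 minutes") reduces to a one-line delay function. The doubling factor is an assumption; the criteria fix only the start and the cap:

```python
def retry_delay_seconds(attempt: int, base: float = 30.0, cap: float = 600.0) -> float:
    """Delay before retry number `attempt` (0-based): 30 s, 60 s, 120 s, ... capped at 600 s."""
    return min(base * (2 ** attempt), cap)
```

Production clients usually add jitter to avoid synchronized retry bursts; that is omitted here for clarity.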
Adaptive Compression for Large Payloads
Given an audio clip larger than 200 KB is queued for buffering When compression is applied Then encode to Opus mono 16 kHz at 16 kbps, ensure duration variance <= 0.1 seconds, and ensure resulting size <= 120 KB per minute of audio; set metadata provenance compressed=true, codec=opus16
Given an image larger than 400 KB or with longest edge > 1280 px is queued for buffering When compression is applied Then resize so the longest edge is <= 1280 px and encode to WebP at quality 75, ensuring resulting size <= 400 KB; set metadata provenance compressed=true, codec=webp_q75
Given compression fails for any payload When failure occurs Then retain the original payload if total usage remains < 95% of quota; otherwise queue the item as compliance-critical without compression and raise a non-blocking alert "Compression failed—prioritizing delivery"
Priority-Based Retention Under Quota Pressure
Given buffered items are tagged with priority {critical|non-critical} and have timestamps When eviction is required to free X MB to stay within quota Then evict in this order until X MB is freed: (1) non-critical, oldest-first, server-confirmed; (2) non-critical, oldest-first, not yet uploaded; (3) critical, oldest-first, server-confirmed; and never evict unsynced critical items
Given a mix of critical and non-critical items with varying ages When usage exceeds 95% of quota and a new critical item arrives Then no unsynced critical item from the last 24 hours is evicted before all non-critical items are exhausted; if still over quota, reject new non-critical arrivals with explicit error
Given an eviction occurs When it completes Then record an immutable audit entry containing evicted itemIds, sizes freed, reason, and before/after usage, and reflect updated counts in the UI within 2 seconds
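The eviction tiers can be expressed as a sort key. This sketch assumes each queued item is a dict with `critical`, `synced`, `ts`, and `size_mb` fields (field names are illustrative):

```python
def eviction_order(items: list[dict]) -> list[dict]:
    """Eviction candidates in policy order; unsynced critical items are never candidates."""
    def tier(i: dict) -> int:
        if not i["critical"] and i["synced"]:
            return 0  # (1) non-critical, server-confirmed
        if not i["critical"]:
            return 1  # (2) non-critical, not yet uploaded
        return 2      # (3) critical, server-confirmed
    candidates = [i for i in items if not (i["critical"] and not i["synced"])]
    return sorted(candidates, key=lambda i: (tier(i), i["ts"]))  # oldest-first within a tier

def evict_to_free(items: list[dict], need_mb: float) -> tuple[list[dict], float]:
    """Pick items to evict until need_mb is freed (or candidates are exhausted)."""
    freed, picked = 0.0, []
    for item in eviction_order(items):
        if freed >= need_mb:
            break
        picked.append(item)
        freed += item["size_mb"]
    return picked, freed
```

Excluding unsynced critical items up front (rather than ranking them last) guarantees the "never evict" clause even if the free-space target cannot be met.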
Resilient Operation Under OS Background and Disk Constraints
Given the OS reports low disk (free space < 100 MB) When new payloads are buffered Then accept all compliance-critical payloads and reject non-critical ones with a clear message; the app must not crash or ANR and must record 0 unhandled I/O exceptions over a 10-minute soak test
Given the app is sent to background with >= 10 MB of purgeable (server-confirmed) items When a background execution window is granted (iOS BGTask or Android WorkManager) Then perform purge and pending uploads, and if preempted by the OS, resume on next foreground without duplicate uploads and without data loss
Given the OS issues a low-storage warning/broadcast When it is received Then surface a notification within 5 seconds linking to the in-app Free Space guidance screen
Actionable Free-Space Guidance
Given buffered usage crosses 80% of quota When the user taps "Free up space" Then show a guidance screen within 1 second listing: (a) purgeable synced items with count and total reclaimable MB, (b) pending items size with advice to connect for upload, and (c) a link to OS storage settings
Given the user selects "Purge synced items" and confirms When the purge operation runs Then delete only server-confirmed items and reclaim within ±10% of the pre-shown MB estimate, completing within 10 seconds for up to 500 items
Given there are no purgeable items When the guidance screen is opened Then disable purge actions and display the reason "Nothing to purge—pending items must upload first"
Offline Sensor & Voice Intake
"As a caregiver, I want to record vitals and voice notes even without signal so that my documentation stays complete during the visit."
Description

Accepts IoT sensor readings and short voice clips offline, tagging each with device/source metadata and exact capture time. Queues audio for server-side transcription and sensor payloads for validation pipelines, with on-device checksums to ensure integrity. Maintains lightweight previews and allows playback while offline. On sync, associates assets with the correct visit note and updates charts automatically.

Acceptance Criteria
Offline Sensor Reading Capture & Queue
Given the device is offline and paired to a sensor When a sensor reading is emitted Then the app records the reading with the sensor-provided timestamp and local device timestamp where the difference is <= 1 second And tags the reading with sensor_id, sensor_model, firmware_version, mobile_device_id, caregiver_user_id, and optional visit_id if selected And captures location method and accuracy in meters when available; if unavailable, marks location_status = unavailable And computes and stores a SHA-256 checksum of the payload And places the payload in the offline queue ordered by capture time And displays a "Queued" status within 1 second in the intake UI And deduplicates any identical reading already queued (matching sensor_id and checksum)
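The checksum-and-dedupe step above might look like this sketch. Hashing a canonical (key-sorted) JSON encoding is one reasonable choice, not mandated by the criteria, and the function names are hypothetical:

```python
import hashlib
import json

def payload_checksum(payload: dict) -> str:
    """SHA-256 over a canonical (key-sorted) JSON encoding of the reading."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def enqueue_reading(queue: list, sensor_id: str, payload: dict) -> bool:
    """Append a reading unless an identical one (same sensor_id and checksum) is queued."""
    digest = payload_checksum(payload)
    if any(q["sensor_id"] == sensor_id and q["checksum"] == digest for q in queue):
        return False  # duplicate: dropped per the dedup criterion
    queue.append({"sensor_id": sensor_id, "checksum": digest, "payload": payload})
    return True
```

Canonicalizing before hashing matters: two readings with the same values but different key order should dedupe to the same checksum.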
Offline Voice Clip Capture, Preview, and Playback
Given the device is offline When the caregiver records a voice clip up to 60 seconds Then the app stores audio with capture_timestamp, duration, mobile_device_id, caregiver_user_id, and optional visit_id And generates a low-footprint waveform preview within 3 seconds of save And allows offline playback with play/pause/seek; playback starts within 500 ms of tap And computes and stores a SHA-256 checksum for the audio file And shows a "Queued" status with duration and file size And blocks new recordings if remaining device storage < 50 MB and displays a "Storage Low" warning
Automatic Sync and Backfill on Connectivity Restore
Given the device regains connectivity When the app detects network availability Then sync begins within 5 seconds And queued sensor payloads are posted to the validation pipeline and audio files to transcription endpoints And items upload in capture-time order And the server acknowledges with created ids and checksum match before an item is marked "Posted" And the visit chart/note updates within 3 seconds of acknowledgment with exact capture timestamps and provenance And the UI updates item status to "Posted" and removes it from the queue And failures retry up to 3 times with exponential backoff; after final failure the item is marked "Failed" with an actionable error
Correct Association to Visit Note and EVV Accuracy
Given a queued item lacks a visit_id When syncing occurs Then the system auto-matches by time window and caregiver assignment; if multiple candidates exist, it defers posting and prompts for selection on next online session And no item posts without an explicit or inferred visit_id; unmatched items remain with status "Needs Visit" And when posted, the note entry shows the exact capture timestamp (UTC) and original device timezone offset And EVV records retain the capture timestamp within 1 second tolerance of device time
Data Integrity via Checksums and Upload Resume
Given an item is uploaded When the server computes and compares the checksum Then on mismatch the client retries the upload up to 3 times with resumable transfers And after persistent mismatch the item is marked "Failed Integrity" and remains queued And the UI surfaces a non-blocking alert with a retry action And successful uploads record checksum_match = true in a local integrity log
Offline Queue Persistence and Storage Management
Given the app is force-closed or the OS reboots When the app is reopened Then the offline queue is intact and ordered by capture time And uploads resume automatically when connectivity is available And total offline cache size is capped at 500 MB; at 80% capacity a warning is shown And FIFO eviction of oldest unposted audio previews only occurs after explicit user confirmation; sensor readings are never evicted before posting And all queued items are encrypted at rest using the device keychain/keystore
Status Indicators and Accessibility
Given items exist in the offline queue When the caregiver opens the intake/queue screen Then each item shows a status in {Queued, Syncing, Posted, Failed, Needs Visit} And a badge displays the total queued count And statuses update in real time without manual refresh And status text meets WCAG AA contrast and has screen-reader labels And tapping a Failed item reveals an error code and a retry action

PayerFit Engine

A real‑time rules brain that encodes each payer’s plan‑of‑care frequencies, spacing constraints (e.g., not back‑to‑back), and authorization caps by discipline. As schedulers drag or propose visits, it validates instantly and explains the “why” behind any block or warning. Route Orchestrators and RN Case Planners stop guessing and book confidently, preventing over- and under-frequency errors and costly authorization breaks.

Requirements

Payer Rules Catalog & Versioning
"As a compliance admin, I want to encode and version payer rules so that schedulers can rely on consistent, up-to-date validations."
Description

A centralized, versioned catalog to encode payer-, plan-, state-, and discipline-specific plan-of-care rules, including weekly frequency targets, minimum/maximum spacing, daily limits, visit/time-based authorization caps, and effective date ranges. Provides an admin UI for creating, testing, and approving rule sets with citations, notes, and source attachments. Supports inheritance and patient-level overrides, JSON/CSV import, and validation-ready normalized schema. Exposes a read-optimized API and change webhooks for downstream caches in the Scheduler and Route Orchestrator. Maintains full change history and rollbacks to ensure traceability and safe updates.

Acceptance Criteria
Admin Creates and Approves a Payer Rule Set with Effective Date Window
Given an admin with RuleCatalog:Write permissions and a blank rule form When they enter payer, plan, state, discipline(s), weekly frequency targets, spacing min/max, daily visit limits, visit/time-based authorization caps, effective_from and effective_to, and at least one citation Then client-side validation prevents save until all required fields are present and correctly typed per schema
Given the admin uploads up to 5 source attachments (PDF/PNG) totaling ≤25MB When saving as Draft Then files are virus-scanned, checksummed, stored, and linked; the Draft is created with status=Draft and a generated immutable rule_key
Given a Draft with no validation errors When the admin clicks Test and supplies a sample 4-week schedule Then the validator returns pass/fail with specific failed constraint IDs and human-readable reasons within 2 seconds
Given a Draft that passes server-side validation When the admin Approves it Then status changes to Approved, version=1 is created, effective dates cannot overlap any other Approved version with the same rule_key, and an approval record (approver, timestamp, rationale) is stored
Version History, Diff, and Rollback of Rule Sets
Given an existing Approved version=1 for a rule_key When a new Draft is created by cloning v1 and modifying spacing and caps Then a pending version=2 (Draft) is recorded with a complete field-level change set relative to v1
Given version=2 is Approved When retrieving history for the rule_key Then the API returns an ordered list of all versions with metadata (version, status, effective_from/to, author, approver, rationale) and a diff summary of changed fields
Given versions v1 (Superseded) and v2 (Approved) exist When an admin performs Rollback to v1 Then v2 status becomes Superseded, v1 status becomes Approved, a rollback event with rationale is recorded, and selection by as_of date returns v1
Given any attempt to delete a version When the request is made Then hard delete is blocked; only status transitions per policy are allowed to preserve auditability
Rule Inheritance and Patient-Level Override Resolution
Given a payer-level default, a plan-level override, a state-level override, and a discipline-level override exist for the same rule_key lineage When resolving rules for a patient in the specified state and discipline on a date within effective ranges Then the engine returns a resolved rule where unspecified fields inherit from the nearest ancestor and includes provenance per field (source_level and version)
Given a patient-level override with only spacing_min set and an end date earlier than the parent plan's end When resolving on a date after the patient override end Then spacing_min reverts to the parent value and provenance reflects the parent source
Given conflicting values between plan-level and state-level overrides When resolution occurs Then the precedence order Patient > Plan > State > Payer is applied consistently and deterministically
Given an override attempts to broaden effective dates beyond its parent When saving Then validation fails with an error indicating child effective range must be within parent range
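Field-level resolution with the Patient > Plan > State > Payer precedence is essentially a layered dict merge. A minimal sketch (the `resolve_rule` shape is hypothetical, and provenance here records only the source level, not the version):

```python
def resolve_rule(levels: dict) -> dict:
    """Merge partial rule dicts so the most specific level wins per field.

    `levels` maps level name -> partial rule dict; missing levels are allowed.
    Returns the resolved fields plus per-field provenance (which level supplied it).
    """
    merge_order = ["payer", "state", "plan", "patient"]  # later entries override earlier ones
    fields, provenance = {}, {}
    for level in merge_order:
        for name, value in (levels.get(level) or {}).items():
            fields[name] = value
            provenance[name] = level
    return {"fields": fields, "provenance": provenance}
```

Because the merge iterates from least to most specific, a field unset at the patient level automatically "reverts" to the nearest ancestor that defines it, which is exactly the behavior the expiring-override scenario requires.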
JSON/CSV Import with Schema Validation and Idempotency
Given an admin uploads a CSV with N rows of mixed valid/invalid rule definitions When running Dry Run validate Then the system reports zero writes, per-row errors with codes and line numbers, and a summary (rows_valid, rows_invalid) within 10 seconds for N ≤ 10,000
Given a JSON import payload where some rows match existing rules by natural key (payer, plan, state, discipline, effective_from) When committing with reject_on_error=false and idempotency_key provided Then valid new rows are inserted, matching rows are updated only if payload hash differs, duplicates without changes are skipped, and the operation is safely retryable without double writes
Given import rows reference attachment URLs (HTTPS) When committing Then the system fetches, virus-scans, stores with checksum, links to the rule, or records a row-level error if fetch/scan fails
Given a payload that violates the normalized schema (e.g., non-integer daily_limit) When validating Then the import is rejected with HTTP 400 and a machine-readable schema error list
Read-Optimized API Serves Effective Rules by Date and Discipline
Given an Approved rule exists for payer, plan, state, discipline and a requested as_of date within its effective range When a client calls GET /rules?payer=...&plan=...&state=...&discipline=...&as_of=YYYY-MM-DD Then the API returns 200 with the normalized rule document, including version, effective_from/to, and field provenance; P95 latency ≤ 150 ms under 50 RPS
Given a client includes If-None-Match with a stored ETag for the same query When the underlying rule has not changed Then the API returns 304 Not Modified with no body
Given no effective rule exists for the query When the API is called Then the API returns 404 with error_code=RULE_NOT_FOUND
Given a query that would match multiple versions due to overlapping dates When the API is called Then the API returns 409 with error_code=DATE_OVERLAP and the conflicting version IDs
Change Webhooks Notify Scheduler and Route Orchestrator
Given one or more webhook endpoints are registered and verified for the Scheduler and Route Orchestrator When a rule is Approved, Superseded, Rolled Back, or its effective dates change Then a signed HMAC-SHA256 webhook is POSTed within 2 seconds containing event_type, rule_key, affected_versions, diff_summary, and cache_invalidation_keys
Given a webhook delivery returns a non-2xx response When retrying Then exponential backoff retries occur for up to 24 hours with jitter, idempotency via event_id, and eventual DLQ logging after max attempts
Given a recipient wants to validate the event When verifying Then the signature matches the shared secret, the timestamp is within a 5-minute skew, and the payload hash matches the signature
Given a webhook is successfully delivered When the recipient calls back the Read API with the provided cache_invalidation_keys Then the recipient can refresh only the affected rule entries
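On the recipient side, verification might look like this sketch. It assumes the signature is an HMAC-SHA256 over "{timestamp}.{body}" (a common pattern, not specified by the criteria) and enforces the 5-minute skew:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, sent_at: int, signature: str,
                   now: int, max_skew_s: int = 300) -> bool:
    """Reject stale deliveries, then compare HMAC-SHA256 signatures in constant time."""
    if abs(now - sent_at) > max_skew_s:
        return False  # outside the 5-minute skew window: replay protection
    expected = hmac.new(secret, f"{sent_at}.".encode() + body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

`hmac.compare_digest` avoids timing side channels; signing the timestamp together with the body prevents an attacker from replaying an old signed payload with a fresh timestamp.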
Real-time Validation with Explanations
"As a scheduler, I want instant validation with clear explanations so that I can place compliant visits without guesswork."
Description

Low-latency validation invoked on drag, drop, or propose actions in the Scheduler, returning allow/warn/block decisions within a 100 ms p95 budget. Responses include human-readable explanations, rule references, and reason codes suitable for UI tooltips and inline badges. Supports batch validation for multi-visit changes, partial-day constraints, and cross-discipline checks. Degrades gracefully with local rule caching and eventual sync when the network is offline or degraded. Internationalization-ready message templates and accessibility-compliant announcement of warnings and blocks.

Acceptance Criteria
Instant decision on drag/drop/propose (p95 ≤ 100 ms)
Given a connected Scheduler and a single visit is dragged, dropped, or proposed When validation is invoked client-side Then an allow, warn, or block decision is received by the client in ≤ 100 ms at p95 and ≤ 200 ms at p99 measured over 1,000 consecutive operations And the client-side timeout is 500 ms and ≤ 0.5% of operations hit the timeout And the validation error rate (HTTP 5xx or malformed payload) is ≤ 0.1%
Decision payload includes explanation, rule reference, and reason code
Given any validation decision response Then the payload includes: decision ∈ {allow,warn,block}, reasonCode (stable, documented), ruleRef {payerId, planId, ruleId, version}, messageKey, localizedMessage, variables (object) And for warn or block decisions, localizedMessage length is 1–240 characters and contains no PHI And UI renders an inline badge and a tooltip using localizedMessage and reasonCode for warn/block within 50 ms of receipt And when multiple rules trigger, details[] includes up to 3 reasons sorted by severity then recency, each with unique reasonCode
Batch validation for multi-visit changes
Given a batch proposal of up to 20 visits across a single patient and week and a valid session When batch validation is invoked Then the response includes correlationId, overallDecision, and items[] with one entry per proposed visit containing decision, reasonCode(s), ruleRef(s), and localizedMessage(s) And cross-visit constraints (spacing, weekly caps) are applied across the entire batch deterministically And p95 end-to-end client-observed latency is ≤ 200 ms for batch size ≤ 20; p99 ≤ 400 ms And the results are stable (idempotent) when the same batch payload is submitted twice within 60 seconds (identical outputs)
Partial-day spacing and cross-discipline enforcement
Given payer plan rules that include a minimum same-day spacing of 2 hours and a “not back-to-back” constraint, plus weekly caps of PT≤2 and OT≤1 When a scheduler proposes two same-day PT visits 1 hour apart Then the response for the second visit is block with reasonCode=min_spacing_violation and a ruleRef pointing to the spacing rule And the localizedMessage identifies the conflicting visit by date/time and required interval
When the scheduler proposes PT and OT visits that would exceed the weekly cap (PT 3rd visit or OT 2nd visit) Then the response for the overage visit is warn or block per encoded rule severity with reasonCode=cap_exceeded and variables including currentCount and cap
Graceful degradation offline with local cache and eventual sync
Given the device is offline or network RTT > 800 ms sustained for ≥ 5 seconds When validation is invoked Then the system uses a signed local rules cache if its age ≤ 24 hours and returns a provisional decision (allow/warn/block) within 100 ms p95 And the UI displays a “Provisional” badge and a persistent banner indicating offline validation And upon reconnection, all provisional decisions from the last 24 hours are revalidated within 60 seconds And if any decision changes (e.g., allow→block), the affected visits are highlighted and the user is notified within 5 seconds with a required acknowledgment before publish And if the local cache age > 24 hours, validation is marked unavailable; user may save as draft but cannot publish until successful sync
Internationalization-ready decision messages
Given the user locale is set to es-US When a warn or block decision is returned Then localizedMessage and tooltip/badge text render in Spanish using ICU message templates with correct pluralization and inserted variables And if a translation is missing for messageKey, the system falls back to en-US while keeping the same reasonCode and ruleRef And 100% of decision messages have messageKey coverage for en-US and es-US in the i18n catalog, validated at build time And pseudo-localization (string length +30%) yields no clipping or overflow in tooltips or badges
Accessibility-compliant announcement of warnings and blocks
Given a screen-reader user navigating via keyboard only When a warn decision is received Then the tooltip/badge is reachable via keyboard and announced via an aria-live="polite" region within 1 second
When a block decision is received Then the announcement occurs via aria-live="assertive" within 1 second and focus moves to the first offending visit element without loss of context
And warning and block badges meet WCAG 2.1 AA contrast (≥ 4.5:1), and all validation UI passes automated axe-core checks with 0 critical violations
Authorization Cap Tracking & Decrementing
"As an operations manager, I want real-time authorization balances so that we avoid caps and prevent denials."
Description

Real-time tracking of authorization balances per patient, payer, episode, and discipline, supporting both per-visit and time-based units (e.g., 45 minutes, 1 hour). Automatically decrements on tentative hold and final booking, restores on cancellation/no-show resolutions based on agency policy, and accounts for documented split visits. Configurable soft thresholds (warnings) and hard stops (blocks) with override workflows and audit notes. Displays remaining units in the Scheduler and exposes balances via API for Route Orchestrator optimization.

Acceptance Criteria
Per-Visit Units: Tentative Hold and Final Booking Decrement with Restoration on Cancellation
Given a patient with an authorization of 12 visits for discipline PT in episode E1 and a remaining balance of 5 visits When a scheduler places a tentative hold for a PT visit in episode E1 Then the remaining balance decreases to 4 visits immediately and is visible in the Scheduler and via API
Given the held visit is finalized (booked) Then no additional decrement occurs (balance remains 4)
Given the held visit is canceled before service and policy restore_on_cancel = 100% Then the remaining balance increases back to 5 and an audit record captures user, timestamp, policy, and reason
Given policy restore_on_cancel = 0% Then the remaining balance does not increase and an audit record captures policy applied
Time-Based Units: Decrement by Duration and Split Visit Accounting
Given authorization units are time-based with unit_length = 60 minutes, rounding_rule = 15 minutes, authorized_units = 20 hours for OT in episode E1, remaining_units = 6.0 hours When a 45-minute OT visit is placed on tentative hold Then remaining_units decrease by 0.75 hours (to 5.25) using the rounding_rule and the change is reflected in UI and API
Given the visit is split into two segments of 20 and 25 minutes under the same visit Then the total decrement equals 45 minutes (0.75 hours) with no double-counting
Given one split segment (20 minutes) is canceled and policy restore_on_partial_cancel = 100% Then remaining_units increase by 0.333... hours (20/60 rounded per rule) and an audit record details the segment restored
Given the tentative hold is converted to final booking Then no additional decrement occurs
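The time-based decrement reduces to a rounding step plus a division. A sketch, assuming round-to-nearest on the 15-minute increment (the criteria fix the unit length and increment but not the rounding direction, which agencies may configure):

```python
def units_consumed(minutes: int, unit_length_min: int = 60, rounding_min: int = 15) -> float:
    """Hours decremented for a visit: round duration to the increment, divide by unit length."""
    rounded = round(minutes / rounding_min) * rounding_min
    return rounded / unit_length_min
```

Note the split-visit rule: segments of the same visit are summed before rounding, so a 20 + 25 minute split decrements exactly one 45-minute amount rather than two independently rounded amounts.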
Soft Threshold Warnings and Hard Stop Blocks with Explain-Why and Override
Given payer X has soft_threshold = 2 remaining units and hard_stop = 0 for discipline SN When proposing a visit that would reduce remaining units from 3 to 2 Then a non-blocking warning appears in real time explaining that the soft threshold will be reached; saving is allowed
When proposing a visit that would reduce remaining units from 1 to 0 Then a blocking error prevents saving, an explain-why message cites the payer rule and current balance, and the API returns HTTP 409 with error_code = AUTH_HARD_STOP
Given a user with override_permission = true and allow_negative_balances = true When the user selects Override on the hard stop and enters an audit note (min 10 characters) Then the booking saves, remaining units may go negative, and an audit record stores user, timestamp, previous balance, new balance, policy, and note
Given allow_negative_balances = false When attempting the same override Then the override option is disabled or fails with HTTP 403 and no booking is created
Scheduler Shows Real-Time Remaining Units per Patient/Payer/Episode/Discipline
Given a scheduler opens the patient's board for episode E1 Then a balance badge per discipline displays unit_type (visits or hours), authorized, consumed, and remaining
When the scheduler drags a tentative visit onto the calendar or adjusts its duration Then the displayed remaining updates within 500 ms to reflect the tentative decrement
When the tentative visit is moved to a different discipline or episode Then the original balance is restored and the target combination's balance is decremented accordingly
When the tentative visit is removed Then the balance reverts per policy and never shows negative unless allow_negative_balances = true
Balances API Provides Up-to-Date Balances for Route Orchestrator
Given a client calls GET /v1/authorization-balances?patient_id=P&episode_id=E Then the response includes entries with fields: patient_id, episode_id, payer_id, discipline, unit_type, authorized_units, consumed_units, remaining_units, last_updated_at (UTC) And the data reflects any tentative holds or bookings made within the last second (SLA <= 1s)
When a hold is created, updated, or canceled Then a webhook auth.balance.changed is emitted within 1s containing the balance diff and identifiers And access is secured via OAuth scope auth.balances:read and webhook signatures are verified
Balances Tracked Separately by Patient, Payer, Episode, and Discipline
Given patient P has episodes E1 and E2 and payers A and B with PT and OT authorizations When a PT visit is held under (P, A, E1) Then only the (P, A, E1, PT) balance decrements And balances for (P, A, E2, PT), (P, B, E1, PT), and (P, A, E1, OT) remain unchanged
When the same visit is reassigned to episode E2 or payer B Then the original combination's balance is restored and the new combination's balance is decremented
No-Show Resolution Applies Configured Restoration Percentages
Given policy restore_on_no_show_time_based = 50% and restore_on_no_show_per_visit = 0% When a booked 60-minute visit is marked No-Show with resolution "Patient Not Home" Then remaining time-based units increase by 30 minutes and an audit entry records resolution, policy applied, and acting user
When a per-visit authorization no-show is recorded under the same policy Then no restoration occurs and an audit entry records policy applied
Given a historical no-show is edited Then the restoration recalculates using the policy version effective on the service date and updates the audit trail
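The restoration math is a straight percentage of consumed units, keyed by unit type. A sketch using the percentages from the example policy (function and parameter names are illustrative):

```python
def no_show_restoration(unit_type: str, consumed: float,
                        pct_time_based: float = 50.0, pct_per_visit: float = 0.0) -> float:
    """Units restored on a no-show: the configured percentage of what was consumed.

    `unit_type` is "time" (consumed in minutes) or "visit" (consumed in visit counts).
    """
    pct = pct_time_based if unit_type == "time" else pct_per_visit
    return consumed * pct / 100.0
```

Because the criteria require recalculating edited no-shows against the policy version effective on the service date, a real implementation would look the percentages up from a versioned policy store rather than take them as defaults.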
Episode & Frequency Compliance Engine
"As an RN case planner, I want weekly frequency compliance feedback so that the plan of care is met across the episode."
Description

An episode-aware engine that models 30/60-day (or custom) plan-of-care periods with weekly frequency targets and permissible carryover logic. Computes compliance status for each week and episode in real time as visits are placed, highlighting under- or over-servicing against the plan. Supports recertification rollovers, mid-episode plan changes with effective dates, and payer-specific week boundary definitions (calendar week vs rolling). Provides summaries to RN Case Planners and feeds compliance KPIs to reports.

Acceptance Criteria
Real-time Weekly Compliance Calculation During Scheduling
Given an active episode (start=2025-09-01, end=2025-10-30) with plan targets: SN=2/week, PT=1/week, and current week 2025-09-01–2025-09-07 When a scheduler places an SN visit on 2025-09-02 09:00 Then the SN weekly compliance indicator updates within 300 ms to Delivered 1/2 with status Under, and the episode SN tally reflects 1 delivered in the current week
When a second SN visit is placed in the same week Then the SN weekly compliance updates within 300 ms to Delivered 2/2 with status On Plan
When a third SN visit is placed in the same week Then the SN weekly compliance updates within 300 ms to Delivered 3/2 with status Over
Configurable Carryover Application Within Episode
Given carryover policy configured: max_carryover_per_week=1, window=next_week_only, applies_to=SN; week1 (2025-09-01–2025-09-07) target SN=2 and delivered SN=1 When week2 (2025-09-08–2025-09-14) begins Then week2 SN target auto-adjusts to 3 and week1 status remains Under
When 3 SN visits are delivered in week2 Then week2 status is On Plan and remaining carryover capacity=0, and no carryover is applied to week3
When week1 delivered=0 (deficit 2) Then only 1 is carried into week2 per policy and 1 remains Under for week1 and does not roll to week3
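The next_week_only carryover rule caps how much of a deficit rolls forward. A sketch (function name hypothetical):

```python
def next_week_target(base_target: int, prior_deficit: int,
                     max_carryover_per_week: int = 1) -> tuple[int, int]:
    """Return (adjusted target for the next week, leftover deficit that stays Under).

    With window=next_week_only, the leftover never rolls to a later week; it is
    simply reported as an unrecovered Under for the prior week.
    """
    carried = min(prior_deficit, max_carryover_per_week)
    return base_target + carried, prior_deficit - carried
```

This reproduces both scenarios above: a deficit of 1 raises the next week's target from 2 to 3 with nothing left over, while a deficit of 2 still raises it only to 3 and leaves 1 permanently Under.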
Payer Week Boundary Definitions: Calendar vs Rolling
Given payer=Aetna with boundary_rule=calendar_week (Mon–Sun) and episode start=Wed 2025-09-03 When visits occur Fri 2025-09-05 and Mon 2025-09-08 Then the first is counted in week starting Mon 2025-09-01 and the second in week starting Mon 2025-09-08, and weekly compliance is calculated accordingly Given payer=BlueCross with boundary_rule=rolling_7_days anchored to episode_start When visits occur on day 1 (2025-09-03) and day 8 (2025-09-10) Then they are counted in separate rolling weeks, and the UI displays the boundary label "Rolling 7-day" and the grouping updates within 500 ms after payer selection
Mid-Episode Plan Change With Effective Date
Given episode 2025-09-01–2025-10-30 with SN target=2/week through 2025-09-19 and an approved plan change effective 2025-09-20 setting SN target=3/week When computing compliance for week 2025-09-15–2025-09-21 Then days 09/15–09/19 use target 2/week and days 09/20–09/21 use target 3/week with a clearly attributed split When computing week 2025-09-22–2025-09-28 Then target=3/week, episode-to-date compliance snapshots before 2025-09-20 remain unchanged, and a visible "Plan changed 2025-09-20" marker appears on affected weeks
Recertification Rollover at Episode Boundary
Given episode1 (2025-09-01–2025-10-30) completes and episode2 (2025-10-31–2025-12-29) is recertified with identical targets When a visit is scheduled on 2025-10-31 Then weekly and episode counters reset to zero, no deficits or surpluses from episode1 are carried into episode2, episode1 compliance summary is locked read-only, and a "Previous Episode" link exposes the prior summary
Compliance Summary and KPI Feed to Reports
Given a user with role RN Case Planner opens a patient’s episode dashboard When compliance is computed Then the summary displays per discipline: target/week, delivered/week, variance (+/-), carryover_applied, weekly status (On Plan/Under/Over), and episode status; and under/over-servicing weeks are highlighted with color codes (Under=amber, Over=red, On Plan=green) Given the reporting pipeline subscribes to compliance events When a scheduling edit changes weekly compliance Then an event is published within 60 seconds containing org_id, patient_id, episode_id, week_id, payer_id, boundary_rule, discipline, target, delivered, variance, carryover_applied, status, event_timestamp, and the nightly aggregate report includes the updated metrics for that day
Spacing Constraints Enforcement
"As a scheduler, I want spacing rules enforced automatically so that I don’t accidentally book noncompliant patterns."
Description

Enforcement of spacing constraints, including minimum and maximum days between visits, prohibition of back-to-back days, daily visit caps, and rest-day rules, scoped per discipline and payer. Considers weekends, holidays, and time-zone/daylight-saving transitions to avoid accidental violations. Handles companion rules such as required separation between disciplines or required sequencing (e.g., PT before OT start). Allows documented clinical overrides with reason capture and visibility to downstream reports.

Acceptance Criteria
Minimum/Maximum Spacing Enforcement (Local Time, DST-Aware)
Given member M in timezone America/New_York with payer P rule for PT: minDaysBetween=2 and maxDaysBetween=10, and a completed PT visit V1 on 2025-03-08 10:00 local When a scheduler proposes PT visit V2 on 2025-03-09 at any local time Then the system blocks V2 with code SPACING_MIN_DAYS and message "Requires at least 2 calendar days between PT visits (last: 2025-03-08)" And when a scheduler proposes PT visit V3 on 2025-03-10 09:00 local Then the system allows V3 without warning And when a scheduler proposes PT visit V4 on 2025-03-19 09:00 local Then the system blocks V4 with code SPACING_MAX_DAYS and message "Exceeds maximum of 10 days between PT visits (last: 2025-03-08)" And spacing is computed using the member's local calendar days with DST start on 2025-03-09 correctly handled (no off-by-one due to 23-hour day) And the decision payload includes: ruleId, payerId, discipline=PT, lastVisitId=V1, proposedVisitId, daysBetween, decision=BLOCK/ALLOW, timestamp
No Back-to-Back Days Enforcement (Same Discipline)
Given payer P forbids back-to-back days for OT and member M has an OT visit on 2025-07-01 local When a scheduler attempts to place another OT visit on 2025-07-02 local Then the system blocks with code NO_BACK_TO_BACK and an explanation referencing the conflicting 2025-07-01 visit And when the scheduler proposes 2025-07-03 local Then the system allows without warning And the UI returns the validation result within 150 ms and annotates the blocked tile with the same code and message And batch auto-scheduling returns a structured error for the 2025-07-02 attempt while continuing with allowed placements
Daily Visit Cap per Discipline and Time Zone Boundaries
Given payer P caps PT at 2 visits per calendar day in the member's local time zone America/Chicago And member M already has 2 PT visits scheduled on 2025-10-15 local When a scheduler proposes a 3rd PT visit on 2025-10-15 at any time local Then the system blocks with code DAILY_CAP and message "Daily cap of 2 PT visits reached for 2025-10-15" And when a scheduler proposes another PT visit on 2025-10-16 00:05 local Then the system allows it And proposals from caregivers in other time zones are evaluated against the member's local calendar day And the decision payload includes countForDay=2 and cap=2
Rest-Day Rule After Consecutive Visits
Given payer P imposes: after 3 consecutive SN visits, require 2 rest days (no SN) before the next SN And member M has SN visits scheduled on 2025-04-01, 2025-04-02, and 2025-04-03 local When a scheduler proposes an SN visit on 2025-04-04 local Then the system blocks with code REST_DAY and message "Requires 2 rest days after 3 consecutive SN visits" And when a scheduler proposes an SN visit on 2025-04-05 local Then the system blocks with code REST_DAY And when a scheduler proposes an SN visit on 2025-04-06 local Then the system allows it And consecutive-count includes scheduled and completed visits, excludes canceled and no-shows, and is caregiver-agnostic
Cross-Discipline Sequencing and Separation (PT Before OT)
Given payer P requires PT initial evaluation before any OT visit and a minimum 1 full day separation after the PT evaluation And PT evaluation Vpt is marked completed on 2025-08-05 14:00 local When a scheduler attempts to schedule an OT visit on 2025-08-05 any time local Then the system blocks with code SEQUENCE_SEPARATION and message "OT cannot occur on the same day as PT evaluation; minimum separation: 1 day" And when a scheduler attempts to schedule OT on 2025-08-04 local (before PT eval completed) Then the system blocks with code SEQUENCE_PREREQ and a message referencing the missing prerequisite And when a scheduler schedules OT on 2025-08-06 local Then the system allows it and records the dependency to Vpt
Clinical Override with Reason Capture and Audit/Reporting
Given user U with role RN Case Planner has override permission for spacing violations When U overrides a blocked NO_BACK_TO_BACK rule for OT on 2025-12-25 by entering a reason of at least 10 characters, selecting a reason category, and optionally attaching documentation Then the system saves the visit with overrideFlag=true and persists override details: reasonText, categoryId, attachmentIds, userId, timestamp, ruleCode, priorDecision=BLOCK And the UI displays a persistent warning badge on the visit indicating an override was applied And the override appears in audit logs and downstream compliance reports with fields: memberId, payerId, discipline, ruleCode, overrideReason, overriddenBy, overriddenAt And if reasonText is missing or under 10 characters, the override cannot be saved and the visit remains blocked And override events are immutable (new edits create new events; originals remain intact)
Simulation & What-If Scheduling
"As a route orchestrator, I want to simulate schedules and receive suggestions so that I can finalize compliant routes efficiently."
Description

A simulation service that validates proposed multi-week schedules and route drafts before committing, evaluating frequency attainment, spacing rules, and authorization consumption in aggregate. Supports what-if scenarios, comparing alternatives with conflict counts and projected authorization exhaust dates. Offers APIs for Route Orchestrator to request auto-adjust suggestions (e.g., shift by ±1 day) to achieve compliance with minimal disruption. Presents scenario diffs and guidance in the Planner UI.
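The status rollup and scenario ranking described above reduce to two small pure functions (illustrative Python; field names mirror the acceptance criteria below but are not a committed API):

```python
from datetime import date

def overall_status(blocking_count: int, warning_count: int) -> str:
    """Collapse aggregate conflict counts into the simulation's overallStatus."""
    if blocking_count > 0:
        return "Blocked"
    if warning_count > 0:
        return "Warnings"
    return "OK"

def scenario_rank_key(s: dict):
    """Sort key for the Compare view: fewer blocks, then fewer warnings,
    then fewer moved visits, then smaller total shift, then the furthest-out
    projected authorization exhaust date (later is better, hence negated)."""
    return (s["blockingCount"], s["warningCount"], s["visitsMoved"],
            s["totalShiftMagnitudeDays"],
            -s["projectedExhaustDate"].toordinal())
```

Because both functions are deterministic over their inputs, rerunning a simulation with identical inputs necessarily yields the same status and ranking, which is what the reproducibility criteria below demand.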

Acceptance Criteria
Multi‑Week Schedule Simulation Summary
Given a draft schedule spanning 4 weeks with up to 200 visits across ≥2 disciplines and ≥1 payer When the user triggers a simulation from the Planner UI or API Then a results payload is returned within 3 seconds at p95 And the payload includes totals: totalVisits, compliantVisits, warningCount, blockingCount And violationCounts are broken down by type: frequency_shortfall, frequency_excess, spacing_breach, authorization_overage And projectedAuthorization includes per payer+discipline remainingUnits and projectedExhaustDate And overallStatus is one of {OK, Warnings, Blocked} where Blocked iff blockingCount > 0; Warnings iff blockingCount = 0 and warningCount > 0; OK iff both are 0 And rerunning the simulation with identical inputs yields identical aggregated metrics and conflict set
What‑If Scenario Comparison and Ranking
Given ≥2 named scenarios derived from the same baseline version When the user opens the Compare view Then each scenario displays metrics: blockingCount, warningCount, visitsMoved, visitsAdded, visitsCanceled, totalShiftMagnitudeDays, projectedAuthorizationExhaustDate per payer/discipline And scenarios are ranked by ascending blockingCount, then ascending warningCount, then ascending visitsMoved, then ascending totalShiftMagnitudeDays, then by latest (furthest) projectedAuthorizationExhaustDate And selecting a scenario marks it Active and loads it into the Planner without committing to the live schedule And exporting comparison results produces a CSV containing the displayed metrics with matching counts
Auto‑Adjust Suggestions API (±1 Day, Minimal Disruption)
Given a scenario with ≥1 blocking conflict When the client calls POST /simulation/{scenarioId}/suggestions with body {maxShiftDays: 1, maxMoves: 10, lockDates: [], lockVisits: []} Then the API responds within 2 seconds at p95 with between 1 and 5 ordered suggestions And each suggestion resolves ≥90% of blocking conflicts and introduces 0 new blocking conflicts And no visit is moved by more than ±1 day and locked dates/visits remain unchanged And each suggestion includes changes[], summary {blockingCount, warningCount, visitsMoved, totalShiftMagnitudeDays}, score (lower is better), and rationale And suggestions are ordered by ascending visitsMoved, then ascending totalShiftMagnitudeDays, then ascending warningCount
Conflict Explanations Include Rule Reference and Guidance
Given a simulation that returns conflicts When the user inspects any conflict item Then the conflict includes fields: ruleId, ruleType ∈ {frequency, spacing, authorization}, severity ∈ {Warning, Block}, visitIds[], dates[], discipline, payerId, explanationText (≤240 chars) And a Why link opens the rule detail showing rule name and text snippet And guidanceText proposes at least one actionable adjustment (e.g., move date within window or reduce frequency) And rerunning the same simulation with unchanged inputs yields the same ruleId and explanationText for the conflict
Planner UI Scenario Diff Visualization
Given baseline scenario A and alternative scenario B When the user opens the Diff view Then moved visits display a delta badge showing ±days and original→new dates And added visits are labeled Added and canceled visits labeled Removed And diff totals (moved/added/removed) exactly match the scenario metrics And the user can filter the diff by discipline, payer, severity, and date range And hovering or focusing a diff item reveals associated conflict(s) and ruleId(s) And the diff view supports keyboard navigation and ARIA labels for screen readers
Commit or Discard Scenario with Guardrails and Audit
Given an active scenario When the user attempts to Commit Then commit is allowed only if blockingCount = 0; if warningCount > 0 a confirmation is required And committing writes all changes to the live schedule atomically, returns 200 with commitId, and records an audit log with userId, scenarioId, counts, timestamp And if the baseline version is stale the commit fails with 409 and includes current baselineVersion And Discard removes the scenario without changes to the live schedule And commit/discard operations are idempotent via version or ETag checks
Audit Trail & Exportable Compliance Reports
"As a compliance auditor, I want exportable decision logs so that I can prove compliance during payer audits."
Description

Immutable audit logging of all validation decisions and overrides, including inputs (patient, discipline, proposed slot), rule set version, decision outcome, rationale, user, timestamp, and before/after authorization balances. Provides one-click, audit-ready exports per patient, episode, and date range in CSV and JSON, with embedded rule citations and timestamps. Integrates with CarePulse reporting to surface payer-specific compliance metrics and retains records per configurable retention policies.
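The append-only, tamper-evident storage implied by the prev_hash/record_hash criteria below is a hash chain: each record's hash covers both its payload and the previous record's hash, so editing any historical record invalidates every subsequent link. A minimal sketch (illustrative Python; class and field names are assumptions):

```python
import hashlib
import json

def record_hash(prev_hash: str, payload: dict) -> str:
    """Hash a record together with the previous record's hash."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + body).hexdigest()

class AuditChain:
    """Append-only log; mutating any earlier record breaks verification."""
    def __init__(self):
        self.records = []  # list of (prev_hash, payload, record_hash)

    def append(self, payload: dict) -> str:
        prev = self.records[-1][2] if self.records else "GENESIS"
        h = record_hash(prev, payload)
        self.records.append((prev, payload, h))
        return h

    def verify(self) -> bool:
        prev = "GENESIS"
        for stored_prev, payload, h in self.records:
            if stored_prev != prev or record_hash(prev, payload) != h:
                return False
            prev = h
        return True
```

An exported record can be re-hashed client-side and compared against the stored record_hash, which is the check the integrity endpoint below performs over the whole chain.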

Acceptance Criteria
Real-time validation audit log entry
Given a scheduler proposes or adjusts a visit slot that triggers PayerFit validation When the engine returns a decision (allow | warn | block) Then an audit record is appended within 200 ms capturing: patient_id, episode_id, payer_id, discipline, visit_start_datetime (with timezone), visit_end_datetime (with timezone), user_id, user_role, rule_set_version, decision_outcome, rationale_code, rationale_message, auth_balance_before (units and visits), auth_balance_after (units and visits), and server_timestamp (ISO 8601 UTC) And the record contains a unique immutable audit_id and a correlation_id linking records from the same scheduling session
Override decision audit entry
Given a user with override permission attempts to book a visit that has a warn or block decision When the user submits an override Then an override audit record is appended linking to the original decision via parent_audit_id and correlation_id And the record includes: override_reason_code or free_text_reason (>=10 characters), user_id, user_role, timestamp (UTC), original_decision, overridden_decision, auth_balance_delta, and any system tasks created And if the user lacks override permission, the override is rejected with HTTP 403 and a denied_attempt audit record is appended without changing any balances
Immutability and tamper detection
Given an existing audit record When any client attempts to update or delete the record via API or UI Then the operation is denied with HTTP 403 and no data mutation occurs And audit storage is append-only with prev_hash and record_hash for each record; recomputed record_hash of an exported record matches the stored value And an integrity check endpoint returns status: "Pass" when the hash chain is intact and emits an alert event within 60 seconds if inconsistency is detected
One-click CSV/JSON export by patient/episode/date range
Given a user initiates a one-click export scoped by patient, episode, or date range When the export job runs Then the system generates both CSV and JSON artifacts with identical schema/field order and equal record counts And only records within the selected scope are included; date range bounds are inclusive [start, end] in UTC And for up to 50,000 records, the export completes within 30 seconds; larger exports stream in chunks with progress and complete without timeouts And the user receives a secure, time-bound download URL that expires in 24 hours and filenames include entity, range, and generation timestamp
Embedded rule citations and traceability
Given a decision is evaluated against payer rules When the audit record is created or exported Then the record includes rule_set_version, matched_rule_ids, and payer_citations (payer, plan, section, clause) And querying matched_rule_ids returns the rule text snapshot as of rule_set_version
CarePulse reporting integration: payer-specific compliance metrics
Given an authorized reporting user opens the Compliance dashboard When selecting a date range and optional filters (payer, discipline, branch, case planner) Then widgets display metrics per payer and discipline: blocks_prevented, warnings_issued, overrides_applied, on_time_visit_rate, auth_cap_breaches_avoided, avg_auth_balance_at_schedule And applying or clearing filters updates metrics within 2 seconds P95 And clicking any metric drills down to the underlying audit records with matching filters
Retention policy enforcement and legal holds
Given a tenant retention policy is configured (e.g., 7 years) and optional legal holds exist When the nightly retention job runs Then records older than the retention period and not under legal hold are purged or archived per policy, and a purge summary audit entry is written with counts, date range, actor, and timestamp And export requests for purged periods return zero records with a clear message "No data within retention window" And records under legal hold are retained and exportable until the hold is released

Quota Meter

Always‑on counters that show used vs. remaining visits per authorization window (week/month/episode), with color cues and a “days left to stay compliant” ticker. Updates live as you reschedule and supports multi‑discipline views. Users see risk before it becomes a denial, cutting rework and last‑minute scrambles.

Requirements

Real-time Quota Computation Engine
"As an operations manager, I want accurate used vs. remaining visit counts that update instantly with scheduling changes so that I can prevent over- or under-utilization and avoid claim denials."
Description

Implements a performant service that calculates used vs. remaining visits per authorization window (week/month/episode) and per payer rules, updating instantly as visits are scheduled, rescheduled, canceled, or documented. Supports configurable window boundaries, time zones, overlapping authorizations, and episode-of-care logic. Applies inclusion/exclusion rules by visit status and type, partial-unit rounding (e.g., 15-minute units), and backdated documentation. Provides incremental recalculation and caching to keep UI counters always-on and responsive, emits events for UI components, and reconciles retroactive payer or authorization changes. Integrates with CarePulse scheduling, visit notes, and device/voice-driven completion signals to determine countable utilization.
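The partial-unit rounding rule can be sketched as a single conversion function (illustrative Python; the parameter names are assumptions, and real payer configurations would drive unit length and rounding mode):

```python
import math

def visit_units(minutes: int, unit_length: int = 15,
                rounding: str = "ceil") -> int:
    """Convert documented minutes to countable units per the rounding rule.

    unit_length and rounding come from payer configuration; e.g. 22 minutes
    at 15-minute units with ceil rounding counts as 2 units.
    """
    ratio = minutes / unit_length
    if rounding == "ceil":
        return math.ceil(ratio)
    if rounding == "floor":
        return math.floor(ratio)
    return round(ratio)  # nearest-unit rounding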

Acceptance Criteria
Instant Counter Update on Schedule Changes
Given a visit is created, rescheduled (including across windows), canceled, or its status changes between countable and non-countable states When the change is persisted Then the engine recalculates only impacted authorization-window counters and updates the cache within 500 ms (p95 750 ms) And emits exactly one quota.counter.updated event per affected counter with idempotencyKey, version, authorizationId, payerId, patientId, windowStart, windowEnd, usedUnits, remainingUnits, discipline (or all), riskColor, daysLeft, eventTimestamp And adjustments for a reschedule across windows decrement the old window and increment the new window atomically And cache and read-model reflect the new values immediately after event emission
Configurable Window Boundaries and Time Zones
Given an authorization configured with windowType (week, month, episode), boundary rules (e.g., week starts Monday 00:00), and a timeZone When visits occur at boundary edges, including during DST transitions Then visit units are assigned to windows using the authorization timeZone And visits starting exactly at the boundary are counted in the new window And episode-of-care windows open and close according to the configured admission/discharge rules And monthly windows respect varying month lengths and configured end-of-month handling
Overlapping Authorizations Allocation per Payer Rules
Given multiple active authorizations overlap for the same patient/discipline/timeframe and an allocation strategy is configured (e.g., oldest-first, highest-remaining-first) When visits are counted Then each unit is allocated to exactly one authorization according to the strategy, with no double-counting And when an authorization exhausts units, overflow allocation proceeds to the next eligible authorization And retroactive add/remove/change of an authorization triggers deterministic reallocation and emits correction events for all impacted counters
Inclusion/Exclusion by Status and Type with Partial-Unit Rounding
Given a configurable inclusion map for visitStatus and visitType and unitLength=15 minutes with roundingMode=ceil When a qualifying visit has 22 minutes documented Then the engine counts 2 units for that visit And canceled or no-show visits are excluded unless explicitly included by configuration And visits spanning multiple windows are split and units allocated to each window based on time-in-window And configured min/max units per visit are enforced, if present
Backdated Documentation and Retroactive Payer/Authorization Changes
Given a visit is documented or edited with a past service date, or a payer/authorization attribute change is made that affects historical windows When the change is saved Then the engine recomputes all affected windows/authorizations, updates cache, and emits correction events within 1 second (p95 2 seconds) And an immutable audit record is captured with before/after counts, reason, actor, effectiveDate, and correlationId And counters never become negative; any inconsistency produces a reconciliation error and error event
Multi-Discipline Counters and Aggregations
Given visits across multiple disciplines (e.g., RN, PT, OT) and applicable discipline mappings per payer When utilization is computed Then per-discipline counters and an overall total are produced for each authorization window And the total equals the sum of its discipline counters for the same scope And events expose a discipline dimension (specific or all) to support filtered UI views
Compliance Ticker and Risk Color Derivation
Given remainingUnits R, daysLeft D to window end in the authorization timeZone, and configured risk thresholds/target cadence When schedule changes alter R or D Then daysLeft is computed based on configuration (inclusive/exclusive end date, weekend/holiday handling) And riskColor (green/amber/red) is derived from thresholds and included in event payloads And ticker and riskColor update immediately when counters or schedule change
Multi-Discipline & Payer Rule Support
"As a scheduler, I want to view and filter quotas by discipline and payer so that I can allocate visits correctly and comply with contract rules."
Description

Adds configuration and logic to compute quotas by discipline (e.g., PT, OT, ST, RN) and combined, honoring payer-specific aggregation, unit accounting (visits vs. minutes/units), and discipline inclusion/exclusion rules. Handles multiple concurrent authorizations, cross-coverage scenarios, and discipline mapping from agency codes to standardized disciplines. Provides UI filters and toggles to view per-discipline or rolled-up quotas and ensures parity between displayed counts and billing rules. Ships with editable payer templates and validation to prevent misconfigured rules.
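The concurrent-authorization handling described above — allocate each unit to exactly one authorization, defaulting to earliest-expiring first, and overflow to the next eligible authorization — might look like this (illustrative Python under those assumptions; the dict shape is hypothetical):

```python
from datetime import date

def allocate_units(units: int, auths: list[dict]) -> list[tuple[str, int]]:
    """Allocate visit units across overlapping authorizations.

    Earliest-expiring authorization is drawn down first; when it exhausts,
    allocation overflows to the next. Each auth dict carries
    {"id", "expires": date, "remaining": int}. No unit is double-counted.
    """
    allocations = []
    for auth in sorted(auths, key=lambda a: a["expires"]):
        if units == 0:
            break
        take = min(units, auth["remaining"])
        if take > 0:
            auth["remaining"] -= take
            units -= take
            allocations.append((auth["id"], take))
    if units > 0:
        # no eligible authorization left: surface as overage risk
        allocations.append(("UNALLOCATED", units))
    return allocations
```

A swapped-in strategy (e.g., highest-remaining-first) would only change the sort key, which is why the strategy is worth keeping configurable rather than hard-coded.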

Acceptance Criteria
Discipline Mapping from Agency Codes
Given a maintained mapping table from agency codes to standardized disciplines (PT/OT/ST/RN), when a visit is created, imported, or edited, then the visit is tagged with the correct standardized discipline according to the current mapping. Given a visit contains an unmapped or ambiguous agency code, when quotas are computed, then the visit is excluded from quota totals, an error badge "Unmapped discipline" is shown on the visit and Quota Meter, and an admin task is generated to complete the mapping. Given a mapping entry is updated, when the update is saved, then affected quota totals recalculate and the Quota Meter reflects the new counts within 5 seconds.
Payer Rule Templates: Inclusion, Aggregation, and Validation
Given a payer template is created with required fields (aggregation window, unit type, included disciplines, cross-coverage rules, rounding rule), when the template is saved, then all required fields are validated and the template saves only if validation passes. Given a payer template is missing a required field or contains conflicting rules (e.g., visits and minutes both selected), when the user attempts to save, then the save is blocked and field-level errors identify the conflicts. Given a payer template is assigned to an authorization, when quotas are computed, then included disciplines are counted and excluded disciplines are ignored exactly per the template, and aggregation is performed by the configured window (week/month/episode).
Unit Accounting: Visits vs Minutes/Units
Given an authorization with unit type "visits" and quantity N, when visits are completed, then used increases by 1 per qualifying visit and remaining = N - used. Given an authorization with unit type "minutes" or "15-min units" and a configured rounding rule, when a visit with documented duration is completed, then used is computed by converting duration according to the rounding rule in the payer template and remaining reflects this conversion. Given a visit spans multiple disciplines with documented minute allocations, when quotas are computed, then minutes/units are apportioned to each discipline per the documented split; if allocation is missing, the system applies the template default allocation rule and flags the visit for review.
Concurrent Authorizations Allocation
Given multiple overlapping authorizations exist for the same client and discipline, when a visit is counted, then units are allocated according to the configured strategy (default: earliest-expiring first), and no unit is double-counted across authorizations. Given one authorization is exhausted, when subsequent visits occur within the overlap, then usage is allocated to the next eligible authorization automatically. Given overlapping authorizations from different payers, when quotas are computed, then each payer’s aggregation and unit rules are applied independently without cross-payer aggregation.
Cross-Discipline Coverage and Roll-Up Totals
Given a payer template allows cross-coverage (e.g., RN covering PT) with a defined cap per window, when a covering visit occurs, then the units decrement the covered discipline’s quota up to the cap and any excess is flagged as overage risk. Given a payer template prohibits cross-coverage, when a visit by another discipline occurs, then it does not decrement the protected discipline’s quota and a warning indicates a billing mismatch if configured. Given per-discipline quotas are computed, when the user views the combined roll-up, then combined used equals the net sum after cross-coverage transformations, and combined remaining equals combined authorized minus combined used.
UI Filters and Toggles for Per-Discipline and Combined Views
Given the user selects specific disciplines in the Quota Meter filter, when the selection is applied, then only the selected disciplines’ meters are shown with counts matching backend calculations exactly. Given the user toggles between Per-Discipline and Combined views, when the toggle is changed, then the view updates immediately and displayed totals remain numerically consistent between views. Given the user changes the schedule (add, move, cancel a visit), when the change is saved, then the Quota Meter updates the affected counts in the current view within 2 seconds.
Parity Between Quota Meter and Billing Rules
Given a billing export is generated for a date range and payer, when comparing totals, then the billable units/visits per discipline and combined exactly match the Quota Meter used counts for the same scope (difference = 0). Given a discrepancy is detected during automated parity checks, when the check runs nightly, then a discrepancy report lists the affected visits, applied rules, and differences, and the status is set to "Needs Review". Given unit tests for representative payer scenarios (visits, minutes, rounding, cross-coverage), when the test suite runs, then all pass 100% before release.
Risk Indicators & Compliance Ticker
"As a caregiver, I want clear color cues and a days-left indicator so that I can tell at a glance whether my visits keep the patient compliant."
Description

Delivers always-on visual indicators that reflect utilization risk using accessible color cues (green/amber/red/grey) and a dynamic "days left to stay compliant" ticker per authorization window. Thresholds are configurable (e.g., percentage remaining, projected cadence shortfall), and tooltips explain the current state and next recommended action. Indicators update live as the schedule changes, support offline-read with background sync, and respect locale/time zone settings. Designed for mobile-first responsiveness and WCAG-compliant contrast.

Acceptance Criteria
Live Recalculation on Reschedule
Given an authorization window with an existing visit quota and scheduled visits When a user adds, cancels, or reschedules a visit within that window and saves the change Then the risk indicator color and used/remaining counts update within 2 seconds of save And the "days left to stay compliant" ticker recalculates to reflect the new end-date proximity and cadence And the updated state is reflected consistently in list, calendar, and patient profile views without a page refresh And if the server rejects the change, the indicator reverts to the last confirmed state and an error message is shown
Configurable Risk Thresholds Persist and Apply
Given default risk thresholds are set (e.g., Green ≥ 40% remaining, Amber 15–39%, Red < 15%, Grey = no/expired auth) When an admin updates threshold values (percent remaining and projected cadence shortfall rules) and saves Then the new thresholds persist and are retrievable via configuration API And indicators across the app apply the new thresholds within 60 seconds without requiring user sign-out And a visit plan with projected cadence below required pace triggers at least Amber regardless of percent remaining And test data at boundary values (exactly 40%, 39%, 15%) map to the correct colors per configuration
Days-Left Ticker Accuracy Across Time Zones
Given an authorization window with a start and end datetime in the payer/agency time zone When a user in any locale/time zone views the ticker Then "days left" equals the count of whole days until the window end at 23:59 in the payer/agency time zone And the value rolls over at local midnight of the payer/agency time zone, not the user's device time And date/time formatting respects the user's locale settings without altering the underlying calculation And daylight saving transitions do not skip or double-count days
Tooltip Explanations with Next Action Guidance
Given a visible risk indicator and ticker When the user hovers (web) or long-presses (mobile) the indicator Then a tooltip appears within 300 ms with text that includes: used vs. total visits, percent remaining, projected cadence vs. required, days left, and a concise recommended next action And the tooltip is dismissible by tap/click/escape and does not obscure primary actions And the tooltip content is accessible to screen readers via ARIA associations and is keyboard reachable And the tooltip text is localized per the user’s language/locale
Offline Read with Background Sync on Reconnect
Given the device is offline with previously synced utilization data When the user opens a screen with indicators Then the last-synced counts, color, and days-left display with an "offline" or "last updated" timestamp And user attempts to change the schedule shows an optimistic update flagged as offline-pending When connectivity is restored Then background sync runs and reconciles pending changes within 10 seconds And indicators update to the server-confirmed state; any conflicts are resolved by server rules and the UI reflects the final outcome And if sync fails, the user sees a non-blocking alert and the last confirmed state remains visible
Mobile Responsiveness and WCAG Contrast Compliance
Given a device width between 320 px and 768 px When viewing the indicators and ticker Then the layout fits without horizontal scroll and key targets are at least 44x44 px And information is not conveyed by color alone; an icon and/or label accompanies each color state And text and icon contrast ratios meet WCAG AA (≥ 4.5:1 for text, ≥ 3:1 for large text/icons) And the component remains usable with system font scaling up to 200% without overlap or truncation of critical information
Multi-Discipline Risk View Toggle and Aggregation
Given a patient with multiple authorized disciplines (e.g., PT, OT, ST) When the user switches the discipline filter between a specific discipline and "All" Then the indicator and ticker update to reflect only the selected discipline or the aggregate across disciplines And in the aggregate view, the displayed color reflects the highest risk level among included disciplines And disciplines without a current authorization show Grey with an explanatory tooltip And the selected filter persists for the session and per-user preference
Reschedule Impact Preview
"As a coordinator, I want to preview the quota impact before confirming reschedules so that I can avoid creating non-compliant schedules."
Description

Provides an interactive preview that shows how proposed scheduling changes will affect quotas and compliance before confirmation. When dragging, editing, or bulk-rescheduling visits, the UI displays projected remaining counts, risk state changes, and warnings about potential denials or underutilization. Includes low-latency calculations via pre-fetched authorization context, supports undo, and suggests compliant alternatives when conflicts are detected.

Acceptance Criteria
Instant Preview on Single-Visit Drag
Given a scheduled visit with pre-fetched authorization context is visible in the calendar When the user drags the visit to a different date/time (including across week/month boundaries) Then the preview renders projected used and remaining counts for all affected authorization windows (week, month, episode) within 300 ms at the 95th percentile And the Quota Meter risk state recomputes for the projection and displays the correct state badge and color per the risk rules And the days-left-to-stay-compliant ticker updates to reflect the projected schedule And no network calls are made to compute the initial preview
Bulk Reschedule Impact Summary
Given the user selects 10–100 visits across one patient and one or more disciplines within the same episode When the user applies a bulk shift or pattern and opens the preview Then the preview displays per-discipline and aggregate used/remaining counts by each affected window (week, month, episode) And the preview lists the count of visits projected to exceed authorization caps and the count projected to fall short of utilization targets per window And the preview renders within 800 ms at the 95th percentile for up to 100 visits And after confirming, the live Quota Meter values match the preview values exactly
Conflict Warnings and Color Cues
Given the projected schedule would cause an over-cap in any authorization window When the preview is shown Then a denial-risk warning with a red indicator and the impacted window names is displayed And for underutilization risk flagged by the Quota Meter engine, an amber warning is displayed listing impacted windows And each warning includes an icon, color cue, and an accessible text label describing the risk and impacted windows And warnings appear within 300 ms of the user action that triggered the preview
Compliant Alternative Suggestions
Given the preview detects a conflict (over-cap or underutilization risk) When the user opens the Alternatives panel Then the system presents at least 2 compliant alternative time slots within the relevant authorization window if available And alternatives are sorted by earliest compliant date/time And each alternative displays the projected used/remaining counts if selected And selecting an alternative updates the preview within 300 ms at the 95th percentile and clears the conflict warning if resolved
Undo After Apply
Given a set of previewed changes has been confirmed and applied to the schedule When the user clicks Undo within 60 seconds of the commit Then the schedule reverts to the exact prior state And the Quota Meter used/remaining counts and risk states revert to their pre-commit values within 500 ms at the 95th percentile And the action is logged with timestamp, user, and change summary
Multi-Discipline Counters Integrity
Given a patient has multiple active authorizations (e.g., PT and OT) with distinct windows When the user previews moving a visit belonging to one discipline Then only that discipline’s counters change in the preview; other disciplines’ counters remain unchanged And aggregate totals reflect the sum of per-discipline counters And cross-window moves adjust counts in both source and destination windows accordingly
Audit Trail & Exportable Compliance Report
"As an agency owner, I want an exportable audit report of quota usage and changes so that I can satisfy payer audits and reduce denials."
Description

Captures a complete, immutable audit trail of quota-affecting events (schedule edits, documentation status changes, authorization updates), including actor, timestamp, source, and old/new values. Generates one-click, audit-ready PDF/CSV reports by patient, payer, authorization window, or timeframe, summarizing utilization, exceptions, and corrective actions. Integrates with CarePulse reporting, supports branding and electronic signatures, and redacts PHI according to export role and purpose.

Acceptance Criteria
Immutable Audit Event for Quota-Affecting Changes
Given a permitted user changes a quota-affecting item (visit schedule, documentation status, or authorization limits) via web or mobile When the change is saved Then the system appends exactly one immutable audit record containing: event_id, entity_type, entity_id, event_type, actor_id, actor_role, source (web/mobile/api/iot/import), timestamp_utc (ISO 8601), old_value, new_value, reason (optional), correlation_id, authorization_window_id And the record is write-once (no update/delete APIs), hash-chained to the previous record, and attempts to modify/delete are blocked and logged as a security event And events are ordered by timestamp_utc and a monotonic sequence number per entity
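The hash-chaining requirement above can be illustrated with a short sketch: each record stores the SHA-256 of its predecessor, so altering any historical record invalidates every hash after it. Function names and the in-memory list are illustrative; real storage, transport, and the full field set are out of scope:

```python
import hashlib
import json


def append_event(chain: list, event: dict) -> dict:
    """Append a write-once audit record, hash-chained to its predecessor.

    Adds prev_hash and a monotonic per-entity sequence number, then
    hashes the canonical JSON of the record body.
    """
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    record = dict(event, prev_hash=prev_hash, sequence=len(chain))
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record


def verify_chain(chain: list) -> bool:
    """Recompute every hash; returns False if any record was altered."""
    for i, record in enumerate(chain):
        expected_prev = chain[i - 1]["record_hash"] if i else "0" * 64
        body = {k: v for k, v in record.items() if k != "record_hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if (record["prev_hash"] != expected_prev or
                record["record_hash"] != hashlib.sha256(payload).hexdigest()):
            return False
    return True
```

Because each record's hash covers its predecessor's hash, tamper-evidence is transitive: a single modified field anywhere breaks verification from that point forward.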
Audit Event Latency and Offline Sync
Given online connectivity When a quota-affecting change is saved Then the audit event is persisted on the client within 2 seconds and synced to the server within 10 seconds at p95 And if offline, the event is queued locally with device timestamp and synced on reconnection; the server records both device and server receipt times And the Quota Meter counters and audit log reflect the change within the same sync window
Filterable Audit Trail View by Authorization Window
Given an operations manager opens the Audit Trail When they filter by patient, payer, authorization window, discipline, date range, event type, actor, source, and exception flag Then the results include only matching events with pagination and column sorting, returning the first page within 3 seconds for up to 10,000 events And the user can select any event to view full old/new values and links to related corrective actions
One-Click PDF/CSV Compliance Export with Branding and e-Signature
Given the user selects Export and chooses PDF or CSV and a scope (patient, payer, authorization window, or timeframe) When they click Export Then the file is generated within 15 seconds for up to 50,000 events and includes: branding (logo/name/address), report title, generation timestamp, applied filters, page numbers, electronic signature block (PDF) or signature metadata (CSV), and a SHA-256 checksum of the exported audit slice And each row includes: event_id, event_type, actor_id, actor_role, source, timestamp_utc, entity identifiers, old_value, new_value, exception flag, corrective_action_id And a summary section shows utilization (used, remaining, percent), exceptions count by type, and a list of corrective actions with timestamps and actors
Role- and Purpose-Based PHI Redaction on Export
Given the user’s role and selected purpose-of-use (e.g., Payer Audit, Internal QA) When viewing or exporting the audit trail Then PHI fields (patient name, DOB, MRN, address, phone, GPS, free-text notes, voice transcript) are redacted/tokenized per policy for that role/purpose, and non-PHI identifiers remain visible And a redaction banner and audit metadata state the applied policy, purpose, and requester And attempts by unauthorized roles to include PHI are blocked with an error and logged
Save and Schedule Reports via CarePulse Reporting Integration
Given the user saves an export configuration When they choose Save to Reporting and optionally schedule delivery Then the configuration appears in CarePulse Reporting with identical filters, RBAC, and redaction policy And scheduled deliveries (email/SFTP) are sent at the configured cadence with delivery logs and failures recorded in the audit trail And recipients must authenticate or use time-bound signed links; access outside policy is blocked and logged
Role-based Visibility & Data Privacy
"As a compliance officer, I want quota data access scoped by role so that sensitive information is protected while staff still get the insights they need."
Description

Implements RBAC to control access to quota data: caregivers see only their caseload and minimal PHI; coordinators and managers can view/edit broader scopes; compliance officers can access audit views. Enforces API scopes, field-level masking, and least-privilege defaults. Logs access events, encrypts data in transit and at rest, and aligns with HIPAA and applicable regional privacy requirements without exposing unnecessary patient details in counters or indicators.

Acceptance Criteria
Caregiver Caseload-Only Quota Visibility
Given I am authenticated as a caregiver with assigned patients A and B When I open the Quota Meter Then I only see quota counters for patients A and B And attempting to access patient C via UI deep link or API returns 403 Forbidden and is logged as denied And patient identifiers are masked as FirstInitial LastInitial • MRN last-4 (e.g., J D • ****1234) And no DOB, address, phone, diagnoses, or notes are displayed in counters, color cues, or tickers
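The masking format above (FirstInitial LastInitial • MRN last-4) is simple enough to pin down with a tiny helper; this is a hypothetical sketch of the display rule, not the actual field mapping:

```python
def mask_patient(first: str, last: str, mrn: str) -> str:
    """Mask a patient identifier to initials plus MRN last-4,
    per the caregiver-visibility rule (e.g. "J D • ****1234").

    Only the masked string should ever reach caregiver-facing
    counters, color cues, or tickers.
    """
    initials = f"{first[:1].upper()} {last[:1].upper()}".strip()
    last4 = mrn[-4:] if len(mrn) >= 4 else mrn
    return f"{initials} • ****{last4}"
```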
Coordinator and Manager Scoped Visibility With Edit Rights
Given I am a coordinator assigned to Branch X When I view the multi-discipline Quota Meter Then I can view and edit quotas and schedules for caregivers and patients in Branch X only And I can filter by discipline (RN, PT, OT, ST, HHA) without seeing data outside Branch X And edits update counters within 2 seconds and are logged with actor, scope, and before/after values And attempting to view or edit entities outside Branch X returns 403 and is logged
Compliance Officer Read-Only Audit View With PHI Minimization
Given I am a compliance officer When I open the Audit View Then all quota data is read-only and exportable to CSV And PHI is minimized to masked identifiers and authorization window metadata only And exporting requires selecting a purpose-of-use and adds a watermark with user, timestamp, and purpose And export actions are logged with checksum and retained for 6 years
API Scope Enforcement for Quota Endpoints
Given a service account token with scope quota.read When it calls GET /quota Then the response includes only counters within the token's org scope and excludes PHI fields And calls to POST or PATCH endpoints return 403 without quota.write And with quota.write, modifications are allowed only within scope and responses never include PHI unless phi.view is present And requests without a valid token return 401 And exceeding 100 requests per minute returns 429 with a Retry-After header And all API calls are logged with request ID, scope, and outcome
Field-Level Masking and Step-Up Unmasking
Given I have the phi.unmask permission When I click Unmask on a patient row Then I must complete MFA and provide an audit reason before PHI is revealed And unmasked fields auto-remask after 5 minutes of inactivity or on navigation away And unmask events are logged with patient reference, user, role, reason, timestamp, and duration And users without phi.unmask never see the Unmask control
Encryption In Transit and At Rest
Given any client connects to the API When the connection uses plain HTTP or a TLS version below 1.2 Then plain-HTTP requests are redirected to HTTPS, sub-1.2 TLS connections are refused, and no payload is processed in either case And all data at rest in primary storage and backups is encrypted with AES-256 via managed KMS with 90-day key rotation And HSTS with max-age of at least 180 days is enabled on web endpoints And certificates are valid and not expired, and weak ciphers are disabled
Least-Privilege Defaults and Immutable Access Logging
Given a new user is created When no role is explicitly assigned Then the user receives the Caregiver role with minimal scopes and no PHI unmask permission by default And revoking a role invalidates sessions and API tokens within 60 seconds And every access to quota data creates an immutable log entry with user, role, action, scope, patient hash, timestamp, IP, and result And audit logs are write-once, tamper-evident, and retained for 6 years
Configuration & Threshold Management
"As an admin, I want to configure thresholds and payer rules so that the Quota Meter reflects our agency’s contracts and operational policies."
Description

Provides an admin console to configure risk thresholds, color mappings, payer rule templates, discipline mappings, cadence definitions, grace periods, and holiday exceptions. Supports versioned configurations with preview and rollback, import/export for multi-agency setups, and validation to catch conflicting rules. Changes propagate safely with feature flags and background re-computation where needed to keep counters accurate.

Acceptance Criteria
Configure multi‑level risk thresholds and color cues
Given an Admin is in the Configuration console When they define thresholds at 60%, 80%, and 100% utilization mapped to Yellow, Orange, and Red for Weekly windows and publish Then Quota Meters for affected payer templates display the corresponding colors within 2 minutes without requiring app reload on mobile or web And patients crossing a threshold due to rescheduling reflect the new color in real time (≤15 seconds after the schedule change is saved)
Prevent saving conflicting thresholds and rules
Given a draft configuration contains an 80% utilization threshold for Payer A Weekly window When the Admin attempts to add another threshold that overlaps (e.g., 75%–85%) or duplicates color mapping for the same window Then the system blocks publish and saving with a validation error listing the conflicting rule IDs and fields And the API responds 422 with machine‑readable error codes and paths to invalid properties And no partial changes are persisted
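The overlap check behind this validation rule can be sketched by sorting ranges and comparing neighbors. The tuple shape `(rule_id, low, high)` with inclusive bounds is an assumption for illustration, not the persisted schema:

```python
def find_overlaps(thresholds):
    """Return pairs of threshold rule IDs whose percent ranges overlap
    within the same payer/window, for a 422-style validation error.

    thresholds: iterable of (rule_id, low, high) with inclusive bounds.
    Sorting by the lower bound means only adjacent pairs can conflict.
    """
    conflicts = []
    rules = sorted(thresholds, key=lambda r: r[1])
    for a, b in zip(rules, rules[1:]):
        if b[1] <= a[2]:  # next range starts before this one ends
            conflicts.append((a[0], b[0]))
    return conflicts
```

The 80% rule in the example would surface as a conflict against a 75–85% addition, while back-to-back ranges (0–59 and 60–79) pass.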
Define payer template with cadence, grace period, holidays, and discipline mappings
Given the Admin creates payer template "BlueShield Std" as Monthly window with cadence 8 visits/month, grace period 2 days, holidays set to US Federal, discipline PT When the template is applied to Agency X Then remaining visits and days‑left calculations skip configured holidays and include grace days for PT only And multi‑discipline views show PT using the template while other disciplines fall back to their assigned templates or defaults And changes to the holiday calendar recalculate days‑left within 5 minutes
Versioning with preview, staged publish, and rollback
Given a Published v1 configuration exists When the Admin clones to Draft v2, edits thresholds, and clicks Preview Then the system displays a simulated impact report for the last 14 days showing utilization deltas and color changes across at least 50 sample patients When the Admin publishes v2 behind feature flag "quota_config_v2" to a 10% agency cohort Then only flagged agencies receive v2 while others remain on v1 When the Admin performs a rollback Then v1 becomes active for all agencies within 5 minutes and v2 is retained with status Rolled Back
Import/export with validation and dry‑run for multi‑agency setup
Given the Admin exports Agency A configuration Then a signed JSON package is produced including versions, payer templates, thresholds, discipline mappings, cadence/holiday/grace settings, and checksums When the Admin performs a dry‑run import into Agency B Then the system reports the exact changes and conflicts by identifier without persisting any changes When the Admin confirms import Then non‑conflicting items are imported into a new Draft, conflicts are skipped with an error report, and no existing Published configuration is altered until publish
Feature‑flagged propagation and background re‑computation SLA
Given a published configuration change that affects payer templates When applied to an agency gated by a feature flag Then background jobs recompute Quota Meter counters for all active patients/episodes with end‑to‑end latency ≤15 minutes And displayed counters remain available during recomputation with maximum data staleness ≤15 minutes And an Admin can view recomputation progress with percent complete and ETA
Admin‑only access to configuration console and APIs
Given users with roles Admin, Scheduler, and Caregiver When accessing the Configuration & Threshold Management UI or APIs Then only Admins can view and edit; non‑Admins receive 403 on write API calls and 404 (route hidden) for UI routes And all configuration changes are attributed to the acting Admin and timestamped in the version history

Slot Assist

One‑tap recommendations for the next compliant timeslot that honor plan‑of‑care rules, caregiver credentials, client preferences, and live route realities. Presents the top three slots with a short rationale (meets 2x/week spacing, within auth window, minimal detour). Schedulers place visits faster and keep teams on‑time and in‑policy.

Requirements

Plan-of-Care Compliance Rules Engine
"As a scheduler, I want Slot Assist to only recommend time slots that satisfy each client’s plan-of-care and payer authorization rules so that scheduled visits are compliant and billable."
Description

Implements a centralized rules engine that evaluates candidate timeslots against each client’s plan-of-care (frequency per period, spacing between visits, time-of-day constraints), payer authorization windows (start/end dates, service units, daily/weekly caps), visit duration, and service-type rules. Integrates with CarePulse’s plan-of-care records and payer authorization data to produce pass/fail results with machine-readable reason codes. Supports hard vs. soft rules, multiple payers per client, and effective-dated rule changes. Exposes a scoring API consumed by Slot Assist to filter and rank only billable, policy-compliant slots, ensuring recommended times can be submitted and reimbursed without exceptions.

Acceptance Criteria
Plan‑of‑Care Frequency, Spacing, and Time‑of‑Day Validation
Given a client with POC frequency 2 visits per week, minimum spacing 2 full calendar days between starts, allowed time window 08:00–18:00 local, and an existing visit on Mon 2025-09-08 09:00–10:00 When evaluating candidate slots Tue 2025-09-09 10:00–11:00, Wed 2025-09-10 10:00–11:00, and Fri 2025-09-12 17:00–18:00 Then Tue is_compliant=false with reason_codes=["POC_SPACING_VIOLATION"] And Wed is_compliant=true with reason_codes=["POC_FREQ_MET"] And Fri is_compliant=true with reason_codes=["POC_FREQ_MET"] And any candidate with start outside 08:00–18:00 is_compliant=false with reason_codes=["POC_TOD_VIOLATION"]
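The spacing rule in this scenario reduces to a calendar-day distance check against every existing visit start. A minimal sketch, assuming whole-date inputs (time-of-day and frequency checks would layer on separately):

```python
from datetime import date


def check_spacing(existing_starts, candidate_start, min_gap_days=2):
    """True if the candidate visit start is at least min_gap_days full
    calendar days from every existing visit start.

    Mirrors the POC_SPACING_VIOLATION example: with a Monday visit and
    a 2-day minimum gap, Tuesday fails and Wednesday passes.
    """
    return all(abs((candidate_start - s).days) >= min_gap_days
               for s in existing_starts)
```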
Payer Authorization Window and Unit Caps Enforcement
Given payer authorization A for service S valid 2025-09-01 to 2025-09-30, weekly cap 16 units, daily cap 2 units, remaining_this_week 4 units, unit=30 minutes When evaluating a 120-minute slot on 2025-09-28 Then is_compliant=false with reason_codes=["AUTH_DAILY_CAP_EXCEEDED"] and selected_payer_id is null When evaluating a 60-minute slot on 2025-09-28 Then is_compliant=true with reason_codes=["AUTH_WITHIN_CAPS"] and selected_payer_id="A" When evaluating a 60-minute slot on 2025-10-01 Then is_compliant=false with reason_codes=["AUTH_WINDOW_OUTSIDE"] and selected_payer_id is null
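The cap arithmetic here is duration-to-units conversion followed by two comparisons. A sketch under the example's assumptions (30-minute units, daily cap of 2); note `AUTH_WEEKLY_CAP_EXCEEDED` is a hypothetical code added for symmetry and does not appear in the scenarios above:

```python
import math


def units_for_slot(duration_minutes, unit_minutes=30):
    """Convert a visit duration to billable service units, rounded up."""
    return math.ceil(duration_minutes / unit_minutes)


def check_caps(duration_minutes, remaining_week, daily_cap,
               used_today=0, unit_minutes=30):
    """Return (compliant, reason_code) against daily/weekly unit caps.

    With 30-minute units and a daily cap of 2, a 120-minute slot
    (4 units) is rejected while a 60-minute slot (2 units) passes.
    """
    units = units_for_slot(duration_minutes, unit_minutes)
    if used_today + units > daily_cap:
        return False, "AUTH_DAILY_CAP_EXCEEDED"
    if units > remaining_week:
        return False, "AUTH_WEEKLY_CAP_EXCEEDED"  # hypothetical code
    return True, "AUTH_WITHIN_CAPS"
```

A real engine would also gate on the authorization window dates before reaching the unit math (AUTH_WINDOW_OUTSIDE in the scenario above).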
Hard vs. Soft Rule Evaluation and Scoring Impact
Given a hard rule "caregiver license expired" and a soft rule "client prefers afternoons 12:00–18:00" and a candidate slot at 07:00 with caregiver license expired When evaluated Then is_compliant=false, hard_blocked=true, score=0, reason_codes includes ["LICENSE_EXPIRED_HARD"] Given caregiver license valid and the same slot at 07:00 outside preference When evaluated Then is_compliant=true, hard_blocked=false, reason_codes includes ["CLIENT_PREFERENCE_TIME_SOFT"], and score decreases by exactly 10 points compared to the same slot evaluated with the preference rule disabled
Multiple Payers Selection and Billable Outcome
Given client has two active payers A and B for service S on 2025-09-20; payer A remaining_this_week=0 units; payer B remaining_this_week=4 units When evaluating a 60-minute slot Then is_compliant=true, selected_payer_id="B", reason_codes includes ["PAYER_SELECTED_B","AUTH_UNITS_AVAILABLE"], and service_units=2 Given both payer A and B have remaining_this_week=0 units When evaluating a 60-minute slot Then is_compliant=false, selected_payer_id is null, reason_codes includes ["ALL_PAYERS_NONBILLABLE"]
Effective‑Dated Rule Changes Applied by Slot Date
Given POC minimum spacing changes from 2 days to 1 day effective 2025-09-15 and a prior visit on Mon 2025-09-08 09:00 When evaluating Tue 2025-09-09 09:00 Then is_compliant=false with reason_codes=["POC_SPACING_VIOLATION"] and effective_rule_version references the pre-2025-09-15 rule set When evaluating Tue 2025-09-16 09:00 Then is_compliant=true with reason_codes=["POC_FREQ_MET"] and effective_rule_version references the 2025-09-15 rule set or later
Scoring API Response, Schema, and Ranking Behavior
Given a POST /rules/score request with up to 50 candidate slots and sort=rank and limit=3 When processed Then P95 latency <= 300 ms and HTTP 200 with Content-Type application/json And each item in response contains slot_id, is_compliant (boolean), hard_blocked (boolean), score (integer 0–100), selected_payer_id (nullable string), reason_codes (array of strings), service_units (integer), effective_rule_version (string) And items are sorted by score desc then by detour_minutes asc when sort=rank is requested And only is_compliant=true items are included when filter=compliant is requested And only top 3 items are returned when limit=3 is provided When request payload is malformed Then HTTP 400 is returned with error_code and message non-empty
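The ranking behavior (score descending, then detour ascending, compliant-only filter, top-N limit) can be expressed as one sort with a composite key. Result dicts here carry only the fields the sort needs; the full response schema is listed above:

```python
def rank_slots(results, limit=3, compliant_only=True):
    """Sort scored slots by score desc, then detour_minutes asc,
    and return the top `limit` items.

    Each result is a dict with at least: slot_id, is_compliant,
    score, detour_minutes. Sketch of the sort=rank behavior.
    """
    items = ([r for r in results if r["is_compliant"]]
             if compliant_only else list(results))
    items.sort(key=lambda r: (-r["score"], r["detour_minutes"]))
    return items[:limit]
```

Negating the score in the key gives descending order on score while keeping detour ascending in a single stable sort.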
Service‑Type Coverage and Visit Duration Compliance
Given POC allows service_types=["HHA"] and payer covers service_types=["HHA"], min_duration=45 minutes, max_duration=120 minutes When evaluating a 30-minute HHA slot Then is_compliant=false with reason_codes=["DURATION_TOO_SHORT"] When evaluating a 150-minute HHA slot Then is_compliant=false with reason_codes=["DURATION_TOO_LONG"] When evaluating a 60-minute RN slot Then is_compliant=false with reason_codes=["SERVICE_TYPE_NOT_IN_POC"] When evaluating a 60-minute HHA slot Then is_compliant=true with reason_codes=["SERVICE_TYPE_OK","DURATION_OK"]
Credential & Eligibility Matching
"As an operations manager, I want recommendations to consider caregiver credentials and client preferences so that assignments are safe, legal, and aligned with the care plan."
Description

Validates caregiver eligibility for a proposed slot by checking active licenses/certifications, skill tags, background clearance, territory coverage, language, and union/contract constraints against the client’s service and preferences (e.g., continuity of care, gender/language preferences, do-not-assign lists). Pulls real-time caregiver availability and maximum daily/weekly hours to avoid overages. Produces an eligibility verdict with reasons (e.g., "license LPN required", "client prefers female caregiver"). Feeds Slot Assist with only caregiver-client-slot combinations that are safe, legal, and preference-aligned.

Acceptance Criteria
License and Certification Validation for Service Type
Given a proposed client service with defined required licenses/certifications and a caregiver profile And the caregiver has license/certification records with jurisdictions and expiration dates When the system validates eligibility for the slot date/time Then the caregiver is marked Eligible only if all required licenses/certifications are present, active, and valid for the service jurisdiction on the slot date And if any required item is missing, expired, or not valid for the jurisdiction, mark Ineligible and include reasons listing each missing/expired requirement with identifiers And the verdict payload includes the jurisdiction used for validation and the service code evaluated
Skill Tags and Language Matching
Given required skill tags for the service and the client's preferred/required language And the caregiver profile includes skill tags (with levels where applicable) and spoken languages When the system validates eligibility for the slot Then mark Eligible only if all required skill tags are present (and at or above minimum level where defined) And mark Eligible only if the caregiver speaks the client's required language; otherwise mark Ineligible with reason "language mismatch" And if any required skill tag is missing or below level, mark Ineligible and list each missing/insufficient tag in reasons
Background Clearance and Union/Contract Compliance
Given client-required background clearances and applicable union/contract rules for the caregiver role And the caregiver has clearance records with statuses and expiration dates When validating the proposed slot Then mark Eligible only if all required clearances are active on the slot date And mark Eligible only if union/contract rules allow assignment (classification allowed, shift/tenure constraints satisfied) And if any clearance is missing/expired or a union/contract rule is violated, mark Ineligible and include specific rule/clearance identifiers in reasons
Territory Coverage Compliance
Given the client's service location and the caregiver's assigned territories and coverage hours When validating the proposed slot time Then mark Eligible only if the client location is within the caregiver's assigned territory and inside the caregiver's territory coverage hours for that day And if outside territory or outside coverage hours, mark Ineligible and include reason "outside territory" or "outside coverage hours"
Real-Time Availability and Hour Caps
Given the caregiver's live availability, current scheduled visits, and configured maximum daily and weekly hours And a proposed slot with start/end time and planned duration When the system validates the slot Then mark Eligible only if the slot does not overlap existing commitments and does not cause daily or weekly totals to exceed maximums And if overlap exists, mark Ineligible with reason "overlaps existing visit" including conflicting visit ID(s) And if max hours would be exceeded, mark Ineligible with reason "exceeds daily max" or "exceeds weekly max" including computed totals before/after
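The overlap and hour-cap checks above amount to an interval intersection test plus two additions. A sketch with hypothetical input shapes (visit tuples and hour totals are assumptions for illustration):

```python
from datetime import datetime


def overlaps(a_start, a_end, b_start, b_end):
    """True if two visit intervals intersect (touching endpoints allowed)."""
    return a_start < b_end and b_start < a_end


def check_availability(existing, slot, daily_total_h, weekly_total_h,
                       max_daily_h, max_weekly_h):
    """Return (eligible, reasons) for the overlap and hour-cap rules.

    existing: list of (visit_id, start, end) datetimes.
    slot: (start, end) datetimes. Totals and caps are in hours.
    Reasons carry the conflicting visit IDs or the computed totals.
    """
    reasons = []
    slot_h = (slot[1] - slot[0]).total_seconds() / 3600
    conflicts = [vid for vid, s, e in existing if overlaps(s, e, *slot)]
    if conflicts:
        reasons.append(("overlaps existing visit", conflicts))
    if daily_total_h + slot_h > max_daily_h:
        reasons.append(("exceeds daily max", daily_total_h + slot_h))
    if weekly_total_h + slot_h > max_weekly_h:
        reasons.append(("exceeds weekly max", weekly_total_h + slot_h))
    return (not reasons), reasons
```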
Client Preferences and Restrictions Handling
Given client preferences for continuity of care, gender, and language and a do-not-assign list, each tagged as Required or Preferred When validating the caregiver-client-slot combination Then if a Required preference is not met, mark Ineligible and include the unmet required preference in reasons And if a Preferred preference is not met, keep Eligible but include a non-blocking reason tag indicating the unmet preference And always block any caregiver on the client's do-not-assign list with reason "do-not-assign"
Eligibility Verdict and Slot Assist Integration
Given all validations complete for a caregiver-client-slot combination When producing the verdict Then produce a verdict of Eligible or Ineligible with a machine-readable list of reason codes and human-readable summaries And send only Eligible combinations to Slot Assist; exclude Ineligible combinations while making their reasons available via diagnostics And complete eligibility evaluation within 1 second per combination at the 95th percentile under typical load
Route-Aware Detour & Lateness Scoring
"As a dispatcher, I want recommendations that minimize detours and late arrivals based on live routes so that caregivers stay on time and reduce travel stress."
Description

Calculates travel-time impact and lateness risk for inserting a visit into a caregiver’s live route by leveraging GPS location, current day schedule, visit durations, service location geocodes, and live traffic from the mapping provider. Computes incremental detour minutes, on-time probability, and buffer adherence while respecting parking/walk-in offsets and mandated rest/travel buffers. Returns a normalized score and key metrics for each candidate slot to help Slot Assist prioritize minimal-detour, on-time options.

Acceptance Criteria
Compute Incremental Detour Minutes vs Baseline Route
Given a caregiver’s live GPS point, today’s scheduled visits with start/end times and durations, a candidate visit with geocode and duration, and live traffic ETAs from the mapping provider at timestamp T When the candidate visit is inserted between Visit A and Visit B Then incremental_detour_minutes equals (total travel minutes with insertion − total travel minutes without insertion) including configured parking_walk_offset_minutes at all affected legs And incremental_detour_minutes is within ±2 minutes of the difference computed by the mapping provider’s route APIs for the two routes at timestamp T And incremental_detour_minutes is rounded to the nearest whole minute and is >= 0
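The core of the detour metric is the difference between routing via the new stop and the original direct leg, with parking/walk-in offsets added to the affected legs. A simplified single-insertion sketch (real leg times come from the mapping provider; the parameter names are assumptions):

```python
def incremental_detour_minutes(direct_leg, leg_to_new, leg_from_new,
                               parking_walk_offset=0):
    """Incremental detour from inserting a visit between two stops.

    direct_leg: travel minutes A -> B without the insertion.
    leg_to_new / leg_from_new: travel minutes A -> new and new -> B.
    parking_walk_offset: configured offset minutes at the new stop.
    Returned value is rounded to whole minutes and floored at 0,
    per the output contract.
    """
    with_insertion = leg_to_new + parking_walk_offset + leg_from_new
    return max(round(with_insertion - direct_leg), 0)
```

Increasing the offset by N minutes raises the result by N, matching the offset-sensitivity criterion further down.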
On-Time Probability Boundaries and Monotonicity
Given a candidate visit with a service window [window_start, window_end], preceding and following visits, mandated minimum travel/rest buffers, and live traffic ETAs When on_time_probability is computed for inserting the candidate Then on_time_probability is returned in [0.00, 1.00] with two-decimal precision And on_time_probability = 0.00 if the earliest feasible arrival (including travel, parking/walk offsets, and mandated buffers) is after window_end And on_time_probability does not increase when any travel or offset time is increased while all other inputs are held constant And on_time_probability increases (or remains the same) when the service window is widened while all other inputs are held constant
Buffer Adherence and Compliance Flagging
Given mandated_min_travel_buffer_minutes and mandated_min_rest_buffer_minutes When a candidate visit is placed between two scheduled visits Then buffer_minutes_remaining_before and buffer_minutes_remaining_after are calculated And buffer_adherence is true only if both remaining buffers are >= their mandated minimums And if buffer_adherence is false, the candidate is marked non_compliant=true and normalized_score=0
Parking/Walk-In Offsets Applied to Travel and Arrival
Given configured parking_walk_offset_minutes for origin and destination locations When computing travel and arrival times for a candidate insertion Then the offsets are added to the corresponding legs used to compute incremental_detour_minutes and on_time_probability And increasing either offset by N minutes increases incremental_detour_minutes by N (±1 minute due to rounding) and does not increase on_time_probability
Normalized Score and Metrics Output Contract
Given a computed candidate slot When the API returns results Then it includes: normalized_score (integer 0–100), incremental_detour_minutes (integer), on_time_probability (decimal with two decimals), buffer_adherence (boolean), buffer_minutes_remaining_before (integer), buffer_minutes_remaining_after (integer), will_be_late (boolean), lateness_risk_band (Low|Medium|High) And lateness_risk_band is assigned: High if on_time_probability < 0.40, Medium if 0.40–0.79, Low if >= 0.80 And normalized_score strictly decreases when incremental_detour_minutes increases while all other inputs are held constant And normalized_score strictly increases when on_time_probability increases while all other inputs are held constant And all duration values are in minutes and any timestamps returned are ISO 8601 with timezone
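The band assignment in the contract above is a straightforward threshold ladder; a minimal sketch using the stated cut points:

```python
def lateness_risk_band(on_time_probability: float) -> str:
    """Assign the lateness risk band from the output contract:
    High if p < 0.40, Medium if 0.40-0.79, Low if p >= 0.80.
    """
    if on_time_probability >= 0.80:
        return "Low"
    if on_time_probability >= 0.40:
        return "Medium"
    return "High"
```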
Live Recompute on GPS/Schedule Changes
Given a caregiver’s GPS update (movement > 100 meters) or an edit to today’s schedule affecting adjacent visits When either event occurs Then the system recomputes incremental_detour_minutes, on_time_probability, buffer_adherence, and normalized_score for affected candidate slots within 10 seconds And the computation uses a traffic data snapshot no older than 2 minutes at the time of recompute
Top-3 Recommendations with Rationale & One-Tap Placement
"As a scheduler, I want a simple list of the best three compliant slots with clear reasons so that I can schedule quickly and confidently."
Description

Provides a mobile-first UI component that displays the top three compliant timeslots, each with a concise rationale string (e.g., "meets 2x/week spacing; within auth window; +4 min detour"). Supports one-tap placement that atomically books the selected slot, updates the caregiver’s route, and creates the visit in CarePulse scheduling. Shows confidence indicators, visit duration, and arrival window. Designed for accessibility (WCAG AA), fast interaction, and small-screen usability. Integrates with conflict detection to prevent double-booking at confirmation time.

Acceptance Criteria
Display Top-3 Compliant Timeslots with Rationale
Given a visit request with defined plan-of-care rules, caregiver credentials, client preferences, auth window, and current route When Slot Assist computes recommendations Then 1 to 3 timeslots are displayed, sorted by descending suitability score And each slot shows visit duration (in minutes) and an arrival window (local time, HH:MM–HH:MM) And each slot shows a rationale string of 2–3 clauses separated by "; " and <= 90 characters And allowed rationale clauses include only: plan-of-care spacing met, within auth window, credential match, client preference met, detour in minutes (e.g., "+4 min detour")
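The rationale format above (2–3 clauses joined by "; ", at most 90 characters) can be enforced with a small builder; the drop-last-clause fallback here is an assumed policy, since the spec does not say how to handle overflow:

```python
def build_rationale(clauses: list[str], max_len: int = 90) -> str:
    """Join 2-3 allowed clauses with '; ' and enforce the 90-char cap.
    The clause vocabulary itself is fixed by the acceptance criteria."""
    if not 2 <= len(clauses) <= 3:
        raise ValueError("rationale must contain 2-3 clauses")
    rationale = "; ".join(clauses)
    if len(rationale) > max_len:
        # Assumed overflow policy: drop the last clause rather than
        # truncate mid-word, keeping the string well-formed.
        rationale = "; ".join(clauses[:-1])
    return rationale
```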
One‑Tap Placement Is Atomic and Updates Schedule and Route
Given a recommended slot is visible When the user taps the slot once Then the system atomically creates the visit in CarePulse scheduling, assigns it to the caregiver, and updates the caregiver’s route and ETAs And no secondary confirmation or multi-step flow is required And on success, a success state is shown within 1.5 s at p95; on failure, no partial data persists and a clear error is shown within 2 s
Conflict Detection Prevents Double-Booking at Confirmation
Given another scheduler books an overlapping visit for the same caregiver after recommendations are shown When the user taps a now-conflicting slot Then the system detects the conflict at confirmation time and aborts the booking And the user sees an error with the conflict reason and no visit or route changes are committed And recommendations refresh within 2 s to reflect current availability
Accessibility and Small-Screen Usability (WCAG AA)
Given a 320x568 viewport and system font scaling at 100–200% When recommendations are displayed Then there is no horizontal scrolling and all critical content is visible without zoom And touch targets are at least 44x44 dp and have 8 dp spacing And color contrast meets at least 4.5:1 for text and 3:1 for large icons And all interactive elements have accessible names/roles and are announced correctly by screen readers And keyboard navigation order is logical with visible focus, satisfying WCAG 2.1 AA 1.4.3, 2.1.1, 2.4.3, and 2.4.7
Confidence Indicators, Duration, and Arrival Window Are Accurate
Given recommendations are displayed Then each slot shows a confidence indicator with label High/Medium/Low derived from a 0–1 score (>=0.8 High, 0.5–0.79 Medium, <0.5 Low) And visit duration reflects the plan-of-care prescribed duration ±0 minutes unless overridden by agency policy And arrival window reflects live routing estimates for the caregiver’s updated route And all times are shown in the user’s locale and time zone
Graceful Handling When Fewer Than Three Compliant Slots Exist
Given constraints result in fewer than three compliant slots When recommendations render Then only the available compliant slots (0–2) are displayed without placeholders And if zero slots exist, an empty state explains why (e.g., outside auth window or credential mismatch) in <= 120 characters And a Retry action is provided to recompute recommendations after constraints change
Real-Time Sync & Conflict Resolution
"As a scheduler, I want Slot Assist to detect conflicts and update in real time so that I don’t book invalid or overlapping visits."
Description

Continuously syncs with schedule, authorization balances, and caregiver availability to invalidate stale recommendations when inputs change (e.g., new booking, cancellation, auth cap reached). Implements optimistic locking and atomic transaction boundaries on placement to prevent overlaps, over-allocations, or bookings against expired authorizations. Provides graceful fallbacks (auto-advance to the next slot) and user notifications when a selection becomes invalid. Ensures Slot Assist remains trustworthy in dynamic, multi-user environments.

Acceptance Criteria
Invalidate Stale Slots on Concurrent Booking
Given Scheduler A is viewing the top three recommended slots for Client X with Caregiver Y And Scheduler B books one of those same slots When Scheduler B’s booking is committed Then Scheduler A’s UI marks the affected recommendation as invalid within 2 seconds of commit And the invalid recommendation is not tappable for placement And a reason is shown (e.g., "Booked by [user] at [time]") And the list refreshes to present a new top three excluding the taken slot within 2 seconds And if Auto-Advance is enabled, focus moves to the next valid slot automatically
Atomic Placement With Optimistic Locking
Given each recommendation includes a version token representing schedule, availability, and authorization state When a user taps Place on a recommendation Then the system validates the token and all constraints atomically And if valid, commits the visit with the exact recommended start/end and returns a placementId within 1.5 seconds And if any constraint fails, the transaction aborts with no partial writes And the user sees a conflict message with specific reason(s) and refreshed recommendations And in a race where two users place the same slot, only one succeeds; the other receives a conflict message
Authorization Cap Reached Mid-Flow
Given the client’s authorization has remaining units at recommendation time And another action (booking/import) consumes the remaining units before placement When the user attempts to place a recommendation that would exceed authorization Then placement is rejected and the recommendation is invalidated within 2 seconds of the cap change And the UI displays "Authorization cap reached" with remaining balance = 0 (or precise remaining units) And refreshed recommendations exclude options that exceed authorization constraints
Caregiver Availability or Credential Change During Selection
Given recommendations were generated for Caregiver Y meeting availability and credential rules And Caregiver Y becomes unavailable or a required credential expires/changes When the change is saved or detected by the system Then any recommendations involving Caregiver Y are invalidated within 2 seconds And the UI shows the reason (e.g., "Caregiver unavailable" or "Credential no longer valid") And the system refreshes and shows alternative compliant slots per configuration (or "No compliant slots" if none)
Cancellation Frees Overlapping Slot and Recommendations Refresh
Given a scheduled visit overlapping a potential recommended time is cancelled by another user When the cancellation is committed Then the Slot Assist list refreshes within 2 seconds to include the newly freed compliant slot (if applicable) And the rationale indicates the change (e.g., "Opened by cancellation at HH:MM") And the new slot is selectable and passes placement validation
Connectivity Degradation and Sync Status Handling
Given the device loses real-time connectivity or round-trip latency exceeds 3 seconds (p95) When the user is viewing recommendations Then a visible "Sync paused" banner with last-updated timestamp is shown And placement is disabled if recommendation staleness exceeds 30 seconds And upon connectivity restoration, recommendations auto-refresh within 2 seconds and placement is re-enabled And no offline placement attempts result in server-side partial writes
Audit Trail & Explainability Logs
"As a compliance officer, I want an audit-ready explanation of why a slot was recommended so that we can demonstrate policy adherence during audits."
Description

Captures and stores an audit record for each recommendation and placement, including input snapshot (rules, credentials, route metrics), rule evaluations, scores, rationale text, user actions, and timestamps. Exposes an audit view and export suitable for compliance reviews and payer audits, showing exactly why a slot was recommended or filtered out. Applies data minimization and role-based access controls to protect PHI while enabling transparent, defensible scheduling decisions.

Acceptance Criteria
Log Capture for Slot Recommendations
Given Slot Assist generates top three slot recommendations for a scheduling request When the recommendation list is produced Then an audit record is persisted within 300 ms containing: recommendation_id, request_id, correlation_id, timestamp (UTC ISO 8601), requesting_user_id, client_id, caregiver_pool_ids, plan_of_care_rules_snapshot, caregiver_credentials_snapshot, route_metrics_snapshot, authorization_windows_snapshot And the record includes for each considered slot: slot_id, rule_evaluations (rule_id, result, details), score, rationale_text, inclusion_flag (recommended|filtered_out) And the record stores engine_version and configuration_hash And the record is assigned a cryptographic hash and previous_hash to maintain an append-only chain
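The hash-chain requirement above can be sketched as follows: each record's hash covers its payload plus the previous record's hash, so editing any record breaks every later link. This is an illustrative sketch, not the production schema:

```python
import hashlib
import json

def append_audit_record(chain: list, payload: dict) -> dict:
    """Append a record carrying its own hash and the previous record's hash."""
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "previous_hash": previous_hash}
    # Canonical JSON (sorted keys) so the hash is reproducible on verify.
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

def verify_chain(chain: list) -> bool:
    """Recompute every hash and link; any edit or gap returns False."""
    prev = "0" * 64
    for record in chain:
        expected = hashlib.sha256(json.dumps(
            {"payload": record["payload"],
             "previous_hash": record["previous_hash"]},
            sort_keys=True).encode()).hexdigest()
        if record["previous_hash"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True
```

The daily integrity job described later in this section would essentially run `verify_chain` over each day's records and alert on the first broken link.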
Audit Record for Visit Placement Actions
Given a scheduler places a visit using a recommended slot When the placement is confirmed Then an audit record is persisted within 300 ms containing: placement_id, recommendation_id, selected_slot_id, user_id, timestamp, client_id, caregiver_id, before_state (existing visits snapshot), after_state (new visit details), policy_overrides (if any) with user_reason And any manual edits to slot time (delta minutes) are captured with value and rationale And the record stores UI_origin (mobile|web|api) and request_ip
Audit View Accessibility and Filtering
Given a user with role "Audit Viewer" opens the Audit view When they filter by client_id, date range, and outcome (recommended|filtered_out|placed) Then the system returns results within 2 seconds for up to 10,000 records with server-side pagination And each row displays fields: timestamp, event_type, request_id, recommendation_id_or_placement_id, outcome, rationale_summary And selecting a row reveals full explainability details excluding PHI fields for this role And the view provides copy-to-clipboard for correlation_id
Export Compliance Report
Given an auditor selects Export JSON and Export CSV in the Audit view When export is requested for a date range up to 31 days Then the system generates downloadable files within 30 seconds containing all required audit fields per the data dictionary And each file includes a file-level SHA-256 checksum and per-record hash fields And exports exclude PHI by default and include PHI only when the requester has "PHI Access" and an access reason is provided And an export event is logged with requester_id, date_range, filters, record_count, and checksum
Role-Based Access Control and PHI Minimization
Given users with roles Scheduler, Auditor, Admin, and Caregiver access audit records When viewing audit data Then PHI fields (client full name, DOB, address, phone) are visible only to roles with "PHI Access" And users without "PHI Access" see stable pseudonymous IDs and redacted placeholders for PHI And unauthorized PHI access attempts return HTTP 403 and are logged with actor_id, timestamp, and target And stored rationale_text is limited to 256 characters and auto-scrubbed for PHI patterns (SSN, MRN, phone, address) with a 99% detection rate in tests
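The rationale-text scrubbing described above might look like the sketch below. The patterns are illustrative only; reaching the stated 99% detection rate would require a vetted, much larger pattern set:

```python
import re

# Illustrative PHI-like patterns (assumed, not a complete set).
_PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # SSN, e.g. 123-45-6789
    re.compile(r"\b\d{3}[-. ]\d{3}[-. ]\d{4}\b"),    # US phone with separators
    re.compile(r"\bMRN[:# ]?\d{6,10}\b", re.I),      # medical record number
]

def scrub_rationale(text: str, max_len: int = 256) -> str:
    """Redact PHI-like substrings, then enforce the 256-character limit."""
    for pattern in _PHI_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text[:max_len]
```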
Immutability and Tamper Detection
Given audit records are stored in the logging datastore When a user attempts to edit or delete an existing audit record via API or UI Then the operation is blocked and returns HTTP 405 Method Not Allowed And a new tamper_attempt event is written with actor_id, timestamp, and target_record_id And a daily integrity job validates the hash chain and emits an alert on mismatch or gap And the Audit view surfaces integrity status (current|degraded) for the selected date range
Explainability for Filtered-Out Slots
Given Slot Assist filters out candidate slots during recommendation When viewing the audit record for that recommendation Then the details list at least the top 5 filtered-out slots with for each: slot_id, violated_rules (ids and names), blocking_vs_penalty classification, score_before_penalties, score_after_penalties, and detour metrics (distance_km, time_min) And rule evaluation includes parameter values used at decision time (e.g., spacing_hours=72, auth_window_start/end) And the rationale_text for each recommended slot references the primary rule drivers in no more than 2 sentences
Performance & Resilience SLAs
"As a scheduler in the field, I want fast, reliable recommendations even with spotty connectivity so that I can keep working without delays."
Description

Sets and enforces service-level targets for Slot Assist: p95 recommendation latency under 500 ms for typical day schedules, graceful degradation when traffic or auth services are unavailable (cached rules, last-known travel times), retry/backoff with circuit breakers, and offline-ready UI states with deferred placement when connectivity returns. Adds observability (tracing, metrics, structured logs) and alerting to maintain reliability during peak scheduling hours.

Acceptance Criteria
p95 Latency ≤ 500 ms (Typical Day Load)
Given a production-like environment with warm service instances and healthy dependencies And load matches the typical-day profile: sustained 20 RPS with bursts to 60 RPS for up to 5 minutes, using realistic payloads (6–10 candidate caregivers, 8–12 open visits) When Slot Assist recommendations are requested during 08:00–11:00 local time Then the p95 end-to-end latency (API gateway request receipt to last byte sent) is ≤500 ms over a rolling 30-minute window And ≥99% of requests return 3 recommendations with rationales populated And 5xx error rate remains <0.1% over the same window
Graceful Degradation: Traffic/Travel-Time Service Outage
Given the routing/travel-time dependency returns errors or timeouts for ≥60 seconds When a scheduler requests Slot Assist recommendations Then the service responds within ≤600 ms using last-known travel-time estimates cached within the past 24 hours And the response includes a degraded flag with reason "cached_travel_times" And recommendations continue to honor plan-of-care and authorization constraints And if no cached travel times exist for a leg, recommendations are returned without travel-time optimization and the rationale includes "travel estimates unavailable"
Graceful Degradation: Authorization Rules Service Outage
Given the authorization rules service is unavailable for ≥60 seconds When a scheduler requests Slot Assist recommendations Then cached authorization windows/rules updated within the past 7 days are used to filter recommendations And the response includes a degraded flag with reason "cached_auth_rules" And no recommendation violates known plan-of-care or authorization windows And if no cached rules exist for the client, the response is returned within ≤400 ms with an empty recommendations list and a rationale entry indicating "authorization rules unavailable"
Dependency Resilience: Retries, Backoff, and Circuit Breakers
Given any dependency invoked by Slot Assist returns 5xx or times out When this occurs on ≥20% of calls within a 1-minute window Then the service applies exponential backoff with jitter (initial delay 200 ms, max delay 2 s, max 3 attempts per call) And a circuit breaker opens after 5 consecutive failures within 30 seconds, remains open for 30 seconds, then transitions to half-open allowing ≤10% probe traffic And while open, fallback paths (cached data) are used and at most 1 retry is attempted per request And metrics and logs record retry counts and breaker state transitions with trace IDs
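The retry and breaker parameters above (200 ms initial delay, 2 s cap, 5 consecutive failures, 30 s open window) can be sketched directly; the full-jitter strategy is an assumed choice, since the spec only says "with jitter":

```python
import random
from typing import Optional

def backoff_delay_ms(attempt: int,
                     initial_ms: int = 200,
                     max_ms: int = 2000) -> float:
    """Exponential backoff with full jitter; attempt is 1-based and
    callers stop after 3 attempts per the criteria above."""
    ceiling = min(max_ms, initial_ms * (2 ** (attempt - 1)))
    return random.uniform(0, ceiling)

class CircuitBreaker:
    """Minimal breaker: opens after 5 consecutive failures, stays open
    for 30 s, then half-opens to let probe traffic through."""
    def __init__(self, failure_threshold: int = 5, open_seconds: float = 30.0):
        self.failure_threshold = failure_threshold
        self.open_seconds = open_seconds
        self.failures = 0
        self.opened_at: Optional[float] = None

    def allow_request(self, now: float) -> bool:
        if self.opened_at is None:
            return True
        # Half-open once the open window has elapsed.
        return now - self.opened_at >= self.open_seconds

    def record_failure(self, now: float) -> None:
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = now

    def record_success(self) -> None:
        self.failures = 0
        self.opened_at = None
```

A production breaker would also rate-limit half-open probes to the ≤10% specified above; that bookkeeping is omitted here for brevity.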
Offline-Ready UI: Deferred Placement and Auto-Sync
Given the scheduler's device has no network connectivity When the user accepts a recommended slot and taps Place Then the UI shows an offline banner, queues the placement with a locally persisted idempotency key, and displays a Pending badge within 200 ms And no network request is attempted while offline And when connectivity is restored within 12 hours, the queued placement is submitted within 30 seconds, deduplicated by the idempotency key, and the UI updates to Placed upon server confirmation And if the server returns a conflict, the UI displays a Conflict state and does not create a duplicate booking
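The deduplication guarantee above hinges on the idempotency key: the client persists the key with the queued action, and the server treats a repeated key as a replay of the earlier result rather than a new booking. A hypothetical sketch of both halves:

```python
import uuid

class OfflinePlacementQueue:
    """Client side: queue placements locally while offline."""
    def __init__(self):
        self.pending: list = []

    def enqueue(self, slot: dict) -> str:
        key = str(uuid.uuid4())  # persisted locally alongside the action
        self.pending.append({"idempotency_key": key, "slot": slot})
        return key

class PlacementServer:
    """Server side: deduplicate submissions by idempotency key."""
    def __init__(self):
        self.placements: dict = {}  # keyed by idempotency key

    def submit(self, item: dict) -> dict:
        key = item["idempotency_key"]
        if key in self.placements:
            # Duplicate retry after reconnect: return the prior result,
            # never create a second booking.
            return self.placements[key]
        self.placements[key] = {"status": "Placed", "slot": item["slot"]}
        return self.placements[key]
```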
Observability: Tracing, Metrics, and Structured Logs
Given a Slot Assist recommendation request with or without an inbound trace header When the request is processed Then a trace is created/propagated and spans include request handling, cache lookups, dependency calls, retries/backoff, and degradation decisions And latency histograms (p50, p90, p95) and counters for success, error, and degraded responses are emitted per tenant and per dependency And structured logs include trace_id, tenant_id, request_id, dependency, outcome, and exclude PHI/PII; dashboards reflect metrics and logs within 1 minute of event time
Alerting: Peak-Hour SLO and Degradation Breaches
Given it is a weekday between 08:00 and 11:00 local time When p95 latency exceeds 500 ms for 5 consecutive minutes OR degraded responses exceed 5% for 10 minutes Then a High-severity alert fires to the on-call channel within 2 minutes and includes a runbook link and recent graphs And the alert auto-resolves after the metric returns within threshold for 10 consecutive minutes And duplicate alerts for the same condition are suppressed to ≤2 per hour

Grace Guard

Enforces payer‑specific grace windows and min/max spacing automatically, flagging soft vs. hard stops and proposing allowed catch‑up patterns. Accounts for holidays, client cancellations, and weather exceptions with documented, audit‑ready reason codes. Keeps schedules flexible without slipping into non‑compliance.

Requirements

Payer Rules Engine with Versioning
"As an operations manager, I want to configure payer grace and spacing rules with effective dates so that visits are scheduled and validated for compliance automatically."
Description

Implement a centralized, configurable rules engine that models payer-specific grace windows (early/late arrival thresholds), minimum/maximum visit spacing, allowable frequencies per plan-of-care period, and constraint severity (soft warn vs. hard block). Rules must support effective dating, payer-level defaults with client-level overrides, and reference citations to payer manuals. Expose a deterministic API used by scheduling, routing, mobile, and reporting services to evaluate any proposed or completed visit. Persist decisions with rule IDs and versions to ensure reproducible, audit-ready outcomes. Provide admin UI to author, import, test, and stage rules before promotion to production.

Acceptance Criteria
Deterministic Evaluation API with Persisted Audit Record
Given a valid EvaluateVisit request including payerId, clientId, planOfCarePeriodId, visitType, proposedStart, proposedEnd, caregiverId, serviceLocation, reasonCodes[], and asOf timestamp When POST /rules/evaluate is called Then the response includes decision.allow (boolean), decision.warnings[], violations[] with ruleId, ruleVersion, severity, and citations[], and evaluationId And the same request payload and headers produce a bit-for-bit identical response on repeated calls (deterministic) And an evaluation record is persisted with evaluationId, requestHash, decision payload, appliedRules[ruleId+version], citations, and evaluatedAt, retrievable via GET /rules/evaluations/{evaluationId} And p95 latency for /rules/evaluate is ≤ 300 ms at 100 requests/second in staging with representative data
Effective Dating and Version Selection
Given multiple versions of a rule each with effectiveStart and effectiveEnd timestamps When a visit with visitStart is evaluated and asOf is not provided Then the engine uses the rule version whose effective window contains visitStart and returns the ruleVersion applied When a completed historical visit is evaluated with asOf provided Then the engine uses the rule versions in effect at the asOf time When an admin attempts to save or promote a rule version that overlaps an existing active version for the same payer and constraint Then the save is blocked with a validation error listing the overlapping versionIds And upon promotion of a new version with effectiveStart T, the previous active version’s effectiveEnd is auto-set to T - 1 second to prevent overlap
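The version-selection rule above (use the version whose effective window contains visitStart, or the asOf time for historical replays) reduces to a simple lookup. A sketch, assuming versions are stored as dicts with datetime bounds:

```python
from datetime import datetime
from typing import Optional

def select_rule_version(versions: list,
                        visit_start: datetime,
                        as_of: Optional[datetime] = None) -> Optional[dict]:
    """Pick the version whose [effectiveStart, effectiveEnd] window contains
    the reference time: visit_start normally, as_of for historical replays.
    Non-overlap is assumed to be enforced at save time, per the criteria."""
    reference = as_of if as_of is not None else visit_start
    for version in versions:
        if version["effectiveStart"] <= reference <= version["effectiveEnd"]:
            return version
    return None  # no version in effect; caller decides how to fail
```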
Payer Defaults with Client-Level Overrides Precedence
Given a payer-level default rule and a client-level override for the same constraint When a visit for that client is evaluated Then the client-level override takes precedence and the decision indicates originLevel = client for the applied rule When the client-level override is expired or absent Then the payer-level default applies and originLevel = payer is returned And the decision payload lists only the effective ruleId and ruleVersion actually applied
Severity Enforcement: Soft Warn vs Hard Block
Given a constraint configured with severity = hard When a proposed or in-progress visit violates the constraint Then decision.allow = false and violations[] contains the blocking reasons with ruleId, ruleVersion, severity = hard, and citations Given a constraint configured with severity = soft When a proposed or in-progress visit violates the constraint Then decision.allow = true and decision.warnings[] contains the warning reasons with ruleId, ruleVersion, severity = soft, and citations And all violations and warnings include humanMessage, machineCode, and affectedFields[]
Frequency and Min/Max Spacing Compliance Within Plan-of-Care
Given a plan-of-care period with frequencyLimit N and minSpacingDays and maxSpacingDays, and payer-specific grace windows for early/late starts When evaluating a proposed visit Then the engine counts completed and scheduled visits within the plan-of-care window, excluding those with cancellation reason codes flagged as excludable and applying holiday/weather exceptions per rule And if adding the visit would exceed N or violate min/max spacing after applying grace rules, decision.allow = false (or true with warnings if severity = soft) and violations[] details include which threshold was exceeded and by how much And the response includes nextEligibleEarliest and nextEligibleLatest timestamps when applicable
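The core count-and-spacing check above can be sketched as follows; grace windows, max spacing, and excludable cancellations are deliberately omitted, so this is a simplified illustration of the frequency and minimum-spacing logic only:

```python
from datetime import datetime, timedelta

def check_spacing_and_frequency(existing_visits: list,
                                proposed: datetime,
                                frequency_limit: int,
                                min_spacing_days: int) -> dict:
    """Simplified check for one plan-of-care period: would adding the
    proposed visit exceed the frequency limit or violate minimum spacing?"""
    if len(existing_visits) + 1 > frequency_limit:
        return {"allow": False, "reason": "frequency limit exceeded"}
    for visit in existing_visits:
        if abs((proposed - visit).days) < min_spacing_days:
            return {"allow": False,
                    "reason": f"min spacing {min_spacing_days}d violated"}
    # nextEligibleEarliest: min spacing after the latest existing visit.
    next_earliest = (max(existing_visits) + timedelta(days=min_spacing_days)
                     if existing_visits else None)
    return {"allow": True, "nextEligibleEarliest": next_earliest}
```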
Admin UI Rule Authoring, Import, Test, Stage, and Promotion
Given an admin with Rules:Manage permission When creating or editing a rule in the UI Then required fields (payerId, constraintType, parameters, severity, effectiveStart, citations[]) are validated and missing/invalid inputs block save with field-level errors When importing a ruleset file (JSON or YAML) Then the file is schema-validated, invalid records are rejected with line-level errors, and valid records are loaded into Staging without affecting Production When running a test evaluation in the UI against Staging rules with a sample visit payload Then the UI displays the decision, applied ruleIds and ruleVersions, and citations exactly as the API would return When promoting a staged rule/version Then it moves to Active, previous active version’s effectiveEnd is adjusted to prevent overlap, and an audit log entry records who, when, what changed And staging rules never influence production evaluations until promotion is confirmed
Real-time Soft/Hard Stop Validation
"As a scheduler, I want real-time warnings and blocks with clear reasons so that I prevent non-compliant scheduling errors before they happen."
Description

Add live validation in web and mobile scheduling that evaluates visits against the rules engine on create, drag-and-drop, edit, or bulk actions. Flag violations as soft (warning with rationale) or hard (blocking) and display precise, human-readable explanations linking back to the rule and payer reference. Allow soft-stop overrides with required reason codes and capture who/when/why; block hard stops unless an approved exception policy applies. Operate with sub-200ms response time online and provide offline-cached evaluations with eventual sync. Log all decisions to an immutable audit trail for later reporting.

Acceptance Criteria
Real-Time Validation on Create and Edit
Given a scheduler on web or mobile is online When they create a new visit or edit an existing visit and change scheduling-impacting fields (date/time, caregiver, payer, service) Then the system validates the candidate visit against the rules engine before save And the UI displays a validation status badge of OK, Soft Stop, or Hard Stop on the visit row/card And the explanation is human-readable and cites the exact constraint (e.g., grace window, min/max spacing) with concrete values and times And the explanation includes rule_name, rule_id, rule_version, payer_name, payer_id, and a deep link to the rule details and payer reference And visits with Hard Stop cannot be saved unless an approved exception policy is applied
Validation on Drag-and-Drop and Bulk Reschedule
Given the user drags-and-drops a visit in calendar/list view or selects N visits for bulk reschedule When the new time(s) are proposed Then each visit is validated independently before commit and labeled OK/Soft/Hard And the bulk dialog shows aggregate counts of OK, Soft, Hard and a per-visit table of results And the user can commit OK items and Soft items only after providing required reason codes; Hard items are excluded and left unchanged And the post-commit summary lists counts of committed, overridden-soft, and blocked-hard with links to each affected visit
Soft Stop Override with Required Reason Codes
Given a visit returns a Soft Stop validation result When the user attempts to save the visit Then a modal requires selection of a reason_code from the configured catalog and allows an optional note up to 500 characters And the system records override_id, user_id, role, timestamp, device_id, reason_code_id, and note on the visit And the visit saves successfully and the badge shows "Soft Stop — Overridden" with the reason_code label And removing an override requires a note and is logged as a separate audit event
Hard Stop Blocking with Exception Policy
Given a visit returns a Hard Stop validation result When the user attempts to save the visit Then the save is blocked and the message shows the blocking rule, rule link, and payer reference with resolution guidance And if the user selects an applicable approved exception policy (within validity dates and scope) and has permission, the save is allowed And applying an exception requires capturing exception_id, approver_id, approval_timestamp, and validity window; all are recorded on the visit and in the audit log
Online Performance and Reliability Targets
Given the user is online When validation is triggered by create/edit/drag-drop/bulk Then end-to-end latency from user action to visible validation result is <=200ms p95 and <=300ms p99 over the last 7 days And validation request success rate (non-timeout, non-error) is >=99.9% over the last 7 days And on timeout (>500ms) or error, the UI displays "Validation unavailable — retry" and blocks save until a successful validation occurs or the user switches to offline mode
Offline Cached Evaluation with Eventual Sync
Given the device is offline When validation is triggered for create/edit/reschedule Then the system uses the most recent locally cached rules snapshot to evaluate and labels results as OK/Soft/Hard with an "Offline (Cached)" marker and snapshot_version timestamp And saving is allowed for OK and Soft (with required reason codes); Hard is blocked And upon reconnect, all offline-saved visits are re-evaluated; if any become Hard, they are flagged for correction and the scheduler is notified And all offline evaluations, re-evaluations, and resulting actions are logged
Immutable Audit Trail for Validation Decisions
Given any validation, override, exception, or reconciliation event occurs When the event completes Then an append-only audit record is written with: event_id, visit_id, client_id, caregiver_id, payer_id, rule_id, rule_version, result (OK/Soft/Hard), explanation_text, reason_code_id (if any), override_id (if any), exception_id (if any), user_id, role, timestamp, device_id, latency_ms, mode (online/offline), correlation_id And audit records cannot be edited or deleted; corrections produce new records linked via prior_event_id And audit records are queryable in the reporting UI/API by time range, user, payer, rule, and result, and are exportable as CSV
Exception & Reason Code Capture
"As a caregiver, I want to log delays or cancellations with standardized reason codes and evidence so that my schedule stays compliant and audit-ready."
Description

Provide standardized exception handling for holidays, client cancellations, weather events, provider outages, and facility lockdowns using a payer-mapped reason code taxonomy. Maintain region-aware holiday calendars and integrate with a weather service to suggest valid weather exceptions based on time and geolocation. Require selection of reason codes (and optional attachments like photos, voice notes, GPS stamps) when overriding soft stops or requesting hard-stop exemptions. Store all exception metadata with the visit, including evidence and approver, to produce audit-ready documentation. Ensure codes are configurable per payer and exportable for compliance reviews.

Acceptance Criteria
Region-Aware Holiday Exception Suggestion
Given a visit scheduled in a specific service region and local timezone And the visit date matches a recognized holiday on the region-aware holiday calendar And the visit’s payer has at least one mapped Holiday reason code When a scheduler attempts to move the visit outside normal spacing but within the payer’s holiday grace window Then the system auto-suggests only payer-allowed Holiday reason codes And displays the holiday name, date, and source calendar And labels the constraint as soft or hard stop per payer rules And proposes allowed catch-up patterns per payer configuration And does not suggest a holiday exception if the date is not a local holiday or no payer-mapped codes exist
Weather-Based Exception Suggestion via Geolocation and Time
Given the visit has a geolocation within 2 km of the client’s service address And the weather service reports a severe weather event meeting configured thresholds within ±3 hours of the visit time at that geolocation And the visit’s payer has at least one mapped Weather reason code When the scheduler opens the exception dialog or attempts an override within grace policies Then the system suggests only payer-allowed Weather reason codes And pre-fills evidence with weather event type, timestamp, provider, and event ID And allows optional attachments (photo, voice note, GPS stamp) And does not suggest Weather codes if no qualifying event is detected in time/location window And records the suggestion event in the visit audit log
Mandatory Reason Code on Soft-Stop Override
Given a scheduling action triggers a soft stop per payer rules When the user selects Override soft stop Then the system requires selection of a payer-mapped reason code before save And disables the primary action until a valid code is selected And displays a validation message if save is attempted without a code And allows optional attachments (photo, voice note, GPS stamp) And upon save, persists reason code ID, category, user ID, timestamp, and attachment metadata on the visit And writes an immutable audit entry for the override
Hard-Stop Exemption Request Workflow
Given a scheduling action violates a hard stop per payer rules When the user submits an exemption request Then the system requires a payer-mapped reason code selection And captures optional evidence attachments and notes And routes the request to the configured approver group for that payer And blocks the schedule change until an approval decision is recorded And on approval, applies the schedule change and marks the visit with approver ID, decision, and timestamp And on denial, keeps the schedule unchanged and displays the denial reason And records the full workflow in the visit’s audit trail
Payer-Specific Reason Code Configuration and Mapping
Given a compliance admin manages reason codes for a payer When the admin creates, edits, or deactivates a reason code Then they can set code ID, display label, category (holiday, weather, client cancellation, provider outage, facility lockdown, other), applicability rules, allowed evidence types, effective dates, and region scope And the system validates code ID uniqueness per payer and at least one category assignment And changes are versioned and time-effective And inactive/out-of-window codes are hidden from user selection and suggestions And mappings immediately influence suggestions and validations for that payer And admins can bulk import/export payer reason codes (CSV/JSON) with validation results
Audit-Ready Exception Metadata & Export
Given visits contain exceptions with reason codes and evidence When a user generates a compliance export for a payer and date range Then the export (CSV and JSON) includes for each exception: visit ID, client ID, caregiver ID, payer ID/name, reason code ID/label/category, soft/hard stop flag, exemption approved flag, approver ID and timestamp (if applicable), evidence references (photo/audio URLs with hashes), weather event ID (if any), holiday ID (if any), geolocation (if captured), user notes, creator ID, created/updated timestamps (ISO 8601 with timezone) And each exception record links back to its visit ID and is retrievable via API And filters by payer and date range are applied accurately And the export excludes inactive test data per environment settings And the export completes for up to 50,000 records within 60 seconds
Client Cancellation, Provider Outage, and Facility Lockdown Evidence Rules
Given a user selects a reason code in the Client Cancellation, Provider Outage, or Facility Lockdown categories When capturing the exception Then Client Cancellation requires at least one of: typed note or voice note And Provider Outage requires outage start/end timestamps and provider name And Facility Lockdown requires facility identifier and lockdown start timestamp, with optional photo And the UI dynamically enforces required fields per code configuration And save is blocked until required evidence is provided And captured data is persisted on the visit and included in compliance exports
Compliant Catch-up Suggestions
"As a scheduler, I want the system to propose compliant catch-up options so that I can quickly recover missed visits without violating payer rules."
Description

When a visit is missed, canceled, or out of tolerance, generate ranked, compliant rescheduling options that satisfy payer grace windows, min/max spacing, frequency limits, and client preferences. Consider caregiver availability, existing routes, travel times, and overtime thresholds to minimize operational impact. Present multiple patterns (same-day, next allowed day, redistributed across the week) with scores and rule rationales, and support one-click apply to update schedules and notify stakeholders. Respect holidays and exceptions, and re-validate choices in real time. Provide APIs to trigger suggestions from external workflows.
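A minimal sketch of how a ranking function might weigh these factors. The `CatchupOption` fields, weights, and penalties below are illustrative assumptions, not CarePulse's actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class CatchupOption:
    pattern: str             # "same-day" | "next-allowed" | "redistribute"
    added_travel_min: float  # extra travel the option introduces
    overtime_risk: bool      # would push the caregiver past the threshold
    preference_fit: float    # 0.0-1.0, how well it matches client preferences

def score(option: CatchupOption) -> float:
    """Score 0-100; penalize travel and overtime risk, reward preference fit.
    Weights are hypothetical placeholders."""
    s = 100.0
    s -= min(option.added_travel_min, 60) * 0.5  # up to -30 for added travel
    if option.overtime_risk:
        s -= 25.0                                # soft-stop "Overtime Risk" penalty
    s += option.preference_fit * 10.0
    return max(0.0, min(100.0, s))

options = [
    CatchupOption("same-day", added_travel_min=10, overtime_risk=False, preference_fit=0.8),
    CatchupOption("next-allowed", added_travel_min=40, overtime_risk=True, preference_fit=0.5),
]
ranked = sorted(options, key=score, reverse=True)
```

Ties could then be broken by preference fit and earliest completion, as the operational-impact criteria specify.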

Acceptance Criteria
Missed Visit: Ranked, Compliant Catch‑up Suggestions
Given a visit is marked Missed, Canceled, or Out of Tolerance for client X with payer Y And Y’s grace windows, min/max spacing, and frequency rules are configured And client X preferences (preferred days/times, do-not-disturb windows) are configured When the scheduler opens Catch‑up Suggestions for the visit Then the system returns 3–10 ranked options when at least 3 compliant options exist, otherwise all compliant options And every option is compliant with Y’s rules and client X preferences (no hard‑stop violations included) And each option includes: start time, date, caregiver, pattern type (same‑day/next‑allowed/redistribute), score (0–100), and rule rationales explaining compliance And processing time is ≤ 2 seconds for the 95th percentile and ≤ 4 seconds for the 99th percentile And if no compliant option exists, the UI displays “No compliant options” with the top 3 blocking rules
Multi‑Pattern Proposals: Same‑Day, Next Allowed Day, and Redistributed Across Week
Given at least one compliant same‑day, one next‑allowed‑day, and one redistribution option exist When suggestions are generated Then at least one option per available pattern type is shown and clearly labeled And redistribution options preserve weekly/authorization frequency and min/max spacing constraints And next‑allowed‑day options respect the earliest permissible date and avoid disallowed days And same‑day options respect caregiver availability and route feasibility between adjacent visits with a configurable minimum 15‑minute buffer
Operational Impact Minimization with Routing and Overtime Constraints
Given caregiver availability, existing routes, travel times, and overtime thresholds are configured When the system computes scores Then options that exceed overtime thresholds are excluded unless policy allows soft‑stop inclusion; such options are flagged “Overtime Risk” And additional travel time (in minutes) and detour distance (in km/mi) are calculated per option And scores penalize added travel time and overtime risk so that the top‑ranked option has the lowest operational cost among compliant options (ties broken by preference fit, then earliest completion) And no option double‑books a caregiver or violates mandated rest/turnaround rules
One‑Click Apply: Schedule Update, Notifications, and Audit Trail
Given a scheduler selects a suggestion and clicks Apply When the system processes the change Then the reschedule is executed atomically (all related updates succeed or none do) And caregiver routes are updated and synced to mobile within 30 seconds And notifications to caregiver and client are sent per preferences (push/email/SMS) within 60 seconds with a clear change summary And an audit record is created capturing reason code, rule‑set version, selected option details, user ID, timestamp, and pre/post schedule snapshot And on any failure, changes are rolled back and an actionable error with retry is presented
Real‑Time Re‑Validation on Changing Constraints
Given suggestions are displayed to the scheduler When any relevant constraint changes (e.g., new booking, rule update, holiday addition, caregiver availability change) Then the system re‑validates the displayed suggestions within 5 seconds And options that become invalid are removed or marked “Now Non‑Compliant,” and Apply is disabled for them And if an option becomes non‑compliant during apply, the operation aborts and refreshed suggestions are shown And for already‑applied catch‑up visits that become non‑compliant before start time, the scheduler is alerted and a new suggestion set is generated
Holiday and Exception Handling with Reason Codes
Given a holiday or agency blackout day intersects the catch‑up window When generating suggestions Then dates falling on holidays/blackout days are excluded unless an allowed exception reason code is provided And suggestions using an exception include the selected reason code and are marked accordingly in the audit trail And holiday calendars are applied based on the client’s service location And weather or client‑initiated cancellations extend grace per payer policy and this is reflected in the rule rationale
External API Trigger for Suggestions
Given an authorized external system calls POST /v1/suggestions/catchup with visit_id and optional constraints When the request is valid and the visit is eligible Then the API responds 200 with a list of options including: option_id, pattern, datetime, caregiver_id, score, compliance_flags, rule_rationales, route_metrics, and preview_notifications And 95th percentile latency is ≤ 1000 ms and 99th percentile ≤ 2000 ms And authentication uses OAuth 2.0 client credentials; absent/invalid token yields 401 and insufficient scope yields 403 And Idempotency-Key ensures safe retries producing identical results for the same input within 24 hours And if no compliant options exist, the API returns 422 with machine‑readable blocking reasons and a correlation_id
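A sketch of how an external caller might assemble this request. The endpoint path, body fields, and `Idempotency-Key` header follow the criteria above; the bearer-token value is a placeholder for a real OAuth 2.0 client-credentials token:

```python
import json
import uuid

def build_catchup_request(visit_id, constraints=None, idempotency_key=None):
    """Assemble the POST /v1/suggestions/catchup request described above.
    A stable Idempotency-Key lets the caller retry safely: the same key and
    body yield identical results within the 24-hour window."""
    headers = {
        "Authorization": "Bearer <access-token>",  # OAuth 2.0 client credentials
        "Content-Type": "application/json",
        "Idempotency-Key": idempotency_key or str(uuid.uuid4()),
    }
    body = {"visit_id": visit_id}
    if constraints:
        body["constraints"] = constraints
    return "/v1/suggestions/catchup", headers, json.dumps(body)

path, headers, payload = build_catchup_request("visit-123", idempotency_key="retry-1")
```

On a retry after a timeout, the caller would reuse the same key rather than generating a new one.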
Visual Grace Windows & Spacing on Schedule
"As a scheduler, I want to see grace windows and spacing constraints directly on the calendar so that I can adjust times confidently without causing non-compliance."
Description

Enhance calendar, list, and route views to visualize grace windows as time bands around visits and indicate min/max spacing constraints between successive visits. Use color-coded compliance states (ok, soft risk, hard block) and inline tooltips that explain the active rule and remaining buffer. Support drag-and-drop with continuous compliance feedback and snap-to-allowed windows on mobile and web. Include accessibility accommodations (contrast, screen reader labels) and user preferences for visualization density. Ensure visuals remain accurate offline using cached rules and sync updates when online.

Acceptance Criteria
Calendar Grace Window Bands and Compliance States
Given a payer rule defines pre- and post-grace minutes for a scheduled visit When the user opens the calendar day or week view for that client Then a shaded band renders from scheduled start minus pre-grace to scheduled start plus post-grace, with visible start/end timestamps And the visit chip/border uses the computed compliance state color and icon: Ok=green+check, Soft Risk=amber+warning, Hard Block=red+lock And band and state recompute and visually update within 300 ms after schedule/time changes And visits with no applicable grace rule show no band and default to Ok unless spacing or exceptions alter the state
List View Spacing Indicators Between Successive Visits
Given successive visits for the same client/service have min/max spacing constraints When the list view is displayed Then each successive pair shows a spacing indicator with the actual interval (minutes/hours), min, and max values And if the interval satisfies both min and max, the indicator is green+check And if the interval is within the smaller of 10% of the boundary value or 15 minutes of violating min or max, the indicator is amber+warning And if the interval violates min or max, the indicator is red+lock And hovering/focusing the indicator shows a tooltip with “Min X, Max Y, Actual Z, Earliest Allowed A, Latest Allowed B”
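The three-state threshold logic above could be computed as in this sketch (function name and units are illustrative; intervals and boundaries are in minutes):

```python
def spacing_state(actual_min: float, min_gap: float, max_gap: float) -> str:
    """Return 'red', 'amber', or 'green' per the spacing criteria above.
    Amber margin = the smaller of 10% of the boundary value or 15 minutes."""
    if actual_min < min_gap or actual_min > max_gap:
        return "red"    # violates min or max spacing
    lower_margin = min(0.10 * min_gap, 15.0)
    upper_margin = min(0.10 * max_gap, 15.0)
    if actual_min - min_gap < lower_margin or max_gap - actual_min < upper_margin:
        return "amber"  # close enough to a boundary to warrant a warning
    return "green"
```

For example, with min 60 and max 240 minutes: an interval of 55 is red, 62 is amber (within 6 minutes of the min boundary), 150 is green, and 235 is amber (within 15 minutes of the max boundary, since 10% of 240 exceeds the 15-minute cap).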
Route Drag-and-Drop with Real-Time Compliance and Snap
Given the route view is open on web or mobile When the user drags a visit to a new time Then the tentative compliance state (Ok/Soft Risk/Hard Block) updates within 150 ms of pointer movement And if the drop location would be a Hard Block, releasing returns the visit to its last valid time and shows a red lock toast with the violating rule And if released within 5 minutes of the nearest allowed boundary, the visit snaps to that boundary (mobile provides haptic feedback on snap) And if no allowed time exists on the same day, the drop is prevented and a modal suggests the next allowed window on adjacent days
Inline Rule and Buffer Tooltips
Given a user hovers, focuses, or long-presses a visit or spacing indicator When the tooltip appears Then it shows payer name, rule title, allowed window start/end timestamps, remaining minutes to hard stop, and any active exception reason code And if the state is Soft Risk, it shows the soft threshold minutes and the condition causing risk And the tooltip appears within 200 ms of invocation and dismisses within 100 ms after focus/hover ends
Accessible Visualizations and Controls
Given schedule views are rendered Then all text and key UI elements meet WCAG 2.1 AA contrast (text ≥4.5:1, large text/icons ≥3:1) And compliance states use non-color cues (icons/patterns) in addition to color And keyboard-only users can move visits (Space to pick up, Arrow keys to move in 5-minute increments, Enter to drop, Escape to cancel) with ARIA drag-and-drop roles And screen reader users receive ARIA live announcements of compliance state and reason within 500 ms when a state changes And tooltips expose descriptions via aria-describedby including remaining buffer and rule summary
Visualization Density Preferences
Given a user opens Settings > Schedule When the user sets Visualization Density to Compact, Standard, or Expanded Then row heights, band thickness, and label visibility adjust accordingly (Compact reduces vertical spacing by ≥30% vs Standard; Expanded increases by ≥20%) And the change applies across calendar, list, and route views within 1 second without altering compliance calculations And the preference persists across sessions and syncs to the same user on other devices within 10 seconds of sign-in
Offline Accuracy with Cached Rules and Sync Refresh
Given the device is offline When the user views or edits schedules Then grace bands, spacing indicators, and compliance states are computed from the most recent cached rules and an “Offline — rules last updated <timestamp>” badge is shown And if cached rules are older than 72 hours, a non-blocking yellow banner warns that rules may be stale When connectivity is restored and new rules are synced Then visuals recompute within 5 seconds; any visit whose state changes to Hard Block is flagged with a red toast describing the updated rule And offline drag-and-drop changes are revalidated; invalid placements are rolled back to the nearest allowed time with a notification
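The offline badge and 72-hour staleness warning above might be derived as in this sketch (the banner wording follows the criteria; the function name is an assumption):

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=72)  # staleness threshold from the criteria above

def offline_banner(rules_synced_at: datetime, now: datetime) -> str:
    """Build the offline badge text; append a warning when cached rules
    are older than the 72-hour threshold."""
    badge = f"Offline — rules last updated {rules_synced_at.isoformat()}"
    if now - rules_synced_at > STALE_AFTER:
        badge += " (warning: rules may be stale)"
    return badge

now = datetime(2025, 9, 20, 12, 0, tzinfo=timezone.utc)
fresh = offline_banner(now - timedelta(hours=10), now)
stale = offline_banner(now - timedelta(hours=80), now)
```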
Audit-Ready Compliance Reports
"As a compliance officer, I want detailed, payer-ready compliance reports with full provenance so that audits are fast and successful."
Description

Generate payer-specific, one-click compliance reports that summarize on-time performance, violations by type and severity, exceptions with reason codes and evidence, and resolution timelines. Include the exact rules engine version, rule IDs applied, and decision logs for each variance to ensure reproducibility. Provide filters by date range, payer, client, caregiver, and location with export to PDF/CSV and secure sharing links. Protect report integrity with immutable timestamps and tamper-evident hashes. Schedule automated delivery to stakeholders and support ad hoc drill-down from summary to visit-level detail.

Acceptance Criteria
One-Click Payer-Specific Report Generation
Given a user selects a date range, payer, and optional client/caregiver/location filters When the user clicks Generate Report Then a compliance report is produced within 10 seconds for datasets up to 10,000 visits And the report header displays the selected filters, payer, UTC generation timestamp, local timezone, and a unique report ID And the metadata block includes the rules engine semantic version and the exact list of rule IDs applied And each variance row references its decision log ID to ensure traceability And re-running the report with identical inputs and the same rules engine version yields identical counts and content
Filtering Accuracy and Scope
Given a dataset containing multiple payers, clients, caregivers, and locations within the selected date range When the user applies any combination of the date, payer, client, caregiver, and location filters Then only visits matching all active filters are included in counts, summaries, and drill-down tables (logical AND) And removing a filter immediately updates results to reflect the broader scope And selecting a payer excludes visits from non-selected payers (no cross-payer leakage) And applying or changing filters updates the on-screen results in under 2 seconds for datasets up to 10,000 visits
Violation and Exception Summaries Accuracy
Given the dataset includes on-time visits, violations categorized by type and severity (e.g., grace window breaches, spacing violations), and exceptions (holidays, client cancellations, weather) with reason codes and evidence When the compliance report is generated Then the summary shows on-time percentage and totals by violation type and severity, distinguishing soft vs hard stops And exceptions are listed with standardized reason codes and links to attached evidence (e.g., audio clip, sensor data, document) And each violation shows detection timestamp and resolution timestamp; the summary includes median and average resolution time And the totals in the summary reconcile exactly with the grouped counts in drill-down views (no discrepancies)
Drill-Down to Visit-Level Detail and Decision Logs
Given the user is viewing the summary compliance report When the user clicks a metric, chart segment, or table row Then a visit-level view opens filtered to that selected segment And each visit row displays applied rule IDs, rule outcomes, timestamps, reason code (if any), and a link to the full decision log steps And available evidence (e.g., voice clip, sensor reading, cancellation note) is previewable and downloadable with file type and size shown And navigating back returns the user to the prior summary context and scroll position
Export to PDF and CSV with Tamper-Evident Integrity
Given a compliance report is generated with any filters When the user exports to PDF or CSV Then the exported file contains the same data and filters as on screen And the footer includes an immutable generated_at timestamp (UTC), report ID, and a SHA-256 content hash of the payload And the API returns the same hash alongside the export response And exporting the same report without changes produces the identical hash; any content change results in a different hash
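One way to make the hash reproducible, as these criteria require, is to canonicalize the payload before digesting it, so key order and whitespace cannot change the result. A minimal sketch:

```python
import hashlib
import json

def report_hash(payload: dict) -> str:
    """SHA-256 content hash for tamper evidence: canonical JSON (sorted keys,
    fixed separators) guarantees identical content yields identical digests."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

report = {"report_id": "R-42", "rows": [{"visit_id": "V1", "state": "ok"}]}
h1 = report_hash(report)
h2 = report_hash(dict(report))                      # same content, same hash
h3 = report_hash({**report, "report_id": "R-43"})   # any change, new hash
```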
Secure Sharing Links with Access Controls
Given a user creates a secure sharing link for a generated report When the user sets an expiration time and (optionally) restricts to specific recipient emails Then the system generates a non-guessable URL that expires at the configured time and can be revoked immediately And recipients without authorization receive a 403 response and no data is leaked via error messages or headers And all link accesses are logged with timestamp, IP, user/recipient identity (if known), and user agent for audit purposes
Scheduled Automated Delivery and Notifications
Given a user configures an automated delivery with recurrence (daily/weekly/monthly), delivery time, time zone, recipients, format (PDF/CSV), and filters When the scheduled time occurs Then the system generates the report with the saved filters and delivers it within 5 minutes to all recipients And delivery includes a secure sharing link and (optionally) the file attachment per configuration And failures are retried up to 3 times with exponential backoff and an alert is sent to the owner on final failure And a delivery log records success/failure, attempt count, and links to the generated report instance

Visit Bank

Tracks carryover eligibility and payer rules to bank or make up missed visits when allowed, and prevents unauthorized “borrowing” when it isn’t. Suggests compliant make-up sequences and logs a clean ledger of banked vs. used visits per payer, so agencies preserve revenue while staying within policy lines.

Requirements

Payer Rules Engine
"As an operations manager, I want payer carryover rules encoded and versioned so that eligibility is validated automatically and consistently across scheduling and reporting."
Description

Build a configurable engine to encode payer-specific visit banking policies, including whether carryover is allowed, permissible carryover windows (e.g., same week, same month, within authorization period), maximum bankable visits, expiration rules, cross-discipline allowances, daily/weekly caps, provider type restrictions, and required documentation codes. Support effective dating and versioning of rules, provenance links to policy sources (attachments/URLs), and a low-code admin UI for operations to update rules safely. Expose validation services used by scheduling, documentation, and reporting, with real-time evaluation and clear, human-readable explanations of pass/fail outcomes. Provide a sandbox to test rule changes against historical data before publishing and feature flags to roll out per payer or branch.

Acceptance Criteria
Admin Configuration of Payer Ruleset
Given I am an Operations Admin with RULES_EDIT access And payer "Payer A" exists When I create a ruleset with: carryoverAllowed=true carryoverWindowType="SameWeek" maxBankableVisits=2 expirationDays=14 crossDisciplineAllowed=false dailyCap=1 weeklyCap=3 allowedProviderTypes=["RN","LPN"] requiredDocumentationCodes=["MU1"] policySourceLinks=["https://payerA/policy.pdf"] effectiveStart=2025-09-15 Then the system validates enumerations and ranges and blocks invalid inputs with inline errors And saves the ruleset as Draft with version "v1", creator, timestamp, and change note And displays a diff against the last Published version (or empty if none)
Real-Time Validation API Evaluates Banking Eligibility
Given a missed visit on 2025-09-16 for patient X with payer "Payer A" and discipline "PT" And providerType="RN" and documentationCodes=["MU1"] And the Published ruleset for Payer A is v1 with carryoverWindowType="SameWeek", maxBankableVisits=2, dailyCap=1, weeklyCap=3, crossDisciplineAllowed=false When the Validation API is called with requestId "req-123", branchId "BR-1", and eventDate 2025-09-17 Then the API returns 200 with outcome="PASS", ruleVersion="v1", bankableCountRemaining reported, expirationDate computed per rules, and capsRemaining reflecting daily/weekly usage And includes explanation text <= 300 chars and machine-readable violations=[] And completes within p95 <= 300ms and p99 <= 500ms over 10k requests
Effective Dating, Versioning, and Non-Overlapping Publication
Given Payer A has a Published ruleset v2 effectiveStart=2025-07-01 effectiveEnd=null And a Draft v3 exists with effectiveStart=2025-09-15 When I attempt to publish v3 Then the system requires setting v2.effectiveEnd=2025-09-14 to prevent overlap And after publish, evaluations use v2 for events dated <= 2025-09-14 and v3 for events dated >= 2025-09-15 And historical versions become read-only and auditable (actor, timestamp, change note) And overlapping date ranges are blocked with an explicit error message
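Version selection against non-overlapping effective date ranges can be sketched as below, using the v2/v3 dates from this scenario (structure and field names are illustrative):

```python
from datetime import date

# Published versions with non-overlapping effective ranges, per the scenario
VERSIONS = [
    {"version": "v2", "start": date(2025, 7, 1), "end": date(2025, 9, 14)},
    {"version": "v3", "start": date(2025, 9, 15), "end": None},  # open-ended
]

def version_for(event_date: date):
    """Return the single published version whose effective range covers the
    event date, or None if no version applies."""
    for v in VERSIONS:
        if v["start"] <= event_date and (v["end"] is None or event_date <= v["end"]):
            return v["version"]
    return None
```

Because publication forbids overlapping ranges, at most one version can match any event date.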
Sandbox Simulation Against Historical Data
Given a Draft ruleset v3 for Payer A And a selectable historical dataset of visits from 2025-06-01 to 2025-08-31 When I run a sandbox simulation Then the system evaluates v3 vs the current Published version and produces a diff report including passCountDelta, failCountDelta, estimatedRevenueImpact, and top 50 changed cases And no production ledgers, schedules, or notes are modified And the run completes within 15 minutes for up to 100k visits and is downloadable as CSV And only users with RULES_EDIT or RULES_PUBLISH can run simulations
Feature-Flagged Rollout per Payer/Branch
Given a Published ruleset v3 for Payer A is behind feature flag "rules.v3" When I enable the flag for branches ["BR-1","BR-2"] Then scheduling, documentation, and reporting services for those branches use v3 for evaluations within 5 minutes And branches without the flag continue using the previous Published version And disabling the flag reverts within 5 minutes with no data loss And all flag changes are logged with actor, timestamp, scope, and before/after values
Explanations and Provenance in Validation Outcomes
Given an attempt to bank a visit that violates dailyCap=1 for 2025-09-17 When the Validation API responds Then outcome="FAIL" And explanation includes a human-readable sentence naming the violated rule, the actual count, the allowed cap, and the relevant date (e.g., "Daily cap exceeded: 2 of 1 on 2025-09-17") And the response includes ruleId, ruleVersion, and policySourceLinks[] with at least one accessible URL or attachmentId And requiredDocumentationCodesMissing is returned when codes are absent
Cross-Discipline, Provider-Type, and Cap Enforcement
Given crossDisciplineAllowed=false and allowedProviderTypes=["RN","LPN"] and weeklyCap=3 And the patient has 2 banked PT visits this week and 1 OT banked visit And providerType="CNA" When the Validation API evaluates a PT make-up on 2025-09-18 Then outcome="FAIL" with violations including "PROVIDER_TYPE_NOT_ALLOWED" and, if PT attempts > 3, "WEEKLY_CAP_REACHED" (ignoring OT due to cross-discipline disallow) And if providerType="RN" and total PT banked < 3, outcome="PASS"
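The provider-type and weekly-cap checks in this scenario could be evaluated as in this sketch; the function and its return shape are assumptions, while the rule values mirror the scenario:

```python
RULES = {  # subset of the Payer A ruleset used in the scenarios above
    "allowedProviderTypes": {"RN", "LPN"},
    "weeklyCap": 3,
    "crossDisciplineAllowed": False,
}

def validate_makeup(provider_type, discipline, banked_this_week_by_discipline):
    """Return (outcome, violations) for a make-up attempt.
    banked_this_week_by_discipline: e.g. {"PT": 2, "OT": 1}."""
    violations = []
    if provider_type not in RULES["allowedProviderTypes"]:
        violations.append("PROVIDER_TYPE_NOT_ALLOWED")
    if RULES["crossDisciplineAllowed"]:
        used = sum(banked_this_week_by_discipline.values())
    else:
        # Cross-discipline disallowed: only same-discipline visits count
        used = banked_this_week_by_discipline.get(discipline, 0)
    if used + 1 > RULES["weeklyCap"]:
        violations.append("WEEKLY_CAP_REACHED")
    return ("PASS" if not violations else "FAIL"), violations
```

With 2 PT and 1 OT visits banked this week, a CNA attempt fails on provider type (the OT visit is ignored for the cap), while an RN attempt passes.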
Eligibility & Accrual Calculator
"As a scheduler, I want to see an accurate, up-to-date banked visit balance with expiration dates so that I can plan compliant make-up visits without manual calculations."
Description

Implement a computation service that determines which missed visits qualify to be banked, calculates available/pending/expired balances, and attaches expiration dates per patient–payer–authorization–service line. Account for edge cases such as partial visits, cancellations, patient refusals, rescheduled visits within window, authorization changes, and retroactive documentation updates. Perform incremental recalculation on relevant events (visit completion, cancellation, rule change) and write results to a fast, queryable store. Surface current balance and expiry countdown within patient header, scheduler views, and the mobile app, with consistent totals across devices and offline-safe caching.

Acceptance Criteria
Eligibility Evaluation for Missed and Partial Visits
- Given a missed visit marked "Canceled by patient" and payer rule patient_refusal_bankable=true with a banking window of 14 days, When the calculator runs, Then it marks the visit bankable with 1.0 unit and sets expiration_date = scheduled_date + 14 days (payer timezone).
- Given a visit completed at 45% of authorized duration and payer rule min_partial_percent=50 and partial_credit_units=0.5, When the calculator runs, Then it allocates 0.0 units and records reason_code="UNDER_MIN_DURATION".
- Given a visit completed at 55% with the same payer rule, When the calculator runs, Then it allocates 0.5 units and sets expiration_date per rule configuration.
- Given a visit rescheduled and completed within the payer-configured window, When the calculator runs, Then it sets original miss status="RESOLVED_IN_WINDOW" and allocates 0.0 units.
- Given a provider no-show and payer rule caregiver_no_show_bankable=false, When the calculator runs, Then it allocates 0.0 units with reason_code="PROVIDER_NO_SHOW_NOT_ELIGIBLE".
- Given a missed visit on service_line_id A and available balance exists only on service_line_id B, When the calculator runs, Then it allocates 0.0 units and records reason_code="CROSS_SERVICE_BORROWING_NOT_ALLOWED".
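The partial-visit rule in these scenarios reduces to a small threshold check; a minimal sketch using the rule values above (function name is an assumption):

```python
RULE = {"min_partial_percent": 50, "partial_credit_units": 0.5}  # from the scenarios

def partial_credit(completed_percent: float):
    """Return (units, reason_code) for a partially completed visit,
    per the payer rule above."""
    if completed_percent < RULE["min_partial_percent"]:
        return 0.0, "UNDER_MIN_DURATION"
    return RULE["partial_credit_units"], None
```

A 45% completion allocates nothing with reason code UNDER_MIN_DURATION; a 55% completion allocates 0.5 units.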
Accrual Balances and Expirations per Patient–Payer–Authorization–Service Line
- Given multiple bankable events for the same composite key (patient_id+payer_id+authorization_id+service_line_id), When the calculator runs, Then it computes and persists balances.available, balances.pending, and balances.expired as non-negative decimals with precision per payer-configured unit_increment.
- Given banked units from visits whose documentation_status != "FINALIZED", When the calculator runs, Then those units are categorized as pending and are excluded from available until documentation is finalized.
- Given available units with varying expiration_date values, When the calculator runs, Then it applies FIFO by earliest expiration when suggesting or applying make-up usage and exposes next_expiration_at accordingly.
- When current datetime crosses expiration_date at 00:00:00 in the payer-configured timezone, Then affected units move from available/pending to expired on the next recalculation and balances reflect the change.
- When an authorization’s effective date range changes, Then the calculator re-evaluates affected banked units, rebinding to the correct authorization if still valid or moving to expired with reason_code="AUTH_WINDOW_CLOSED" if now out of range.
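The FIFO-by-earliest-expiration drawdown described above can be sketched as follows (the lot structure and function name are illustrative):

```python
from datetime import date

def consume_fifo(lots, units_needed):
    """Draw banked units from available lots in order of earliest expiration
    (FIFO by expiration_date). Each lot: {"units": float, "expires": date}.
    Returns (drawdowns, shortfall)."""
    drawdowns, remaining = [], units_needed
    for lot in sorted(lots, key=lambda l: l["expires"]):
        if remaining <= 0:
            break
        take = min(lot["units"], remaining)
        if take > 0:
            drawdowns.append({"expires": lot["expires"], "units": take})
            remaining -= take
    return drawdowns, remaining

lots = [
    {"units": 1.0, "expires": date(2025, 9, 30)},
    {"units": 2.0, "expires": date(2025, 9, 20)},  # expires first, so used first
]
draw, shortfall = consume_fifo(lots, 2.5)
```

The soonest-to-expire lot is fully consumed before any later lot is touched, which also makes next_expiration_at straightforward to expose.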
Incremental Recalculation on Relevant Events
- Given any of these events: visit completion, visit cancellation, documentation finalized/unfinalized, payer rule change, authorization change, retroactive edit to visit time or service line, When the event is received, Then only the affected composite keys are recalculated.
- Given a single-patient event, When recalculation runs, Then updated balances and ledger are written within 2 seconds p95 and 5 seconds p99 end-to-end.
- Given a duplicate delivery of the same event_id, When recalculation runs, Then results are idempotent with no duplicate ledger entries and no net balance change.
- Given a transient processing failure, When recalculation runs, Then it retries up to 3 times with exponential backoff and surfaces an error if still unsuccessful, leaving prior balances unchanged.
Persistence and Query Performance Targets
- When writing recalculation results (balances and ledger entries), Then the operation is atomic and versioned so that subsequent reads return a consistent snapshot.
- Given a patient header balances query by composite key, When executed under a tenant with up to 50k patients, Then it returns within 100 ms p95 and 300 ms p99.
- Given queries filtered by patient_id, payer_id, authorization_id, service_line_id and ordered by expiration_date asc, When executed, Then they return correct results and use appropriate indexes.
- Given a transient data store outage during write, When detected, Then no partial writes occur, last-cached values are served with stale=true, and an error is logged with correlation_id.
Cross-Device Consistency and Offline Caching
- Given the same as_of timestamp, When the web patient header, web scheduler, and mobile app fetch balances for the same composite key, Then they display identical available and expired totals and the same earliest_expiration countdown in whole days.
- Given a payer-configured timezone for expiration, When countdowns are displayed in a user’s local timezone, Then the day rollover occurs within 60 seconds after midnight in the payer timezone and the displayed days_remaining decrements by 1.
- Given the mobile app is offline, When balances are requested, Then it shows last_synced balances with an "as of" timestamp and prevents actions that would result in available < 0; upon reconnection, it syncs and updates within 10 seconds.
- Given a recalculation event occurs while multiple clients are connected, When polling or receiving push updates, Then all clients reflect the new balances within 10 seconds p95 and 20 seconds p99.
Audit Ledger Integrity and Compliance Controls
- Given any balance-affecting change, When applied, Then an append-only ledger entry is created with fields: composite_key, event_type, source_event_id, reason_code, units_delta, pre_balance, post_balance, expiration_date, actor_id, occurred_at, created_at.
- Given a correction is required, When applied, Then it is recorded as a compensating entry reversing the prior units_delta and labeled with reason_code="REVERSAL"; prior entries remain immutable.
- Given a ledger export by date range and payer, When units_delta is summed and reconciled with current balances, Then the difference equals 0.0 within a tolerance of 0.0 units.
- Given an attempt to bank or apply units violates payer rules, When processed, Then the action is denied, balances remain unchanged, and a DENIED ledger entry with units_delta=0 and a specific reason_code is recorded for audit.
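The append-only ledger and compensating-entry pattern above can be sketched as follows (a simplified entry shape; the full field list in the criteria is omitted for brevity, and reason codes here are illustrative):

```python
ledger = []  # append-only; corrections are compensating entries, never edits

def post(event_type, units_delta, reason_code, balance):
    """Append a ledger entry capturing the pre/post balance and return
    the new balance. Prior entries are never modified."""
    entry = {
        "event_type": event_type,
        "units_delta": units_delta,
        "reason_code": reason_code,
        "pre_balance": balance,
        "post_balance": balance + units_delta,
    }
    ledger.append(entry)
    return entry["post_balance"]

bal = 0.0
bal = post("BANKED", 1.0, "PATIENT_REFUSAL_BANKABLE", bal)
bal = post("BANKED", 0.5, "PARTIAL_CREDIT", bal)
bal = post("REVERSAL", -0.5, "REVERSAL", bal)  # compensating entry, not an edit
# Reconciliation: the sum of deltas must equal the current balance exactly
assert sum(e["units_delta"] for e in ledger) == bal
```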
Make-up Visit Sequencer
"As a dispatcher, I want the system to suggest compliant make-up visit sequences so that I can fill routes efficiently while preventing policy violations."
Description

Provide an intelligent suggestion engine that proposes compliant make-up visit sequences that prioritize soonest-to-expire balances while minimizing caregiver travel and respecting patient availability, caregiver qualifications, daily/weekly caps, and payer rules. Integrate directly into the scheduler to offer one-click insertion of suggested visits into routes, with conflict detection and resolution tips. Include a what-if mode to simulate different scheduling choices and lock selected sequences. Expose an API for optimization runs and support graceful degradation to heuristic suggestions when optimization services are unavailable or offline.

Acceptance Criteria
Compliant Sequence Suggestion with Expiry-First Prioritization
Given a set of eligible missed visits with varying expiry dates, patient availability windows, caregiver qualifications, configured daily/weekly caps, and payer rules When the sequencer is run for a specified caregiver and date range Then a sorted list of suggested make-up sequences is returned within 5 seconds for up to 100 candidate visits across 20 patients And the list is sorted primarily by earliest balance expiry date and secondarily by total travel time ascending And every visit in every suggested sequence satisfies patient availability, caregiver qualification, payer make-up window, and configured daily/weekly cap constraints And each suggestion includes total_travel_minutes, total_travel_km, per-visit compliance rationale, and per-visit expiry date used for prioritization And if no compliant sequence exists, the response returns an empty list and a reason code with blocking rule_ids
One‑Click Insertion with Conflict Detection and Resolution Tips
Given a user selects a suggested make-up sequence for a caregiver When the user clicks "Insert into route" Then the system validates feasibility (time overlaps, drive time feasibility, location, caps) before commit and displays any conflicts with categorized reasons And the user is offered automated resolution tips (e.g., reorder stops, shift within configured tolerance, choose alternate day within availability) and can apply fixes And the operation is atomic: either all visits are scheduled or none if any conflict remains unresolved And on success, the caregiver route is updated with planned times and recalculated travel, and the suggestion is marked as scheduled And the insertion completes within 2 seconds for sequences up to 10 visits
What‑If Simulation and Sequence Locking
Given a scheduler opens What‑If mode with adjustable constraints (date range, max daily visits, travel limit, caregiver filters) When the user runs simulations Then alternative sequences are generated without altering the live schedule and are labeled with scenario_id and parameters used And the user can lock a chosen sequence, which prevents its underlying make-up balances from being consumed elsewhere until unlock or expiry And a lock token is returned and required for insert or unlock operations And locked sequences have a configurable TTL and show remaining time; expired locks release balances automatically And attempts to modify or insert a locked sequence without the valid token are rejected with 403 and a reason code
API Optimization Run with Graceful Degradation
Given the endpoint POST /v1/optimization/make-up-sequences with a valid request payload (caregiver_id, date_range, constraints) When the API receives a request Then it validates schema and returns 200 with suggestions matching schema v1 including fields optimized=true and degraded=false when the optimization service responds within 3 seconds And if the optimization service is unavailable or times out after 3 seconds, the API returns heuristic suggestions with optimized=false, degraded=true, and header X-Degraded-Reason set And every response includes request_id, generated_in_ms, and a list of applied constraints; total response time does not exceed 6 seconds And invalid payloads return 422 with field-level errors; if no heuristic path exists, return 503 with a retry-after header
Rule Compliance Gate: Payer Policies, Caps, and Qualifications
Given payer policies (make-up windows, carryover eligibility, borrowing prohibitions), configured daily/weekly caps, and caregiver qualification matrices When candidate visits are evaluated for inclusion in a suggested sequence Then any candidate that would violate a rule is excluded from suggestions And each suggested visit carries rule_check_status=pass and an array of applied rule_ids with brief rationales And cross-payer sequencing is disallowed unless explicitly permitted by both payer rules; sequences default to being grouped per payer And cap calculations include already scheduled visits in the period and any tentatively locked sequences to prevent over-allocation
Audit Trail and Ledger Update on Insert/Lock
Given a sequence is locked or inserted from the sequencer into the live schedule When the operation completes Then an immutable audit event is recorded with actor, timestamp, caregiver_id, patient_ids, payer_ids, rule_ids applied, and decision rationale And the Visit Bank ledger is updated atomically per payer: provisional debit on lock, finalize on insert, reverse on unlock, with before/after balances stored And ledger entries are append-only with unique ledger_id and are queryable by payer and date range And if the ledger write fails, no schedule changes are committed and a clear error is returned with correlation request_id
Banked Visit Ledger & Audit Trail
"As a compliance officer, I want a complete, tamper-evident ledger of banked and used visits so that I can demonstrate adherence to payer policies during audits."
Description

Create an immutable ledger that records every bank, use, expiration, and adjustment event with timestamps, actor, source event (e.g., missed visit, override approval), rule version applied, before/after balances, and links to underlying artifacts (visit notes, voice clips, GPS). Support reversible adjustments via compensating entries with required reason codes and approver identity. Provide sortable, filterable views at patient, payer, branch, and agency levels and enable export in audit-ready formats. Ensure entries are tamper-evident and traceable for compliance audits.

Acceptance Criteria
Ledger Entry Creation with Required Fields
Given an eligible bank, use, expiration, or adjustment event occurs When the system records the ledger entry Then the entry includes: immutable entry ID, UTC timestamp (to the second), actor identity (userId/serviceId), source event type and reference ID, patient ID, payer ID, branch ID, rule version ID, event amount, and before/after bank balances And links to applicable artifacts are captured (visit note ID, voice clip ID, GPS trace ID); non-applicable artifacts are explicitly marked N/A And the write is idempotent per source event reference ID so retries do not create duplicates And p95 ledger write latency is ≤ 2 seconds under normal load
Compensating Adjustments and Reversibility Controls
Given a previously posted entry requires correction When a user with role “Compliance Approver” submits a compensating adjustment Then the system requires a reason code from a controlled list and a free-text rationale of at least 15 characters And captures approver identity and approval timestamp And original entries are never edited or deleted; net effect is achieved only via additive compensating entries And an undo/reverse action creates a new compensating entry referencing the prior entry ID, producing a net-zero effect when appropriate And attempts by unauthorized roles are rejected with HTTP 403 and no ledger mutation
Tamper-Evidence and Integrity Verification
Given the ledger has multiple entries When the integrity verification runs (on-demand and nightly) Then each entry’s content hash includes the previous entry’s hash to form an unbroken chain; any mismatch flags the ledger as compromised And an alert is emitted to compliance channels within 5 minutes of detection And export/API responses include a signed checksum for the returned range that clients can validate And any direct datastore mutation that bypasses the application layer is detectable because signature verification fails and is logged with timestamp, environment, and actor (if identifiable)
Multi-Level Views with Sort and Filter
Given a user opens the ledger view at patient, payer, branch, or agency scope When they apply filters (date range, event type, rule version, actor, payer, patient, branch) and change sort order (timestamp, event type, amount, before balance, after balance, actor) Then only matching entries are returned with correct totals And default sort is timestamp descending; changing sort updates the grid correctly And p95 response time is ≤ 1s for cached pages and ≤ 3s for first-page queries up to 10,000 rows And pagination supports page sizes 25/50/100 with accurate total count and page navigation And users can save and reapply view presets per scope
Audit-Ready Export with Metadata
Given a user has applied filters and a scope to the ledger When they trigger Export Then the system generates CSV and PDF outputs within 30 seconds for up to 50,000 rows And each export contains header metadata: export UTC timestamp, requesting user, scope, filter summary, record count, dataset hash/checksum, system version, and list of rule version IDs included And each row includes all ledger fields plus hyperlinks/URNs to artifacts (access-controlled) And re-running the same export parameters over the same time window yields identical dataset and checksum unless new entries were posted And the export action itself is recorded in an audit log with requester identity and file references
Rule Version Traceability and Re-Evaluation
Given a bank/use decision is executed under a specific policy When the ledger entry is created Then the rule version ID and human-readable name are captured and stored with the entry And subsequent policy changes do not retroactively alter historical entries And if a re-evaluation is requested under a new policy, a new entry is posted referencing the original entry ID, showing the delta and the new rule version, preserving full traceability
Artifact Link Validation and Access Control
Given a ledger entry references artifacts (visit notes, voice clips, GPS traces) When a permitted user follows a link Then the artifact resolves within the app or downloads successfully, and access is denied (HTTP 403) for unauthorized users without leaking artifact metadata And required artifacts are validated at write time; missing required artifacts cause the ledger write to fail atomically with a descriptive error code And optional artifacts, when not present, are recorded as optional-missing and do not block ledger entry creation
Scheduling Guardrails & Overrides
"As a care coordinator, I want scheduling guardrails with controlled overrides so that the team stays compliant while still handling urgent patient needs responsibly."
Description

Embed real-time guardrails in the web and mobile schedulers to block or warn on unauthorized borrowing or over-scheduling beyond payer caps and carryover windows. Provide clear, actionable messaging explaining the violated rule and showing compliant alternatives. Support role-based overrides with justification, approver workflow, and automatic ledger entries for all overrides. Validate bulk scheduling actions and route optimizations, and function in offline mode with locally cached rules and balances, reconciling on re-connect.

Acceptance Criteria
Real-Time Block on Unauthorized Borrowing with Alternatives
Given a scheduler attempts to create a visit that would require borrowing units/hours from a future period not permitted by the client’s payer When the user taps/clicks Save in the web or mobile scheduler Then the system blocks the save (no visit created/updated) and displays an error banner and modal And the modal includes: payer name, violated rule id/code, plain‑language rule description, coverage period dates, current banked/used balances, and at least 3 compliant alternative dates/times or visit sequences And selecting any alternative updates the draft visit and revalidates within 300 ms average and 1000 ms p95 And no Visit Bank ledger entry is created for the blocked attempt And the blocked attempt is audit‑logged with timestamp, user id, client id, caregiver id, payer id, attempted units, and rule id
Payer Cap and Carryover Window Enforcement on Create/Edit
Given a user creates or edits a visit that would exceed a payer’s weekly/monthly unit cap or fall outside the allowed carryover window When the user attempts to save the visit Then for non‑overridable violations, the system hard‑blocks the save and presents a message showing: cap/window limit, current scheduled/used totals, and the exact overage amount And for overridable violations, the system shows a warning with a “Proceed with Override” action (if the user is permitted) and a “View Compliant Alternatives” action And cap/window calculations include scheduled, pending, banked, and used visits within the relevant period as of the latest sync And behavior is consistent on web and mobile, with validation completing within 500 ms average and 1500 ms p95
Role-Based Override with Justification and Approval Workflow
Given a violation is flagged as overridable by policy When a user with the appropriate role chooses Proceed with Override Then the user must select a reason code and enter a justification of at least 15 characters before continuing And the system determines if an approval is required based on payer rule and org policy; if required, it creates an approval task, notifies the approver group, and sets visit status to Pending Override Approval; if not required, the visit is confirmed immediately And upon approval (or immediate confirmation if approval not required), the system creates an automatic Visit Bank ledger entry with: override id, rule id, payer id, visit id, requester id, approver id (if any), reason code, justification text, units delta, balances before/after, timestamps And if the approval is rejected or times out per SLA, the visit reverts to its pre‑override state and the requester is notified And users without override permission cannot proceed and the attempt is audit‑logged
Bulk Scheduling and Route Optimization Guardrails
Given a user submits a bulk create/copy of visits or applies a route optimization that proposes schedule changes When the batch is submitted Then each proposed visit is validated against payer caps, carryover windows, and borrowing rules And valid visits are saved; invalid visits are not saved and are returned with specific rule violations and compliant alternatives when available And the batch result includes totals for proposed/created/blocked/override‑eligible and a downloadable CSV with per‑visit outcomes And no partial visit records are committed for invalid items (atomic per visit) And batches up to 500 visits complete validation within 10 seconds p95 and 3 seconds median And all batch actions are audit‑logged with a batch id, counts, and rule hit breakdown
Offline Guardrails with Cached Rules and Reconciliation
Given a mobile user is offline with locally cached payer rules and Visit Bank balances synced within the last 24 hours When the user schedules or edits visits offline Then guardrails use the cached data to hard‑block non‑overridable violations and warn on overridable ones And if cache age exceeds 24 hours or required payer data is missing, the system allows only Draft saves, labels the visit Needs Revalidation, and defers any ledger impact And upon reconnect, the system automatically revalidates offline changes; non‑overridable violations are blocked and surfaced in an Offline Reconciliation queue with compliant alternatives; overridable ones can enter the override flow And reconciliation is idempotent and audit‑logged; users are notified of any changes applied or items requiring attention
Visit Bank Ledger Integrity and Audit Reporting
Given an approved override or compliant make‑up visit affects banked/used balances When the change is committed Then exactly one atomic, idempotent Visit Bank ledger entry is written containing: visit id(s), client id, caregiver id, payer id, rule id, action type (bank/use/override), units delta, balances before/after, source (UI/API/Bulk/Offline), actor ids, timestamps And cross‑payer borrowing is prevented; attempts are blocked and logged with rule id And daily ledger totals reconcile exactly with scheduler totals for banked vs. used; any discrepancy triggers an alert and is flagged in the audit report And users can generate an audit‑ready report by date range/payer/client/caregiver showing banked vs. used vs. overrides with totals matching ledger and exportable to CSV
Expiry Alerts & Digests
"As an operations lead, I want timely alerts before banked visits expire so that my team can schedule make-ups and avoid revenue loss and non-compliance."
Description

Deliver proactive notifications when banked visits approach expiration, with configurable thresholds per payer and branch. Send alerts via in-app, mobile push, and email to the responsible scheduler, caregiver lead, and account owner. Provide daily/weekly digests summarizing soon-to-expire balances and one-click actions to schedule suggested make-ups. Support acknowledgement, snooze, quiet hours, and a full alert audit trail, respecting user notification preferences and HIPAA constraints.

Acceptance Criteria
Configure Expiry Thresholds per Payer and Branch
- Given an org admin is on Notification Settings, when they set an Expiry Threshold in whole days at the payer level and optionally override at the branch level, then the system saves the configuration with validation: integer 1–365. - Given both payer and branch thresholds exist, when calculating alert timing, then the branch-level value overrides the payer-level value for that branch. - Given no branch override exists, when evaluating a visit for expiry, then the payer-level threshold applies. - Given invalid input (blank, non-integer, <1, >365), when Save is clicked, then the change is rejected with a clear validation message and no configuration is persisted. - Given a threshold value is changed, when Save succeeds, then future alert evaluations use the new value and a configuration audit record is written with before/after values, user, and timestamp.
Deliver Expiry Alerts to Responsible Roles
- Given a banked visit is within the configured expiry threshold, when the daily evaluation runs at 06:00 branch-local time, then an expiry alert is created for that visit. - Then notifications for that alert are delivered via in-app, mobile push, and email to the assigned scheduler, caregiver lead, and account owner for the case. - Then recipient filtering respects per-user notification preferences per channel; if a channel is disabled for a user, that channel is not sent to that user while the in-app alert remains. - Then email and push payloads include minimum necessary information only (patient initials, payer, branch, days-to-expiry, visit count, secure link) and exclude full PHI. - Then each notification is sent within 5 minutes of alert creation; transient failures are retried up to 3 times with exponential backoff; permanent failures are logged in the alert audit. - Then recipients do not receive more than one notification for the same visit at the same threshold within a 24-hour window (per user, per channel).
Generate Daily and Weekly Expiry Digests with One‑Click Actions
- Given daily and weekly digest schedules are enabled, when the configured send time occurs (per branch), then a digest is generated and sent via email and in-app summary to schedulers, caregiver leads, and account owners with access to that branch/payer. - Then the digest lists banked visits expiring within each payer/branch’s threshold window, grouped by payer and branch, sorted by soonest expiry, with totals and counts per group. - Then each line item includes a one-click Schedule Make‑up action that opens the Visit Bank scheduling flow pre-filtered for that patient/payer; clicks are tracked in the audit trail. - Then if quiet hours are active for a recipient at the scheduled time, the digest notification is delayed until quiet hours end for that recipient. - Then digest generation completes within 60 seconds per branch; failures are logged and retried within 15 minutes; partial successes identify which recipients were skipped or failed.
Manage Alert Actions: Acknowledge, Snooze, Quiet Hours
- Given a user views an expiry alert, when they click Acknowledge, then the alert is marked Acknowledged for that user, subsequent duplicate notifications for the same visit/threshold are suppressed for that user until the visit is scheduled or expires, and the action is audit logged. - Given a user selects Snooze and chooses a duration between 1 hour and 7 days, then notifications for that alert are suppressed for that user on all channels until the snooze expires; the snooze can be cancelled; all changes are audit logged. - Given user-specific quiet hours are configured, when an alert would be delivered during quiet hours, then push and email delivery are suppressed and queued until quiet hours end; the in-app alert remains accessible without a real-time toast. - Given any action (acknowledge, snooze start/end, cancel) occurs, then an optional comment (max 500 characters) can be saved with the action and appears in the audit trail.
Maintain Full Alert Audit Trail and Respect Notification Preferences
- Given any alert lifecycle event occurs (created, delivery attempted/succeeded/failed by channel, viewed, acknowledged, snoozed, link clicked, scheduled via action), then an immutable audit record is stored with timestamp, actor (user/system), channel, outcome, and request metadata (IP and user-agent for user events). - Then admins can view audit history in-app and export it to CSV by date range and branch; exports exclude PHI beyond minimum necessary and include alert IDs for traceability. - Then user notification preferences support per-channel (in-app mandatory, push optional, email optional) and per-alert-type toggles; changes take effect within 5 minutes and are audit logged. - Then all outbound email/push links require authentication and expire within 24 hours; logs and notifications store/display patient initials and internal IDs only to comply with HIPAA minimum necessary.
One‑Click Make‑Up Scheduling from Alert/Digest
- Given a recipient clicks Schedule Make‑up from an alert or digest, then the Visit Bank scheduler opens pre-populated with patient, payer, branch, banked visit details, and a compliant make‑up sequence suggestion derived from payer rules and caregiver availability. - Then on confirmation, make‑up visits are created, the banked/used ledger is updated atomically, and the originating alert is marked Resolved; the action is recorded in the audit trail with the source alert ID. - Then if the action is not permitted by rules (e.g., borrowing not allowed) or slots become unavailable, the user receives a clear error and alternative compliant suggestions; no ledger changes occur on failure. - Then end-to-end from click to scheduled confirmation completes within 10 seconds under normal load (p50) with errors surfaced to the user if SLA is exceeded.
Time Zone, Cutoffs, and DST Handling
- Given a branch time zone is configured, then expiry evaluations and alert/digest scheduling use branch-local time with daily cutoffs at 00:00 branch-local. - Then users see timestamps rendered in their profile time zone with the branch time zone indicated where relevant; links and tokens use UTC for validation. - Then during daylight saving transitions, alerts are neither skipped nor duplicated: evaluations run once per calendar day per branch and apply correct local offsets.
Compliance Reporting & Exports
"As an agency owner, I want comprehensive compliance and financial impact reports for banked visits so that I can demonstrate policy adherence and quantify revenue preservation."
Description

Generate one-click, audit-ready reports per payer and patient showing banked, used, pending, and expired visits, rule versions applied, override counts, and associated documentation artifacts. Include trend and aging views, revenue-at-risk estimates, and filters by date range, branch, payer, and service line. Support CSV/XLSX exports and payer-specific templates, scheduled delivery, and API access for BI tools. Enforce tenant isolation and PHI-safe exports with access controls and watermarking.

Acceptance Criteria
One-Click Audit-Ready Report Generation
Given a user with Compliance Reporting permission within their tenant and a selected payer, patient, and date range, When the user clicks "Generate Report", Then the report displays counts of Banked, Used, Pending, and Expired visits matching the Visit Bank ledger for the same filters. And the report displays rule version identifiers applied to visit groupings and a total count of overrides within the selected period. And the report lists identifiers or links to associated documentation artifacts for relevant visits (e.g., visit note IDs, voice clip IDs, sensor record IDs). And the report header shows payer, patient, branch, service line (if filtered), report period, generation timestamp, and generating user. And totals reconcile such that Banked + Used + Pending + Expired equals total visits in scope.
Filter Accuracy Across Date, Branch, Payer, and Service Line
Given the Compliance Report view, When the user applies any combination of Date Range, Branch, Payer, and Service Line filters, Then the results reflect the logical AND of all selected filters. And the Date Range filter is inclusive of the start and end dates using the agency’s configured time zone. And clearing filters resets the view to defaults with the last 30 days and all branches/payers/service lines selected. And the applied filters are displayed in the report header and are preserved when exporting or scheduling.
Trend, Aging, and Revenue-at-Risk Views
Given a selected date range of at least 14 days, When the user enables Trend & Aging, Then the report shows trend tables/charts of Banked, Used, Pending, and Expired visits aggregated by week or month across the selected period. And an Aging view groups banked/pending visits into buckets (0–7, 8–14, 15–30, 31–60, >60 days to expiry) using payer-specific expiration rules. And a Revenue-at-Risk estimate is displayed for visits in the "expiring within 14 days" and "expired" buckets using configured payer/service-line rates; items without a configured rate are labeled N/A and excluded from the monetary total. And totals in Trend & Aging reconcile with the base report for the same filters.
CSV/XLSX Exports and Payer-Specific Templates
Given a rendered Compliance Report, When the user selects Export and chooses CSV or XLSX with an optional payer template, Then the downloaded file includes the same rows as the on-screen report for the selected filters. And if a payer template is selected, the export matches the template’s column order, headers, required fields, and date/time formats. And CSV exports are UTF-8 encoded with properly quoted fields; XLSX exports preserve data types for dates, numbers, and text. And the filename follows CarePulse_Compliance_{tenant}_{payerOrAll}_{patientOrAll}_{YYYYMMDDHHmm}.{csv|xlsx}. And exports exclude any PHI fields not permitted by the selected template and match the report’s totals and counts.
Scheduled Report Generation and Delivery
Given a user with permission to schedule reports, When they create a schedule specifying report type, filters, export format, optional payer template, frequency (daily/weekly/monthly), execution time, and time zone, Then the schedule is saved and displays the next run time. When the schedule triggers, Then the system generates the report using the latest data and configured filters and stores it in Scheduled Reports history with a downloadable artifact and checksum/hash. And if notifications are enabled for the schedule, a notification record is created referencing the generated report. And on generation failure, the schedule run is marked Failed with an error reason and is visible in history; the failure is logged for audit purposes.
BI API Access for Compliance Reports
Given an authenticated BI client with a valid tenant-scoped token and reporting scope, When it calls GET /api/v1/compliance-reports with supported filters (date range, branch, payer, service line), Then the API returns 200 with a JSON payload equivalent in fields and aggregates to the on-screen report for the same filters. And the API enforces tenant isolation, returning only data for the client’s tenant. And the API supports pagination via cursor or limit/offset and returns pagination metadata (total or next cursor). And the response includes schemaVersion and generationTimestamp; invalid parameters return 400; unauthorized or insufficient scope returns 401/403 respectively.
Tenant Isolation, Access Controls, and PHI-Safe Watermarking
Given a signed-in user, When they attempt to view or export a Compliance Report outside their tenant, Then access is denied and no cross-tenant data is returned. And only users with appropriate permissions can view PHI fields; otherwise, PHI is masked/omitted per policy in UI, exports, and API responses. And all exported files are watermarked with tenant name/ID, generating user, generation timestamp, and a confidentiality notice; for XLSX the watermark appears in header/footer, for CSV a first-row banner is included. And an immutable audit log records report generation/export events including actor, time, filters, output format/template, file hash, and download events.

PlanSync Review

When an RN updates the plan of care, PlanSync generates a suggested schedule realignment: which visits to move, add, or drop to remain compliant. Users approve the changes in one sweep, and caregivers receive updates automatically. PlanSync maintains clinical intent and payer alignment without manual calendar surgery.

Requirements

Real-time Plan Change Detection
"As an RN or operations manager, I want schedule suggestions to auto-generate when I update the plan of care so that visits remain compliant without manual calendar edits."
Description

Continuously monitor RN-entered plan-of-care updates (frequency, visit type, start/end dates, duration, authorization units) and detect material changes in real time. On detection, version the plan, validate required fields, and trigger PlanSync to generate a new schedule proposal. Support updates originating in-app or via integration, apply debouncing to batch rapid edits, enforce role-based permissions, and log all change events. Must handle time zones, effective-dating, and overlapping orders without data loss.

Acceptance Criteria
Real-time Detection and PlanSync Trigger for Material Changes (All Sources)
Given an RN saves a plan-of-care change in-app or an integration posts a valid update affecting material fields [frequency, visit type, start date, end date, visit duration, authorization units], When any material field value differs from the current active version, Then a new plan version is created with a unique versionId and the change is detected within 5 seconds of receipt. Given a new plan version is created, When detection succeeds, Then exactly one PlanSync schedule proposal request is enqueued referencing that versionId within 5 seconds and no duplicate requests are created for the same version. Given a non-material edit occurs, When it is saved, Then no new plan version is created and no PlanSync request is triggered.
Debounce Rapid Sequential Edits into a Single Version
Given multiple material edits occur within a 15-second window for the same patient plan, When the system observes 3 consecutive seconds of inactivity or the 15-second window elapses, Then the system creates a single consolidated plan version reflecting the latest values and triggers exactly one PlanSync request. Given edits continue without a 3-second inactivity gap beyond 15 seconds, When the 15-second maximum window elapses, Then the current accumulated edits are versioned and a new debounce window starts for subsequent edits. Given two users edit concurrently within the debounce window, When conflicting values are submitted, Then last write within the window wins for field values and all individual edit events are recorded with a shared correlationId.
Validate Required Fields and Cross-Field Rules Before Versioning
Given a plan change is submitted, When any required field [frequency, visit type, start date, end date, visit duration, authorization units] is missing or invalid, Then the save is rejected with field-level error messages and no plan version or PlanSync request is created. Given the start date is on or after the end date while frequency > 0, When submitted, Then the save is rejected with an error indicating invalid date range. Given authorization units are fewer than the units implied by frequency and visit duration across the effective period, When submitted, Then the save is rejected with an error indicating insufficient authorization units. Given all required fields are valid and pass cross-field rules, When submitted, Then a new plan version is created and PlanSync is triggered per detection rules.
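The "insufficient authorization units" cross-field rule can be illustrated with a minimal sketch, assuming a 30-minutes-per-unit definition and a weekly frequency model; the function names and the period arithmetic are assumptions, not the engine's published rules.

```python
import math

def implied_units(visits_per_week, visit_minutes, period_days, minutes_per_unit=30):
    """Units the plan would consume over the effective period (assumed model)."""
    weeks = math.ceil(period_days / 7)
    units_per_visit = math.ceil(visit_minutes / minutes_per_unit)
    return visits_per_week * weeks * units_per_visit

def validate_authorization(visits_per_week, visit_minutes, period_days,
                           authorized_units):
    """Reject the save when authorized units cannot cover the implied usage."""
    needed = implied_units(visits_per_week, visit_minutes, period_days)
    if authorized_units < needed:
        return {"ok": False,
                "error": f"insufficient authorization units: "
                         f"need {needed}, have {authorized_units}"}
    return {"ok": True, "needed": needed}
```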
Enforce Role-Based Permissions on Plan Updates
Given a user without RN or designated clinical admin permissions attempts to update material plan-of-care fields, When the request is made, Then the action is denied with 403 Forbidden (or equivalent in-app error), the attempt is logged, and no version or PlanSync request is created. Given a user with RN or clinical admin role updates material fields, When the request is made, Then the action proceeds to validation, versioning, and detection flows. Given an integration uses API credentials mapped to insufficient scope, When an update is posted, Then the request is rejected with 401/403 and the event is logged without creating a version or triggering PlanSync.
Honor Patient Time Zone and Effective-Dating, Including DST
Given a patient has a stored IANA time zone, When a change includes effectiveStart and effectiveEnd, Then the system interprets and stores those timestamps in the patient’s local time and also normalizes them to UTC with recorded offset. Given a change is effective at a future local timestamp, When saved, Then a new plan version is created immediately with that effective date, and PlanSync generates schedules that take effect at that exact local timestamp. Given a change occurs on a Daylight Saving Time transition, When saved, Then the effective window honors local wall-clock time without unintended shifting due to offset changes, and no duplicate or missing service day is recorded.
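Interpreting an effective timestamp in the patient's IANA zone and normalizing it to UTC with a recorded offset can be sketched with the standard `zoneinfo` module. The function name is illustrative; the spec's storage format is not defined here.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def effective_instant(local_wall_clock: str, tz_name: str):
    """Interpret a naive local timestamp in the patient's IANA zone and return
    (local-aware datetime, UTC datetime, UTC offset) so both can be stored."""
    local = datetime.fromisoformat(local_wall_clock).replace(tzinfo=ZoneInfo(tz_name))
    utc = local.astimezone(timezone.utc)
    return local, utc, local.utcoffset()
```

Because the wall-clock time is anchored to the zone rather than a fixed offset, the same local hour before and after a DST transition maps to different UTC instants, which is exactly the "no unintended shifting" behavior the criterion requires.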
Handle Overlapping Orders Without Data Loss
Given a new order’s effective window overlaps an existing active order, When the change is saved, Then both orders are retained with distinct effective date ranges, and a new plan version references both orderIds with their applicability windows. Given overlapping orders are present, When PlanSync is triggered, Then the version payload includes both orders’ material constraints and no field values are overwritten or dropped from either order. Given an overlap is later resolved by removing or shortening one order, When the update is saved, Then a new version reflects the revised windows while all prior versions remain immutable and retrievable.
Comprehensive Change Event Logging and Traceability
Given any material change event is processed, When it is logged, Then the record includes UTC and patient-local timestamps, actor identity (userId or integrationId), source channel, patientId, planId, orderId(s), previous and new values for material fields, versionId, PlanSync requestId (if any), and a debounce correlationId. Given a plan version is created, When an auditor queries by patientId and time range, Then the system returns the ordered list of change events and versions within 1 minute with no gaps or duplicates for at least the last 365 days. Given a request to export a single patient’s plan change history is made, When processed, Then a machine-readable file containing the required log fields is produced and excludes PHI not needed for audit.
Compliance Rules Engine
"As an operations manager, I want PlanSync to enforce payer and regulatory rules so that suggested schedules are compliant and defensible."
Description

Provide a configurable, versioned rules engine encoding payer, program, and state regulations: visit frequency and spacing, eligible disciplines, time-of-day windows, blackout days, authorization unit limits, documentation prerequisites, and escalation thresholds. Evaluate current and proposed schedules for compliance, produce human-readable rationales for each suggested add/move/drop, and flag irreconcilable conflicts with recommended exceptions. Support rule effective dates, payer-specific overrides, and per-patient custom constraints.

Acceptance Criteria
Evaluate Compliance with Hierarchical Rules and Patient Overrides
Given a patient assigned to payer P, program G, and state S with a proposed weekly schedule of visits across disciplines And rules exist for state S and program G and payer-specific overrides for payer P And per-patient custom constraints are defined (e.g., no weekend visits) When the engine evaluates the schedule for the coverage period Then it applies rule precedence: patient constraints > payer overrides > program > state baseline And it enforces visit frequency and spacing constraints per discipline And it enforces eligible disciplines, time-of-day windows, and blackout days And it returns a compliance status per visit and for the schedule overall (Compliant/Non-Compliant) And it identifies each violation with rule.id, rule.version, source (payer/program/state/patient), severity, and affected visit IDs
Effective Dating and Rule Version Selection Across Date Range
Given rules with versions v1 effective until 2025-09-30 and v2 effective from 2025-10-01 for payer P And a schedule spanning 2025-09-25 to 2025-10-05 When the engine evaluates the schedule Then visits dated on or before 2025-09-30 use v1 and visits dated on or after 2025-10-01 use v2 And the output includes rule.version and rule.effectiveDateRange per violation/suggestion And no rule is applied outside its effective date range And the decision is deterministic: repeated evaluations with the same inputs produce identical outputs
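Deterministic version selection by effective date can be sketched as a simple lookup over inclusive date ranges. The list shape (`version`, `effective_from`, `effective_to`, with `None` meaning open-ended) is an assumption for illustration, not the engine's schema.

```python
from datetime import date

def select_rule_version(versions, visit_date):
    """Pick the rule version whose effective range covers the visit date;
    return None when no version applies (rules never apply outside
    their effective range)."""
    for v in versions:
        starts = v["effective_from"] is None or visit_date >= v["effective_from"]
        ends = v["effective_to"] is None or visit_date <= v["effective_to"]
        if starts and ends:
            return v["version"]
    return None
```

Because the function is a pure lookup over its inputs, repeated evaluations with the same inputs produce identical outputs, satisfying the determinism criterion.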
Authorization Unit Limits Enforcement and Suggestions
Given an authorization for 16 RN units and 8 PT units for period 2025-09-01 to 2025-09-30 with unit definition = 1 unit per 30 minutes And a proposed schedule that would consume 18 RN units and 8 PT units When the engine evaluates the schedule Then it flags RN overage as Non-Compliant and PT as Compliant And it suggests drops or duration reductions to meet RN unit limits with at least one minimal-change option And it provides a rationale with computed unit usage before/after and affected visit IDs And if no compliant arrangement exists within the period, it flags an irreconcilable conflict with recommended exception type and justification text stub
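The per-discipline overage check in this scenario (1 unit per 30 minutes, 18 RN units against a 16-unit limit) can be sketched as follows; the dict shapes are illustrative, not the response schema.

```python
import math

def units_consumed(visit_minutes_list, minutes_per_unit=30):
    """Total units under the 1-unit-per-30-minutes definition; partial units round up."""
    return sum(math.ceil(m / minutes_per_unit) for m in visit_minutes_list)

def check_authorization(schedule, limits, minutes_per_unit=30):
    """schedule: {discipline: [visit durations in minutes]}, limits: {discipline: units}.
    Returns a per-discipline finding with computed usage versus the limit."""
    findings = {}
    for discipline, visits in schedule.items():
        used = units_consumed(visits, minutes_per_unit)
        limit = limits.get(discipline, 0)
        findings[discipline] = {
            "used": used,
            "limit": limit,
            "status": "Compliant" if used <= limit else "Non-Compliant",
        }
    return findings
```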
Documentation Prerequisites and Escalation Thresholds
Given a rule that requires a signed Plan of Care before any skilled nursing visit And the patient record lacks the signed Plan of Care And an escalation threshold of 48 hours before the first scheduled visit When the engine evaluates the schedule 72 hours before the first SN visit Then it marks the visits as Blocked pending prerequisite and the schedule as Non-Compliant And it emits an escalation event with severity=High and dueIn=24h And when the prerequisite is attached, a subsequent evaluation clears the block without manual overrides
Human-Readable Rationales for Add/Move/Drop Suggestions
Given a schedule that violates a 2-visit-per-week skilled nursing frequency rule When the engine generates a suggested realignment Then each add/move/drop suggestion includes: plain-language reason, rule citation/ref, affected visits, before/after schedule snapshot IDs, and expected compliance outcome And reasons are readable at grade 8–10 and under 280 characters each And at least one suggestion preserves clinical intent tags (e.g., wound care M/W/F) where possible
Performance, Scale, and API Contract
Given a patient with up to 60 visits in a 30-day window and 200 applicable rules And the engine is invoked via API with requestId and idempotencyKey When the evaluation is executed under normal load Then 95th percentile evaluation latency is <= 300 ms and 99th percentile <= 600 ms measured at the service boundary And responses are idempotent within 24 hours for the same idempotencyKey And the response schema includes: scheduleComplianceSummary, visitFindings[], suggestions[], and exceptions[] with stable field names and types And the engine handles at least 50 concurrent evaluations without error
Smart Realignment Optimizer
"As a scheduler, I want optimal realignment suggestions that minimize disruption and maintain continuity so that patients see familiar caregivers and we stay within budget."
Description

Generate a ranked set of schedule adjustments (adds/moves/drops) that preserve clinical intent while minimizing disruption. Optimize against caregiver availability and skills, continuity-of-care preferences, live route travel times, overtime/union constraints, patient time windows, and authorization budgets. Respect existing appointments where possible, avoid double-booking, and compute proposals within 5 seconds for a typical caseload. Provide scoring explanations and allow configurable optimization weights per agency.

Acceptance Criteria
Compute Optimized Proposals Within 5 Seconds
Given a typical caseload of up to 150 patients, 60 caregivers, and 500 scheduled visits over the next 7 days And a plan-of-care update affecting at least one visit series When the optimizer is executed Then the first ranked proposal is returned within 5 seconds for at least 95% of runs and within 8 seconds for at least 99% of runs And the API response includes computation_time_ms for auditing And no request exceeds a 15-second timeout
Enforce Hard Constraints and Zero Conflicts
Given caregiver shift calendars, existing appointments, required skills/certifications, union/overtime rules, patient time windows, and payer authorization budgets When a proposal is generated Then zero double-bookings exist for any caregiver or patient And 100% of scheduled visits meet required skill/certification constraints And 100% of scheduled visits fall within the patient’s allowed time windows And union/overtime hard rules are not violated And authorization usage per patient/payer does not exceed remaining budgeted units/hours/dollars
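The zero-double-booking constraint reduces to interval-overlap detection per caregiver, sketched below; the tuple shape is an assumption for illustration.

```python
def has_double_booking(visits):
    """visits: iterable of (caregiver_id, start, end) with comparable times.
    Returns True if any caregiver has two overlapping visits."""
    by_caregiver = {}
    for caregiver, start, end in visits:
        by_caregiver.setdefault(caregiver, []).append((start, end))
    for intervals in by_caregiver.values():
        intervals.sort()
        for (s1, e1), (s2, e2) in zip(intervals, intervals[1:]):
            if s2 < e1:              # next visit starts before the previous ends
                return True
    return False
```

Sorting each caregiver's visits first means only adjacent pairs need checking, which keeps the feasibility check cheap inside the optimizer's 5-second budget.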
Minimize Disruption and Preserve Clinical Intent
Given a baseline schedule and a plan-of-care change with a known minimal-change oracle solution in the test fixture When proposals are generated Then the top-ranked proposal matches the oracle’s minimal number of changed visits (adds+moves+drops) or is within 1 change if multiple optima exist And continuity-of-care is preserved by keeping the same caregiver where clinically permitted in at least 90% of unchanged visit series And the proposal includes a diff summary enumerating counts of added, moved, and dropped visits
Incorporate Live Route Travel Times in Optimization
Given live routing data is available from the routing provider at computation time When a proposal is generated Then each caregiver’s daily route includes travel_time_minutes and total_distance And predicted segment travel times are within ±10% of the routing provider’s values for the same timestamp and coordinates And proposals that cause shift-end violations due to travel are excluded
Ranked Proposals With Transparent Scoring Explanations
Given optimization weights are configured for disruption, continuity, travel, overtime cost, and compliance risk When proposals are generated Then at least 3 feasible proposals are returned ranked by total_score (or all feasible if fewer than 3 exist) And each proposal includes total_score, component_scores, and a per-visit rationale referencing the constraints/weights that influenced each change And selecting a proposal reveals an explanation that traces how component_scores sum to total_score
Agency-Configurable Optimization Weights
Given an agency admin updates optimization weights via settings When the weights are saved Then the new weights persist per agency and environment with a stored version, timestamp, and user id And a subsequent optimization run reflects the updated weights, producing different scores/ranking on a known sensitivity dataset (Kendall tau distance > 0) And invalid weight values outside 0–1 are rejected with validation errors and no changes are saved
Bulk Approval & Override Console
"As a scheduler, I want to approve or tweak all proposed changes in one place so that I can finalize the schedule quickly and safely."
Description

Deliver a mobile-first review screen showing a clear diff between current and proposed schedules, with grouped changes by patient and caregiver. Enable one-click approve-all, selective accept/reject, inline edits, and conflict resolution prompts. Display compliance impacts, cost/authorization deltas, and travel implications before commit. Require RN sign-off for clinical-impacting changes, support undo and draft states, and write-back approvals atomically with optimistic concurrency.

Acceptance Criteria
Grouped Diff View by Patient and Caregiver
Given an RN opens the Bulk Approval & Override Console for a plan update with proposed changes Then the screen displays a side-by-side diff of current versus proposed schedules And changes are grouped first by patient, then by caregiver within each patient group And each change is labeled as Move, Add, or Drop And each group shows a change count and supports expand/collapse And each change row shows visit date/time, duration, caregiver, payer, authorization ID (if any), and clinical tag(s) And the console renders the initial diff within 2 seconds for up to 500 proposed changes And on a 360x640 mobile viewport, primary fields are legible without horizontal scrolling
Approve-All Commit with Optimistic Concurrency
Given there are proposed changes with no unresolved conflicts When the RN taps Approve All and confirms Then the system writes all accepted changes atomically to the live schedules And optimistic concurrency checks (ETag/version) are validated per patient schedule before write And if any version conflict is detected, no partial writes occur and a Refresh Required banner lists affected patients And on success, assigned caregivers to affected visits receive update notifications within 60 seconds And an audit record is stored with approver, timestamp, counts of adds/moves/drops, and version identifiers
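The all-or-nothing version check can be illustrated with an in-memory stand-in for the schedule store: every per-patient version is validated before any write, so a single stale version produces zero partial writes. The store shape and function name are assumptions.

```python
def commit_all(store, changes):
    """changes: list of (patient_id, expected_version, new_schedule).
    Validate every version first; if any check fails, write nothing and
    report the patients that need a refresh."""
    stale = [pid for pid, expected, _ in changes
             if store[pid]["version"] != expected]
    if stale:
        return {"committed": False, "refresh_required": stale}
    for pid, expected, new_schedule in changes:   # all checks passed: apply
        store[pid] = {"version": expected + 1, "schedule": new_schedule}
    return {"committed": True, "refresh_required": []}
```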
Selective Accept/Reject and Inline Edit with Live Impact Calculations
Given a list of proposed changes is displayed When the RN accepts or rejects individual changes Then the selection state is persisted in a draft for the session And when the RN edits a proposed visit's start time, duration, or caregiver within allowed constraints Then compliance impact, authorization units delta, and travel time change are recalculated and displayed within 500 ms And edited items are visibly marked as Edited by RN And Approve Selected commits only accepted items; rejected items are discarded from the proposal
Conflict Detection and Resolution Prompts
Given a proposed or edited change would create a conflict (double-booking, overtime, travel gap violation, or authorization overage) When the RN attempts to accept the change Then a conflict prompt appears detailing the issue and impacted patient/caregiver And the prompt offers resolution options (e.g., shift by X minutes, reassign caregiver, split visit, request extra units) And the commit of the conflicting item is blocked until resolved or rejected And upon choosing a resolution, the diff updates and impact metrics refresh within 500 ms
Compliance and Payer Impact Surface Before Commit
Given at least one change has a compliance or payer impact When the RN opens the change details sheet Then the console displays compliance status (Pass/At Risk/Fail), authorization units delta (+/-), and projected cost change ($) And each impact includes a rule source or policy reference identifier And changes with Fail status cannot be committed without override And a summary banner shows cumulative impacts for the selected scope (patient or all) prior to commit
RN Sign-off for Clinical-Impacting Changes
Given the commit includes clinical-impacting changes (e.g., visit type, frequency plan, goals alignment) When the RN proceeds to commit Then the system requires RN authentication (biometric or password re-entry) and e-signature attestation per legal format And the commit is blocked until sign-off completes successfully And the audit log captures credential used, timestamp, and attestation text version And if authentication fails 3 times, the commit is canceled and the draft remains intact
Draft Persistence, Undo, and Caregiver Notifications
Given the RN makes selections or edits When the RN saves as Draft or navigates away Then the draft state is persisted and can be reopened within 3 seconds from the review backlog And no caregiver notifications are sent while in Draft When a commit succeeds Then an Undo option is available for 10 minutes to revert the commit atomically And if Undo is used, previous schedules are restored, caregivers receive reversion notifications within 60 seconds, and an audit entry is recorded
Caregiver Update Delivery & Acknowledgment
"As a caregiver, I want timely, clear notifications of schedule changes with an easy way to acknowledge them so that I can adjust my day and confirm receipt."
Description

Automatically deliver approved changes to caregivers via in-app push with SMS/email fallback, including updated routes, visit notes highlights, and required documentation changes. Require acknowledgment within a configurable SLA, with reminders and escalations for non-response. Handle offline mode with queued sync, de-duplicate notifications across channels, and update external calendars when linked. Capture acknowledgments and reasons for declines, and notify schedulers of unfillable assignments.

Acceptance Criteria
Approved Update Delivery with Channel Fallback
Given a plan-of-care realignment has been approved for caregiver C And caregiver C has a valid in-app session token, verified SMS number, and verified email When the delivery job is triggered Then an in-app push notification is sent to C within 30 seconds containing updated routes, visit note highlights, and required documentation changes And the payload includes a single immutable notification_id unique to the update batch And if push is not confirmed delivered within 2 minutes, an SMS with a secure deep link carrying the same notification_id is sent And if SMS is undeliverable or fails, an email carrying the same notification_id is sent within 1 minute of SMS failure And only one actionable notification is presented across all channels for a given notification_id And all delivery attempts and outcomes are logged with timestamp, channel, and result in the audit trail And at least 99% of updates reach at least one channel within 5 minutes under normal operating conditions
Acknowledgment Within SLA, Reminders, and Escalation
Given org-level settings SLA_minutes=60, reminder_interval_minutes=15, max_reminders=3, escalation_target=scheduler S And caregiver C has received update notification_id N When C opens the notification Then C must choose Acknowledge or Decline before proceeding to updated schedule And upon Acknowledge, the system records user_id, notification_id, device_id, timestamp, and geo-coordinates (if enabled) And if no response within 60 minutes, a reminder is sent every 15 minutes up to 3 times across the best-available channel And after the third unanswered reminder, an escalation notice is sent to scheduler S and the assignment is marked "Awaiting Action - Overdue" And if C acknowledges after escalation, the system marks the acknowledgment as "Late" and notifies S of resolution And the acknowledgment rate and median time-to-ack are available in reporting for the date range including N
Offline Mode Queue and Sync
Given caregiver C is offline when notification_id N is generated When the device receives N while offline Then N is stored locally with full payload and marked Pending Delivery And the app displays a non-blocking "Updates pending" badge within 5 seconds of app open And if C taps Acknowledge or Decline while offline, the action is stored locally with a signed timestamp and reason (if Decline) And upon next connectivity, the device syncs N and any stored actions within 10 seconds And the server processes actions idempotently by notification_id and device_action_id, preventing duplicate state changes And no duplicate in-app banners or push/SMS/email are presented after successful sync And audit logs include offline receipt, local action, and server confirmation timestamps
Cross-Channel De-duplication and Single-Action Enforcement
Given notification_id N has been issued across multiple channels When caregiver C acts on N via any channel (deep link, in-app banner) Then all other channel instances of N become non-actionable and display "Already handled" if opened And in-app banners for N are auto-dismissed on next app foreground within 2 seconds And reminder jobs exclude N once an action has been recorded And any late-arriving duplicate channel messages for N do not create additional reminders or actions And analytics count exactly one action per notification_id
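Single-action enforcement across channels amounts to an idempotent registry keyed by notification_id, sketched below; the class and status strings are illustrative.

```python
class AckRegistry:
    """Records at most one action per notification_id, regardless of which
    channel (push, SMS deep link, email) the caregiver used first."""

    def __init__(self):
        self.actions = {}

    def act(self, notification_id, channel, action):
        if notification_id in self.actions:
            return {"accepted": False, "status": "Already handled"}
        self.actions[notification_id] = {"channel": channel, "action": action}
        return {"accepted": True, "status": "Recorded"}
```

Late-arriving duplicate channel messages hit the same key and are rejected, so analytics count exactly one action per notification_id.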
External Calendar Synchronization for Linked Caregivers
Given caregiver C has a linked external calendar with write access and timezone TZ set And notification_id N changes C’s schedule (create, update, cancel) When N is acknowledged or reaches the "auto-apply" state per org policy Then corresponding calendar events are created/updated/cancelled within 2 minutes in TZ And events include title, start/end, location, patient-safe identifier, and deep link back to CarePulse; no visit note text is included And updates are matched by stable event UID derived from assignment_id to prevent duplicates And canceled visits send iCalendar CANCEL updates so removed events disappear from the calendar And failure to update the calendar due to auth/scope errors generates an in-app alert to C within 30 seconds and logs the error for admin review
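A stable event UID derived from assignment_id can be sketched as a hash-based identifier; the hash construction and the domain suffix are assumptions, shown only to illustrate why re-syncs match the existing event instead of duplicating it.

```python
import hashlib

def calendar_event_uid(assignment_id: str) -> str:
    """Deterministic UID for a calendar event, derived from assignment_id so
    repeated syncs of the same assignment always target the same event."""
    digest = hashlib.sha256(assignment_id.encode()).hexdigest()[:32]
    return f"{digest}@carepulse.example"   # placeholder domain
```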
Decline Reasons Capture and Unfillable Assignment Notification
Given caregiver C selects Decline for notification_id N When prompted for a reason Then C must select a reason from the configured list or enter free text of at least 10 characters And the system records reason_code, free_text (if provided), timestamp, and notification_id And if the decline leads to an unfillable assignment per org rules (skills, distance, availability), the scheduler S is notified within 2 minutes with patient, time window, reason, and suggested next-best candidates And the assignment status updates to "Unfilled" and appears in the scheduler’s queue with filters and sorting by urgency And reporting includes decline rate, top reasons, and time-to-reassignment KPIs for the date range including N
Audit Trail & Compliance Reporting
"As a compliance officer, I want a complete, exportable trace of schedule changes and their rationales so that I can satisfy payer and regulatory audits."
Description

Capture immutable, time-stamped records of plan updates, rule evaluations, optimizer outputs, user approvals/overrides, and caregiver notifications/acknowledgments. Provide one-click, audit-ready reports linking the final schedule back to plan-of-care intent, payer rules applied, and justifications for exceptions. Support PDF/CSV export, patient-level and agency-level filters, and retention policies aligned with regulatory requirements.

Acceptance Criteria
Immutable Audit Trail for Plan Updates
Given an RN modifies a Plan of Care and saves changes When the update is committed Then an immutable audit record is created with fields: patient_id, plan_id, plan_version, change_type, author_user_id, author_role, timestamp_utc (ISO 8601), device_id, ip_address, reason_code, change_summary, before_snapshot_id, after_snapshot_id, correlation_id, record_hash, previous_hash And the audit record is persisted within 500 ms of commit And the record is visible in the Audit Trail view within 5 seconds And attempts to edit or delete the audit record via API or UI result in HTTP 403 and are themselves logged as security events with timestamp and actor And recomputing record_hash server-side over the stored payload reproduces the stored value (tamper-evident)
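The record_hash/previous_hash tamper-evidence scheme above can be sketched as a hash chain: each record's hash covers its payload plus the prior record's hash, so editing any record breaks verification downstream. The serialization choice (sorted JSON) is an assumption.

```python
import hashlib
import json

def chain_record(payload: dict, previous_hash: str) -> dict:
    """Append-only audit record whose hash binds the payload to the chain."""
    body = {"payload": payload, "previous_hash": previous_hash}
    record_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "record_hash": record_hash}

def verify_chain(records, genesis_hash="0" * 64):
    """Recompute every hash; any edit or reorder makes verification fail."""
    prev = genesis_hash
    for r in records:
        expected = hashlib.sha256(json.dumps(
            {"payload": r["payload"], "previous_hash": r["previous_hash"]},
            sort_keys=True).encode()).hexdigest()
        if r["previous_hash"] != prev or r["record_hash"] != expected:
            return False
        prev = r["record_hash"]
    return True
```

Recomputing the hash server-side over the stored payload is exactly the tamper-evidence check named in the criterion.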
Trace Rule Evaluations and Optimizer Outputs
Given PlanSync recalculation runs for a patient When payer rules are evaluated Then each rule evaluation is logged with rule_id, rule_version, input_params, data_sources, outcome (pass|fail), evaluation_timestamp_utc, correlation_id And when the optimizer produces suggested changes Then the optimizer run is logged with objective, constraints_count, input_visit_count, suggested_add_count, suggested_move_count, suggested_drop_count, seed_id, run_timestamp_utc, correlation_id, quality_score And all rule and optimizer logs share the same correlation_id as the originating plan update And all required fields are non-null for 100% of records in a test run of at least 20 recalculations And records are persisted within 2 seconds of optimizer completion
Capture User Approvals and Overrides
Given a user reviews a suggested realignment When they approve all suggested changes Then a single approval record is stored with approver_user_id, approver_role, timestamp_utc, change_set_id, approval_scope="all", correlation_id, record_hash Given a user overrides any suggested change When they submit the override Then justification_text (min 20 chars) and reason_code are required and stored, along with before_after_diff JSON at field level And override submissions without required justification are rejected with a validation error and are not recorded as approvals And all approvals and overrides link to the optimizer run via correlation_id and are visible in the Audit Trail within 5 seconds
Caregiver Notification and Acknowledgment Logging
Given changes are published from PlanSync When notifications are sent to assigned caregivers Then one notification record per caregiver is stored with caregiver_id, channels[], sent_timestamp_utc, delivery_status, provider_message_id, correlation_id And when a caregiver views the update in the mobile app Then an acknowledgment record is stored with acknowledged=true, ack_timestamp_utc, device_id, app_version, correlation_id And failed deliveries and retries are logged with attempt_count and retry_timestamps[] And if acknowledgment is not received within the agency-defined SLA hours (default 24), a missed_ack event is recorded with timestamp_utc and escalation_target
One-Click Audit-Ready Report Generation
Given a user with Reports.View permission is on a patient or agency context When they click "Audit Report" Then a PDF and a CSV are generated within 10 seconds containing: final schedule, plan-of-care intent (version, timestamp, author), payer rules applied (ids, versions, outcomes), user approvals/overrides with justifications, caregiver notifications/acks with timestamps, and correlation_ids linking all records And the PDF includes continuous page numbers, generated_timestamp_utc, and a SHA-256 checksum printed in the footer And the CSV columns match the published data dictionary v1.0 exactly And report contents match underlying audit records 1:1 by id in a verification test And users without Reports.View receive HTTP 403 and no file is generated
Filters and Retention Policy Compliance
Given an agency admin configures an audit retention policy When retention is set to a value between 1 and 10 years per payer or agency default Then the system enforces immutability until expiry and schedules purge jobs after expiry And a nightly job deletes expired audit records and emits a purge receipt containing total_deleted, entity_breakdown, time_window, and a root_hash of deleted record ids And audit report filters support patient_id, caregiver_id, payer_id, date_range, rule_id, and outcome, returning results in under 3 seconds for the 95th percentile of queries on datasets up to 1,000,000 records And exports respect all active filters and include retention_policy_name and retention_expiry_date in headers And users cannot view or export records beyond the configured retention window
What-If Simulation & Impact Preview
"As an RN, I want to preview the impact of plan changes before applying them so that I can choose the safest, most compliant option."
Description

Allow users to stage plan-of-care edits in a sandbox and preview projected compliance, staffing feasibility, cost/authorization consumption, and travel impact before committing. Compare scenarios side-by-side, highlight risk areas (e.g., unfillable visits), and save drafts for later approval. Simulations must not notify caregivers until an approval is executed.

Acceptance Criteria
Sandbox Creation and Isolation from Live Schedule
Given an RN user with edit permissions opens a patient’s plan of care When the user creates a new simulation and edits visit frequency, timing, or caregiver assignments Then the live schedule and plan of care remain unchanged And the simulation is saved as a draft with a unique ID, author, timestamp, and version And no caregiver, patient, or external system receives notifications or updates And the draft is visible only to users with PlanSync Review permission on that patient
Compliance Projection for Simulated Plan of Care
Given a simulation draft with modified visits When the user requests a compliance preview Then the system returns a compliance status per payer rule set within 2 seconds for scenarios with <= 100 visits And each unmet rule is listed with rule name, affected visits, and remediation suggestion And calculated compliance metrics match the production compliance engine for identical inputs (tolerance 0 discrepancies)
Staffing Feasibility and Unfillable Visit Detection
Given a simulation draft When staffing feasibility is evaluated Then each simulated visit is marked fillable or unfillable based on skills, credentials, availability, travel radius, shift overlaps, and max-hours rules And unfillable visits are flagged with at least one concrete reason (e.g., credential mismatch, no availability window) And a scenario-level fill rate percentage is displayed And conflicts with existing caregiver assignments are identified with caregiver name and conflicting time window
Authorization Utilization and Cost Impact Preview
Given a simulation draft with payer authorizations loaded When cost and authorization impact is calculated Then projected authorization units consumed and remaining are shown per payer/episode And total labor cost, mileage cost, and per-visit cost deltas versus live are displayed And any authorization overage or approaching threshold (>= 90% used) is highlighted as a warning And rounding and unit conversions follow payer configuration rules
Travel Time and Distance Impact Calculation
Given a simulation draft that changes visit sequencing or caregiver assignments When travel impact is calculated using the configured routing provider Then per-caregiver daily added/removed travel time (minutes) and distance (miles/km) are shown And routing honors configured travel mode and caregiver start/end locations And calculations for scenarios with <= 100 visits complete within 3 seconds And the travel deltas versus live are displayed at visit, caregiver, and scenario levels
Side-by-Side Scenario Comparison and Risk Highlighting
Given at least two saved simulation drafts or one draft and the live plan When the user opens the comparison view Then up to three scenarios can be compared side-by-side And differences in visits added, moved, and dropped are itemized And compliance score, fill rate, cost, and travel metrics are shown with deltas versus live And risk items (unfillable visits, authorization overages, missed compliance windows) are summarized with counts and deep links to details
Approval, Apply Changes to Live, and Notification Guardrails
Given a selected simulation draft When the user approves and applies the scenario Then the live schedule and plan of care are updated atomically to match the simulation And caregivers and subscribed stakeholders receive notifications according to notification settings And an audit log records approver, timestamp, change summary, and previous plan reference And prior to approval, no schedule changes or notifications are sent
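The atomic-apply guardrail above (no side effects before approval, all-or-nothing update, audit record with a previous-plan reference) can be sketched as follows. This is an in-memory illustration only; a production system would perform the swap inside a database transaction, and every name here is hypothetical.

```python
import copy
from datetime import datetime, timezone

def apply_scenario(live_plan, scenario, approver, audit_log, notify=None):
    """Atomically replace the live plan with an approved scenario.

    Nothing changes and no notifications go out until this is called;
    on any failure the prior plan is restored. (Sketch only — a real
    system would use a database transaction, not deepcopy/rollback.)
    """
    prior = copy.deepcopy(live_plan)          # previous-plan reference
    try:
        live_plan.clear()
        live_plan.update(scenario)
    except Exception:
        live_plan.clear()
        live_plan.update(prior)               # roll back: all-or-nothing
        raise
    audit_log.append({
        "approver": approver,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "change_summary": {
            "added": sorted(set(scenario) - set(prior)),
            "dropped": sorted(set(prior) - set(scenario)),
        },
        "previous_plan": prior,
    })
    if notify:                                # fan out per notification settings
        notify(audit_log[-1]["change_summary"])
```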

Product Ideas

Innovative concepts that could enhance this product's value proposition.

Route Drift Guard

Track live GPS against planned routes; flag drift and auto-suggest reroutes or swaps to keep visits on time. Cuts late arrivals with five‑minute early warnings.

ShiftLock Access

Time-bound, role-scoped permissions that auto-expire at shift end, with one-tap just-in-time elevation and full audit trail. Protects PHI during after-hours triage.

Coach Nudge Cards

Bite-sized, in-flow tips triggered by common documentation misses; 30‑second micro-lessons with one-tap retry. Lifts note completeness without training sessions.

Payor-Perfect Exports

One-click exports shaped to each payer portal’s quirks—codes, units, EVV stamps—using saved presets. Slashes denial risk for mixed-payer agencies.

Family Calm Feed

Secure, read-only updates translating clinical notes into plain language with optional daily SMS summary links. Reduces phone tag and boosts family trust.

Sensor Heartbeat Watch

Monitors IoT pairings and data freshness; sends self-heal prompts and auto-rebinds devices when signal drops. Prevents missing vitals in visit notes.

CarePlan Guardrails

Enforces plan-of-care visit frequencies in scheduling; warns on under/over visits and suggests compliant slots. Prevents costly authorization breaks.

Press Coverage

Imagined press coverage for this groundbreaking product concept.

CarePulse Debuts PayerFit Scheduling Suite to Stop Authorization Breaks Before They Happen

Imagined Press Article

Austin, TX — September 5, 2025 — CarePulse, the mobile‑first operations and documentation platform built for small and midsize home‑health agencies, today announced the PayerFit Scheduling Suite, a real‑time rules and routing layer that prevents authorization breaks at the moment of scheduling. By encoding payer‑specific visit frequencies, spacing requirements, grace windows, and carryover rules into an intuitive workflow, the suite gives schedulers and supervising nurses instant clarity about what can be booked, when, and by whom—before a single visit goes out of compliance.

Home‑health scheduling has grown more complex as payer plans diverge and authorizations tighten. Under‑frequency, back‑to‑back spacing, and out‑of‑window visits are among the most common causes of denials and rework. PayerFit replaces guesswork with live validation and simple explanations, while coordinating with live routes so visits land where they’re both compliant and practical.

The PayerFit Scheduling Suite includes:

• PayerFit Engine: A continuously updated rules brain that validates visit counts, frequencies, and spacing by payer and discipline in real time, explaining any blocks in plain language.
• Quota Meter: Always‑on counters that show used versus remaining visits per authorization window, with color cues and “days left” tickers that update as users drag and drop.
• Slot Assist: One‑tap recommendations for the next compliant timeslot, balancing plan‑of‑care rules, caregiver credentials, client preferences, and live route realities.
• Grace Guard: Built‑in enforcement for payer‑specific grace windows and exceptions with reasons that export cleanly for audits.
• Visit Bank: A ledger that tracks carryover eligibility and suggests compliant make‑up patterns when allowed.
• PlanSync Review: A smart realignment step that proposes schedule updates when an RN modifies a plan of care, keeping clinical intent and payer alignment in lockstep.

“Agencies shouldn’t need a PhD in every payer’s rules to build a safe, compliant day,” said Maya Chen, CEO of CarePulse. “PayerFit gives Route Orchestrators and RN Case Planners a single source of truth that reacts as fast as they do—so the first schedule is the right schedule, denials are prevented, and caregivers arrive on time.”

Early adopters report material gains. Across a three‑month pilot involving mixed‑payer agencies, partners saw a 27% reduction in out‑of‑window visits, a 22% cut in last‑minute reschedules due to rule conflicts, and a measurable lift in clean‑claim rates tied to better spacing and authorization adherence. Coordinators describe the Slot Assist recommendations as “confidence builders” that shorten booking times while respecting caregiver availability and route flow.

“As a supervising RN, I used to spend hours checking whether a make‑up visit would break spacing or cap rules,” said Linda Romero, RN Case Planner at a midwestern home‑health agency. “Now, the Quota Meter and Grace Guard tell me instantly what’s safe. I can focus on patient needs and let the system handle the math.”

For dispatch and operations teams juggling live changes, PayerFit works hand‑in‑hand with CarePulse’s route intelligence. When traffic, weather, or caregiver availability shift, Slot Assist surfaces the top three compliant options with a short rationale (meets 2x/week spacing, within auth window, minimal detour). The result is faster decisions and fewer phone calls, without slipping outside payer policy.

The suite also tightens compliance handoffs. PlanSync Review converts an RN’s plan‑of‑care update into actionable scheduling moves—what to add, drop, or shift—so teams can approve in one sweep. Every change is logged with reasons and timestamps, feeding clean, audit‑ready exports that help Compliance and Billing avoid end‑of‑month scrambles.

“Compliance shouldn’t be an after‑the‑fact clean‑up,” said Brianna Lewis, a Compliance Sentinel at a Texas agency. “With PayerFit, risk is flagged in the moment work gets scheduled. It’s proactive instead of punitive, and it shows our teams the why behind each rule.”

Availability and onboarding: The PayerFit Scheduling Suite is available today to all CarePulse customers in North America, with expansion to select international payers underway through RulePulse Updates. Agencies can activate the suite without IT tickets; Role Blueprints help leaders assign capabilities by shift and job function. New customers can request a 30‑day guided trial that includes baseline analysis of authorization usage and drift patterns, and live training for dispatchers and RN reviewers.

Pricing and packaging: PayerFit Engine, Quota Meter, and Slot Assist are included in the CarePulse Scheduling tier. Grace Guard, Visit Bank, and PlanSync Review are available in the Compliance tier or as an add‑on bundle for qualified customers. Volume pricing is offered for multi‑site agencies.

CarePulse is the lightweight, mobile‑first SaaS that centralizes scheduling, documentation, and compliance for home‑health operations managers and caregivers. The platform syncs live routes, auto‑populates notes from short voice clips and optional IoT sensors, and produces one‑click, audit‑ready reports—cutting documentation time in half while improving on‑time, compliant visits.

Media contact
Elena Park, Head of Communications
CarePulse
press@carepulse.io
+1 415 555 0135
www.carepulse.io

CarePulse Launches Family Calm Pack: Plain-Language Updates and Secure Sharing Without an App

Imagined Press Article

Austin, TX — September 5, 2025 — CarePulse, the mobile‑first platform for home‑health scheduling, documentation, and compliance, today introduced the Family Calm Pack, a privacy‑first communications layer that turns visit notes and vitals into clear, compassionate updates families can actually use—without asking anyone to download an app. By combining auto‑summarization with consent controls and one‑time secure links, the Family Calm Pack reduces phone tag, builds trust, and keeps agencies firmly in compliance.

Family communication is always urgent and often messy. Clinicians document in professional terms; families need plain language, the right context, and options that fit their day. Meanwhile, agencies must maintain strict privacy controls and audit trails. The Family Calm Pack meets all three needs at once.

The Family Calm Pack includes:

• PlainSpeak Digest: Converts clinical notes into short updates in lay terms with an easy status badge—Stable, Improving, Needs Attention—highlighting what was done, what changed, and what to watch.
• Consent Circles: Simple, agency‑defined share groups (Immediate Family, Care Proxy, Financial Contact) with field‑level visibility and time‑boxed access, plus an approval trail for compliance.
• SafeLink OTP: Read‑only daily summary links delivered by SMS or email that are one‑tap, one‑time, and time‑bound, protected by PIN/OTP and view receipts.
• Language Lens: Automatic translation into each recipient’s preferred language and reading level, with nurse‑approved phrasing and a mini glossary to de‑jargonize terms.
• Next‑Step Timeline: A forward‑looking timeline that shows upcoming visits, care goals, and “how you can help” tips, setting expectations and reducing day‑of surprises.
• Calm Controls: Per‑contact notification settings that respect quiet hours and let families choose cadence and topics, preventing alert fatigue while keeping critical items urgent.

“Families deserve clarity without compromising privacy,” said Maya Chen, CEO of CarePulse. “The Family Calm Pack moves agencies beyond ad hoc phone calls and screenshots. It delivers secure, human updates that keep loved ones informed and clinicians focused on care.”

Early users report fewer inbound calls and more prepared household caregivers. One agency saw a 35% drop in non‑urgent after‑hours calls within two weeks, and another reported higher satisfaction scores tied to the Next‑Step Timeline setting expectations for therapy sessions and medication pickups. Because SafeLink OTP requires no app install, adoption was immediate, even among older relatives.

For client relations teams, PlainSpeak Digest cuts the translation burden by summarizing in seconds and filtering out non‑shareable data. “I used to spend evenings rewriting notes into something families could absorb,” said Felix Ortega, Client Relations Coordinator at a southeastern agency. “Now the Digest gives me a safe starting point. I can add a line or two of context and send a secure link in under a minute.”

Compliance is embedded from invite to view. Consent Circles ensure only the right people see the right details, with verified invites, expirations, and revocations that take seconds—not tickets. Every SafeLink view is logged, and any redacted field remains masked no matter how a link is retrieved. For agencies with stricter policies, links can be limited to in‑country numbers, weekdays, or clinic hours, and Role Blueprints let leaders decide who can send which updates under which conditions.

The Family Calm Pack is designed to work in the flow of care. Caregivers can trigger PlainSpeak Digests automatically at visit completion; RN Case Planners can attach a brief guidance note for trends; and Family Touchpoint teams can schedule weekly roundups for those who prefer a digest cadence. Language Lens supports dozens of languages, with tone‑aware phrasing that avoids alarming words when describing routine changes.

The Pack ties into the broader CarePulse platform so agencies don’t have to copy and paste between systems. Updates pull from the same voice‑driven documentation and IoT readings that feed audit‑ready reports. SafeLink OTP piggybacks on the platform’s security model, with Step‑Up MFA for staff when risk rises and an Access Ledger that shows who shared what, when, and with whom—crucial for payer reviews and family disputes alike.

“Clear communication is half of clinical quality,” said Talia Nguyen, a training lead at a New England agency. “Calm Controls let us match each family’s preference so we’re helpful, not noisy. And when something truly needs attention, it cuts through.”

Availability and onboarding: The Family Calm Pack is available today for all CarePulse customers in North America and select international markets. Agencies can start with a guided configuration of Consent Circles and a short policy alignment workshop to ensure share settings mirror agency rules. Templates for common updates—therapy progress, medication changes, vitals trends—are included.

Pricing and packaging: PlainSpeak Digest and Calm Controls are included in the Core platform. Consent Circles, SafeLink OTP, Language Lens, and Next‑Step Timeline are available in the Family Engagement add‑on. Volume pricing is available for agencies serving multi‑state populations.

CarePulse is a lightweight, mobile‑first SaaS that centralizes scheduling, documentation, and compliance for home‑health operations managers and caregivers. It syncs live routes, auto‑populates visit notes from short voice clips and optional IoT sensors, and delivers one‑click, audit‑ready reports to halve documentation time and ensure on‑time, compliant visits.

Media contact
Elena Park, Head of Communications
CarePulse
press@carepulse.io
+1 415 555 0135
www.carepulse.io

CarePulse Unveils IoT Reliability Guardrails to Keep Vitals Streaming and Notes Audit-Ready

Imagined Press Article

Austin, TX — September 5, 2025 — CarePulse, the mobile‑first platform trusted by home‑health agencies to run schedules, documentation, and compliance, today announced IoT Reliability Guardrails, a coordinated set of device‑aware features that keep sensors connected, vitals flowing, and visit notes complete—even when connectivity or batteries falter. The release helps agencies standardize IoT at the point of care without adding IT tickets, while preserving a clean audit trail across every reading and recovery.

IoT can elevate outcomes, but only when the data shows up at the chart reliably. In the real world, caregivers face dead zones, low batteries, and pairings that slip. The result is rework, missed documentation, and denials driven by gaps. IoT Reliability Guardrails turn these risks into routine, self‑healing events.

The release includes:

• Auto Rebind: Invisible background reconnection that rotates through known channels and cached keys to recover sensor streams. If a device truly drops, a single‑tap rebind with prefilled client pairing gets readings flowing again in seconds.
• Battery Scout: Predictive battery alerts that flag high‑risk devices days in advance, add reminders to caregiver prep, and suggest in‑route swaps.
• Tap Test: A 15‑second pre‑visit diagnostic that verifies pairing, signal strength, time sync, and a sample value, clearing green when ready or guiding a self‑heal when not.
• Signal Timeline: A per‑client visualization of heartbeats, gaps, RSSI strength, and firmware version, with placement tips for hubs or extenders so teams fix root causes—not just symptoms.
• Hot Swap: Guided replacement that transfers pairing, calibration, and documentation to a backup sensor in under a minute, auto‑updating visit notes and chain‑of‑custody logs.
• PairLock: Client‑locked pairing enforced with QR codes and geofenced checks to prevent cross‑patient binds, with quarantines for unknown devices and clear unpair workflows.
• Offline Buffer: Local caching for readings during connectivity hiccups, with exact timestamps and provenance backfilled to the chart once online.
• Sensor Aware: Real‑time nudges when expected readings are missing or stale, prompting a re‑pair, a reading capture, or a logged reason that flows into the audit trail.

“Agencies shouldn’t need a network engineer on every shift to roll out sensors,” said Darius Patel, VP of Product at CarePulse. “With Reliability Guardrails, a caregiver can tap test, bind, and swap a device in less than a minute. Data keeps flowing and notes stay defensible.”

Operational benefits show up fast for both field teams and coordinators. Tap Test reduces room‑entry troubleshooting and gives caregivers confidence that the first vitals reading will land. Signal Timeline arms IoT Integrators with the visibility to spot chronic dead zones and plan fixes that stick. Meanwhile, Auto Rebind and Offline Buffer protect EVV alignment and clinical completeness in basements, elevators, and rural routes.

“The difference is night and day,” said Theo Wilson, a therapy lead at a multi‑disciplinary home‑health organization. “Before, we’d lose a therapy session trying to sort out a sensor. Now we get a green check up front, and if anything wobbles, the app heals it while we keep working with the client.”

Compliance is first‑class throughout. PairLock prevents cross‑patient binds and captures reason codes for any unpair. Sensor Aware logs when a caregiver chose a safe default or recorded a reason for a missing reading, and Access Ledger tracks who viewed or adjusted device settings. For audits, a single export assembles the readings, the recovery steps, timestamps, and the final note—turning troubleshooting into evidence.

Reliability Guardrails dovetail with CarePulse’s broader scheduling and documentation workflows. Battery Scout feeds Route Orchestrators advance warnings so spare batteries ride along on the right routes. Hot Swap notifies RN Case Planners and family contacts (when consented) that a device changed, keeping everyone aligned on care.

Availability and onboarding: IoT Reliability Guardrails are available today for CarePulse customers using supported vitals sensors and activity devices. Agencies can enable Tap Test and PairLock broadly and roll out Auto Rebind and Hot Swap by device family. Implementation takes hours, not weeks, and comes with a quick‑start playbook for IoT Integrators and field staff.

Pricing and packaging: Auto Rebind, Tap Test, Offline Buffer, and Sensor Aware are included in the Core platform. Battery Scout, Signal Timeline, Hot Swap, and PairLock are available in the IoT Reliability add‑on bundle. Volume‑based pricing is available for agencies with large sensor fleets.

CarePulse is the lightweight, mobile‑first SaaS that centralizes scheduling, documentation, and compliance for home‑health operations managers and caregivers. The platform syncs live routes, auto‑populates notes from short voice clips and optional IoT sensors, and produces one‑click, audit‑ready reports—helping agencies halve documentation time and ensure on‑time, compliant visits.

Media contact
Elena Park, Head of Communications
CarePulse
press@carepulse.io
+1 415 555 0135
www.carepulse.io

CarePulse Expands Just-in-Time Access and Audit Controls for Safe After-Hours Triage

Imagined Press Article

Austin, TX — September 5, 2025 — CarePulse, the mobile‑first platform that centralizes scheduling, documentation, and compliance for home‑health agencies, today announced expanded just‑in‑time access and audit controls designed to keep after‑hours triage fast, safe, and compliant. The release brings together granular permission elevation, shift‑bound tokens, emergency overrides, adaptive MFA, and a real‑time access ledger so teams can resolve urgent issues without exposing protected health information.

Night and weekend operations are where policy meets pressure. On‑call supervisors handle exceptions, caregiver substitutions, and safety checks—often from low‑end phones and spotty networks. Too much access puts PHI at risk; too little access slows response. CarePulse’s new controls calibrate access to the task at hand and document every step automatically.

The release includes:

• JIT Elevate: One‑tap, context‑aware permission elevation that grants the least access necessary for the task—limited to a specific client, chart section, and time window—with auto‑revocation on completion or timer.
• Auto‑Expire Guard: Shift‑bound access tokens with idle timeouts and local device timers that revoke access even if a phone goes offline, protecting against forgotten logins and after‑hours drift.
• BreakGlass Override: A true‑emergency override with mandatory reason codes, short default durations, and instant supervisor notifications. Access is narrowly scoped, watermark‑tagged, and heavily audited.
• Redacted Reveal: Sensitive fields are masked by default and revealed only with a press‑to‑peek that logs who saw what and why—ideal for triage calls and crowded environments.
• Role Blueprints: Scenario‑based permission sets (After‑Hours Triage, RN Review, Intake Start‑of‑Care) that align with payer and policy rules, assignable per shift so users see only what’s relevant.
• Step‑Up MFA: Adaptive prompts that trigger only when risk rises—new device, unusual time, or elevated scope—supporting biometric, push, and hardware key options for quick verification.
• Access Ledger: A human‑readable timeline of who accessed what, when, from where, and under which scope or override, with filters, anomaly highlights, and one‑click, audit‑ready exports.

“Fast matters in triage, but so does restraint,” said Maya Chen, CEO of CarePulse. “We designed these controls to let teams act without over‑opening the vault. Every peek, override, and elevation is intentional, time‑boxed, and visible.”

Field leaders say the impact is immediate. “I used to keep a ‘kitchen sink’ account for weekends because I couldn’t risk being blocked,” said Andre Morales, an after‑hours supervisor at a multi‑site agency. “Now JIT Elevate gives me what I need for that client and that chart section—nothing more. When the timer ends, it closes itself.”

Compliance and QA teams gain continuous visibility instead of retroactive forensics. With Access Ledger, reviewers see the exact sequence of events—who requested elevation, who approved, what was viewed, what changed, and when the window closed—tightened by Step‑Up MFA where risk spiked. Exports align to payer and state expectations, reducing prep for audits and eliminating the need to stitch together logs from multiple systems.

The controls pair tightly with everyday workflows. Credential Swap and ETA tools can trigger JIT Elevate when a late‑breaking substitution requires access to a client’s instructions or door codes. Redacted Reveal keeps sensitive fields masked during phone consults, revealing only what’s necessary with a fingerprint or code—perfect for noisy ER settings. If a true emergency arises, BreakGlass Override grants narrowly scoped access with reasons and alerts so supervisors can intervene in real time.

Security is balanced with practicality in low‑connectivity situations. Auto‑Expire Guard’s local timers revoke access even if a device drops offline, while in‑progress notes are saved safely and resumable by authorized staff. Step‑Up MFA offers fallbacks that work on low‑end phones without creating lockouts.

Availability and onboarding: The expanded access and audit controls are available today for all CarePulse customers. Agencies can start by adopting Role Blueprints for after‑hours teams, then enable JIT Elevate with lightweight approvals for higher‑risk scopes. A guided policy workshop helps align timers, reasons, and reveal rules to each agency’s compliance posture.

Pricing and packaging: JIT Elevate, Auto‑Expire Guard, Redacted Reveal, and Step‑Up MFA are included in the Core platform. BreakGlass Override and advanced Access Ledger exports are available in the Compliance tier. Volume discounts apply for multi‑site deployments.

“Security that slows care isn’t security—it’s friction,” said Darius Patel, VP of Product at CarePulse. “These controls respect the reality of home‑health work while raising the bar on privacy and accountability.”

CarePulse is a lightweight, mobile‑first SaaS that centralizes scheduling, documentation, and compliance for home‑health operations managers and caregivers. The platform syncs live routes, auto‑populates notes from short voice clips and optional IoT sensors, and delivers one‑click, audit‑ready reports—helping agencies halve documentation time and keep visits on‑time and compliant.

Media contact
Elena Park, Head of Communications
CarePulse
press@carepulse.io
+1 415 555 0135
www.carepulse.io


Transform ideas into products

Full.CX effortlessly brings product visions to life.

This product was entirely generated using our AI and advanced algorithms. When you upgrade, you'll gain access to detailed product requirements, user personas, and feature specifications just like what you see below.