Real estate showing management

TourEcho

Instant insight for faster closings

TourEcho is a lightweight showing-management platform for residential listing agents and broker-owners. It schedules showings and captures at-the-door feedback via QR-coded door hangers, then uses AI to instantly summarize sentiment and room-level objections. Agents replace scattered texts with clear, actionable readouts, cutting follow-up time by 70% and shaving a median six days off market.

Product Details

Vision & Mission

Vision
Transform every listing agent and broker-owner into a confident closer by turning showings into decisive, sales-accelerating insight.
Long Term Goal
Within 4 years, enable 50,000 agents to accelerate 500,000 listings annually, cutting follow-up time by 70%, speeding sales cycles by 10%, and shaving six days off market across North America and Europe.
Impact
Enables residential listing agents and broker-owners to act on post-tour feedback faster, cutting follow-up time by 70%, increasing seller insight confidence by 40%, and reducing days-on-market by a median 6 days; landlords see 30% fewer repeat showings through clearer objection tracking.

Problem & Solution

Problem Statement
Residential listing agents and small broker-owners waste hours chasing scattered showing feedback across texts, emails, and portals, delaying pricing and staging decisions. Existing showing tools are clunky, costly, or seller-focused CRMs, lacking timely, structured insight after each tour.
Solution Overview
TourEcho centralizes showings and captures post-tour feedback at the door, replacing scattered texts with instant, structured insight. Visitors scan QR-coded door hangers; agents receive AI-summarized sentiment and top objections by room—cutting follow-up time and accelerating pricing and staging decisions.

Details & Audience

Description
TourEcho is a lightweight showing-management platform that schedules showings, automates feedback, and surfaces clear insights. Built for residential listing agents and small broker-owners who need faster, cleaner readouts after every showing. It replaces scattered texts and emails with instant, AI-summarized sentiment and objections, cutting follow-up time by 70% and shaving a median 6 days off market. Distinctive feature: QR-coded door hangers visitors scan to submit mobile feedback tied to each listing.
Target Audience
Residential real estate listing agents and broker-owners (28-55) who need faster feedback and obsessively track showing performance.
Inspiration
At a crowded Sunday open house, the kitchen bottlenecked; three buyers rubbed hips past the island and whispered the same complaint. The agent, juggling sign-ins and lockbox calls, later got one vague text and stared at a half-empty spreadsheet. That gap sparked TourEcho: a QR-coded door hanger to capture mobile feedback at the door, instantly summarizing sentiment and room-level objections—no chasing, faster decisions.

User Personas

Detailed profiles of the target users who would benefit most from this product.

Expansion Ops Ethan

- 38-year-old regional operations director across 4 franchise offices.
- MBA; ex-CRM rollout lead managing 12 coordinators, 150+ agents.
- Based in Phoenix; travels biweekly to satellite markets.
- Comp mix: salary plus adoption and efficiency bonuses.

Background

Cut his teeth standardizing CRM and e-sign across three brokerages. Burned by failed rollouts lacking training and guardrails; now mandates templates, SSO, and measurable adoption before greenlighting tools.

Needs & Pain Points

Needs

1) Company-wide onboarding in under 30 days. 2) Enforced templates, permissions, and SSO. 3) Roll-up analytics with adoption KPIs.

Pain Points

1) Shadow tools creating fragmented, noncompliant data. 2) Training fatigue during tool sprawl. 3) Inconsistent seller communications across teams.

Psychographics

- Worships standardization, abhors rogue workflows.
- Craves adoption dashboards and audit trails.
- Champions change management with relentless follow-through.
- Rejects vendors without enterprise security proof.

Channels

1) LinkedIn ops groups 2) Inman News newsletters 3) YouTube how-tos 4) Slack brokerage workspace 5) Zoom vendor demos

Staging Savvy Sofia

- 34-year-old owner-operator, boutique staging studio.
- Serves urban listings in Austin and nearby suburbs.
- 25–40 projects annually; partners with five listing teams.
- Income mix: service fees plus photo package upsells.

Background

Former interior designer frustrated by subjective tastes stalling deals. Adopted data-informed staging after losing two listings to "feels small" feedback; now insists on structured, room-tagged notes.

Needs & Pain Points

Needs

1) Room objections summarized within hours. 2) Photo hotspots flagged for re-shoots. 3) Shareable action lists across vendors.

Pain Points

1) Vague feedback and conflicting opinions. 2) Delayed notes after weekend showings. 3) Coordination chaos among photographers and movers.

Psychographics

- Believes data-backed design beats gut feelings.
- Thrives on quick, visual wins.
- Protects brand with flawless execution.
- Communicates best via images and checklists.

Channels

1) Instagram reels 2) Pinterest boards 3) WhatsApp vendor chat 4) Canva templates 5) Google Drive folders

Copycrafting Cara

- 31-year-old marketing manager, 80-agent brokerage.
- Based in Denver; hybrid schedule, campaign owner.
- BA Communications; Meta and Google Ads certified.
- Manages 30–60 active listing campaigns concurrently.

Background

Cut her teeth in paid social for DTC brands. Joined real estate to blend analytics with narrative; tired of waiting on anecdotal updates, she chases fresh, structured sentiment.

Needs & Pain Points

Needs

1) Top objections and quotes for ad pivots. 2) Bulk export snippets to CMS and ads. 3) Compare sentiment before/after creative changes.

Pain Points

1) Guessing angles without fresh intel. 2) Waiting days for agent updates. 3) Disconnected tools break workflow handoffs.

Psychographics

- Data-first storyteller chasing measurable engagement.
- Rejects generic realtor-speak and platitudes.
- Lives by calendars, SLAs, and KPIs.
- Enjoys testing bold creative pivots.

Channels

1) LinkedIn marketing groups 2) YouTube ad tutorials 3) Facebook Groups real estate marketing 4) HubSpot blog guides 5) Slack marketing team

Builder Booth Ben

- 42-year-old community sales counselor, two subdivisions.
- Works Thursday–Monday; peak traffic weekends.
- Salary plus commission; 25–50 leads weekly.
- Reports to division sales manager; uses Salesforce.

Background

Started in retail electronics, mastered crowd flow and demos. Transitioned to builder sales; ditched clipboards after losing hot leads to lobby bottlenecks.

Needs & Pain Points

Needs

1) Slot scheduling that smooths weekend surges. 2) Plan/lot-tagged feedback for follow-up. 3) Offline-friendly QR capture at spotty sites.

Pain Points

1) Lobby pileups and frustrated walk-ins. 2) Paper sign-ins losing leads. 3) Generic feedback, no plan context.

Psychographics

- Lives for orderly, high-throughput weekends.
- Prioritizes follow-up lists over long chit-chat.
- Pragmatic; values tools that never crash.
- Motivated by hitting monthly conversion targets.

Channels

1) LinkedIn builder networks 2) Facebook Groups new home sales 3) YouTube sales training 4) WhatsApp sales team 5) Outlook calendar

Portfolio Pricer Parker

- 39-year-old REO/asset manager, national investor.
- Oversees 80–150 properties across multiple MSAs.
- KPI-driven: DOM, gross-to-list, net recovery.
- Heavy Excel/Power BI user; Microsoft-first tooling.

Background

Ex-analyst from a hedge fund vendor desk. Moved to asset management to own outcomes; burned by slow, anecdotal updates from disparate agents.

Needs & Pain Points

Needs

1) Early risk flags on lagging interest. 2) Price-drop and repair recommendations. 3) Bulk reporting for investor committees.

Pain Points

1) Inconsistent, delayed field intelligence. 2) Portfolio blind spots across markets. 3) Negotiation surprises after weeks wasted.

Psychographics

- Ruthlessly pragmatic; decisions must be defensible.
- Prefers dashboards over narratives.
- Urgency bias; values speed to signal.
- Sensitive to carrying costs and leakage.

Channels

1) Outlook email briefings 2) Microsoft Teams channels 3) LinkedIn industry groups 4) Inman Intel newsletter 5) Power BI dashboards

Insights Integrator Ivy

- 29-year-old data engineer/analyst, 200-agent brokerage.
- Owns Snowflake/BigQuery, dbt, and Airflow pipelines.
- Reports to Ops; collaborates with Compliance and Marketing.
- Security-minded; manages SSO and role mappings.

Background

Built scrappy MLS and CRM joins that broke with updates. Now standardizes vendor integrations, insisting on versioned schemas, webhooks, and proper observability.

Needs & Pain Points

Needs

1) Stable, well-documented API and webhooks. 2) Sandbox, SDKs, and Postman collections. 3) SSO, SCIM, and granular permissions.

Pain Points

1) Flaky exports and brittle CSV mappings. 2) Opaque rate limits and throttling. 3) Breaking changes without versioning.

Psychographics

- API-first thinker; documentation devotee.
- Automates everything repeatable, ruthlessly.
- Demands reliability, observability, and support SLAs.
- Champions least-privilege access by default.

Channels

1) GitHub repositories 2) Stack Overflow tags 3) Slack dev communities 4) Vendor docs portals 5) Postman templates

Product Features

Key capabilities that make this product valuable to its target users.

Impact Rank

Automatically prioritizes objections by predicted effect on days-on-market, price risk, and buyer sentiment. Applies SLA tiers and suggests due dates so agents and coordinators tackle the highest‑leverage fixes first.

Requirements

Objection Ingestion & Normalization
"As a showing coordinator, I want all objections from showings automatically categorized and de-duplicated so that I can act on a clean, consistent list without manual cleanup."
Description

Automatically ingest objections from QR-coded door-hanger feedback, in-app notes, and imported channels, then de-duplicate, normalize to a controlled taxonomy (price, condition, layout, location, staging, etc.), detect language and translate as needed, and associate each objection to the correct listing, showing, and room. Persist source, timestamps, and responder metadata, perform basic sentiment extraction for pre-scoring context, and expose a clean, structured objection ledger for downstream scoring and prioritization.

Acceptance Criteria
Multi-Channel Objection Ingestion
- Given a valid QR door-hanger submission containing listing token, free-text comment, optional room selector, and responder token, when submitted, then the system records a new objection with a unique objection_id, status="active", source="QR", and occurred_at within 10 seconds of receipt.
- Given a valid in-app note created by an agent on a showing, when saved, then the objection is created with source="InApp" and associated to the current user_id and showing_id.
- Given a CSV import with ≥100 objections and required columns [listing_id, comment, occurred_at], when processed, then ≥99% of schema-valid rows are ingested; failures are logged with row numbers and error codes; a downloadable error report is produced within 60 seconds.
- Given any submission missing required fields, when processed, then it is rejected with a 4xx status and a machine-readable error code; no partial record is created.
Cross-Source De-duplication
- Given two objections on the same listing with Levenshtein similarity ≥0.90 or semantic similarity ≥0.85 within 24 hours and the same room (if present), when the second arrives, then it is merged into the existing objection as a new occurrence; occurrence_count increments, merged_from ids are recorded, and last_occurred_at updates.
- Given two objections below the duplicate thresholds, when processed, then both remain distinct records and appear separately in the ledger.
- Given a merged objection, when queried in the ledger, then only one canonical objection row is returned, with an occurrences array containing sources and timestamps.
- Given an agent manually marks two objections as not-duplicates, when reprocessing, then the pair is exempted from auto-merge for 7 days and the override is auditable.
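The merge rule above can be sketched in a few lines of Python. This is a minimal illustration, not the production algorithm: `difflib.SequenceMatcher.ratio()` stands in for the spec's Levenshtein similarity, semantic similarity is omitted, and the dict record shape is hypothetical.

```python
from difflib import SequenceMatcher


def is_duplicate(text_a: str, text_b: str, same_room: bool = True,
                 threshold: float = 0.90) -> bool:
    """Return True when two objection comments should auto-merge.
    Uses difflib's ratio as an approximation of Levenshtein similarity."""
    if not same_room:
        return False
    ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
    return ratio >= threshold


def merge(canonical: dict, incoming: dict) -> dict:
    """Fold a duplicate objection into the canonical record, tracking
    occurrence_count, merged_from ids, and last_occurred_at per the spec."""
    canonical["occurrence_count"] += 1
    canonical.setdefault("merged_from", []).append(incoming["objection_id"])
    canonical["last_occurred_at"] = incoming["occurred_at"]
    return canonical
```

A real pipeline would also persist the manual not-duplicate overrides so reprocessing can skip exempted pairs.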
Taxonomy Normalization to Controlled Categories
- Given free-text objections using synonyms (e.g., "too expensive", "needs repairs"), when normalized, then each maps to one primary category in {price, condition, layout, location, staging, amenities, noise, parking, hoa, schools, safety, other} and an optional subcategory, with taxonomy_confidence ≥0.80; otherwise category="other" with normalization_reason captured.
- Given profanity or irrelevant text under 3 words with no nouns, when processed, then it is categorized as "other" and flagged requires_review=true.
- Given an objection pre-tagged by a user with a valid category, when ingested, then the provided category is preserved; if invalid, it is re-mapped to the closest valid category and a warning is logged.
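The fallback behavior can be illustrated with a toy normalizer. The keyword map below is entirely hypothetical; a real implementation would use a trained classifier emitting taxonomy_confidence scores. Only the category names and the "other"/requires_review fallback come from the spec.

```python
# Hypothetical synonym-to-category map; categories follow the spec's
# controlled taxonomy, keywords are illustrative only.
KEYWORD_MAP = {
    "expensive": ("price", None),
    "overpriced": ("price", None),
    "repairs": ("condition", "repairs"),
    "small": ("layout", "room_size"),
    "staging": ("staging", None),
    "parking": ("parking", None),
}


def normalize(comment: str) -> dict:
    """Map a free-text objection to a controlled category, falling back
    to "other" with requires_review=true when no confident match exists."""
    text = comment.lower()
    for keyword, (category, subcategory) in KEYWORD_MAP.items():
        if keyword in text:
            return {"category": category, "subcategory": subcategory,
                    "requires_review": False}
    # No confident match: the spec's low-confidence path.
    return {"category": "other", "subcategory": None,
            "requires_review": True}
```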
Language Detection and Translation
- Given a non-English comment, when ingested, then language_code (ISO 639-1) is set, translation_en is populated, original_text is preserved, the translation provider and detection_confidence are stored, and P95 translation latency is ≤5 seconds.
- Given a mixed-language comment, when ingested, then the dominant language is selected with detection_confidence ≥0.80; if below 0.80, then requires_review=true.
- Given an unsupported language, when ingested, then language_code is set, translation_en=null, and not_translated=true with a reason code.
Association to Listing, Showing, and Room
- Given a QR submission with only a signed listing token, when processed, then it resolves to the correct listing_id and active showing timeslot with association_confidence ≥0.99.
- Given an in-app note on a specific showing with a selected room, when saved, then the objection is linked to that showing_id and room_id.
- Given ambiguous context (multiple candidate showings within 1 hour), when ingested, then the objection is routed to an Unassigned queue within 10 seconds and a task is created for manual resolution with suggested candidates.
Basic Sentiment Extraction
- Given an objection text, when processed, then sentiment_score in [-1.0, 1.0] and sentiment_label in {negative, neutral, positive} are computed and stored with model_version.
- Given a benchmark test set of 200 labeled objections, when scored, then macro-F1 for negative vs. non-negative is ≥0.80.
- Given a text shorter than 3 tokens, when processed, then sentiment_label="neutral" and sentiment_score ∈ [-0.2, 0.2].
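The short-text rule and label mapping can be sketched as follows. The ±0.2 label cutoffs are an assumption for illustration (the spec only fixes the short-text clamp range); the upstream score would come from a real model.

```python
def sentiment_label(score: float, text: str) -> tuple[float, str]:
    """Map a raw sentiment score in [-1, 1] to the spec's label set,
    applying the under-3-tokens neutral rule. The +/-0.2 cutoffs for
    positive/negative labels are assumed, not specified."""
    tokens = text.split()
    if len(tokens) < 3:
        # Spec: short texts are forced neutral with score in [-0.2, 0.2].
        return (max(-0.2, min(0.2, score)), "neutral")
    if score <= -0.2:
        return (score, "negative")
    if score >= 0.2:
        return (score, "positive")
    return (score, "neutral")
```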
Structured Objection Ledger Exposure
- Given an authenticated API client with scope "objections:read", when requesting GET /api/objections?listing_id=...&from=...&to=..., then the response is 200 with paginated results including the fields [objection_id, listing_id, showing_id, room_id, source, occurred_at, original_text, translation_en, language_code, category, subcategory, taxonomy_confidence, sentiment_score, sentiment_label, occurrence_count, responder_metadata, created_at, updated_at].
- Given query filters by category, source, language_code, and sentiment_label, when requested, then only matching records are returned and pagination metadata (total, page, page_size) is accurate.
- Given a dataset of 10,000 objections, when paginating with page_size=100, then P95 response time is ≤300 ms and no page returns duplicate or missing entries.
Predictive Impact Scoring Model
"As a listing agent, I want each objection scored for its impact on days-on-market, price risk, and sentiment so that I can focus on what will move the listing fastest."
Description

Provide a real-time model that estimates each objection’s predicted impact on days-on-market, price-reduction risk, and buyer sentiment shift. Use historical listing outcomes, comps, market velocity, listing metadata, objection taxonomy, and sentiment strength to compute three normalized scores (0–100) with confidence intervals and guardrails. Include cold-start heuristics, PII-safe processing, and low-latency inference endpoints suitable for interactive UI refreshes and webhook triggers.

Acceptance Criteria
Real-Time Scoring API Latency and Throughput
- Given a single objection payload ≤10 KB with required fields present, when POST /v1/impact-score is called, then 95th percentile latency is ≤250 ms and 99th percentile ≤400 ms per tenant over a rolling 5-minute window.
- The response includes request_id, timestamp, version, three scores, confidence intervals, and confidence.
- Error rate (HTTP 5xx plus timeouts) is <1% at 50 requests/second per tenant with 100 concurrent tenants.
- Batch requests of up to 50 objections have 95th percentile latency ≤700 ms and return per-item results in input order.
Normalized Scores, Confidence Intervals, and Guardrails
- Given any valid scoring request, when scores are returned, then dom_score, price_risk_score, and sentiment_shift_score are integers within [0, 100].
- For each metric, ci_low and ci_high are within [0, 100] and satisfy ci_low ≤ score ≤ ci_high.
- guardrail_flag is true and heuristic_reason is populated when confidence <0.30 or inputs are out-of-distribution.
- Outputs contain no NaN/Inf values; invalid inputs yield HTTP 400 with field-level errors, not HTTP 500.
- Increasing sentiment_strength by ≥0.2 with all else equal produces a non-increasing price_risk_score and a non-decreasing sentiment_shift_score.
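A post-processing step that enforces these invariants might look like the sketch below. Field names mirror the acceptance criteria; the clamping strategy and dict shape are assumptions, not the model's actual output contract.

```python
def finalize_scores(raw: dict, confidence: float) -> dict:
    """Clamp raw model outputs to integer [0, 100], force each CI to
    bracket its score, and set the low-confidence guardrail flag."""
    out = {}
    for metric in ("dom_score", "price_risk_score", "sentiment_shift_score"):
        score = int(round(max(0.0, min(100.0, raw[metric]))))
        # Hypothetical *_ci_low / *_ci_high keys; default to the score
        # itself so ci_low <= score <= ci_high always holds.
        ci_low = max(0, min(score, int(raw.get(metric + "_ci_low", score))))
        ci_high = min(100, max(score, int(raw.get(metric + "_ci_high", score))))
        out[metric] = {"score": score, "ci_low": ci_low, "ci_high": ci_high}
    out["guardrail_flag"] = confidence < 0.30
    if out["guardrail_flag"]:
        out["heuristic_reason"] = "low_confidence"
    return out
```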
Cold-Start Heuristics for Sparse Data
- Given a listing with no local comps or historical outcomes and minimal metadata, when objections for that listing are scored, then cold_start=true is present and heuristic_source indicates that taxonomy/market priors were used.
- Missing optional fields are imputed without errors; required fields are validated with clear messages.
- Unknown taxonomy labels map to "Other" with default priors and confidence ≤0.50.
- The latency and error targets from "Real-Time Scoring API Latency and Throughput" are met.
PII-Safe Processing and Logging
- Given input payloads that include PII in free text (names, emails, phone numbers, precise addresses), when the request is processed and logs are written, then PII is redacted before inference, persistence, and logging, and is never present in stored inference payloads.
- Transport uses TLS 1.2+ and at-rest storage is encrypted; access logs omit payload bodies by default.
- An automated PII scanner over 24 hours of inference logs reports 0 hits for email, phone, SSN, or GPS-coordinate patterns.
- Responses contain no PII; only request_id and listing_id identify the object.
Webhook and UI Refresh Integration
- Given scoring completes for an objection, when a webhook subscription exists, then a signed HMAC-SHA256 webhook is delivered within 1 second at p95 and retried up to 5 times with exponential backoff on non-2xx responses.
- Idempotency keys prevent duplicate processing; duplicates within 10 minutes are ignored.
- The interactive UI re-renders with updated scores within 200 ms of receiving the API response.
- The webhook payload and API response share the same request_id and version.
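Signing and verifying such a webhook with HMAC-SHA256 is straightforward with the standard library. This is a minimal sketch: the JSON canonicalization (sorted keys, compact separators) and the absence of a timestamp in the signed bytes are simplifying assumptions.

```python
import hashlib
import hmac
import json


def sign_webhook(secret: bytes, payload: dict) -> tuple[bytes, str]:
    """Serialize a payload deterministically and compute its
    HMAC-SHA256 signature for delivery alongside the body."""
    body = json.dumps(payload, separators=(",", ":"), sort_keys=True).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body, signature


def verify_webhook(secret: bytes, body: bytes, signature: str) -> bool:
    """Receiver-side check; compare_digest avoids timing side channels."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Sorting keys before signing means semantically identical payloads always produce the same signature regardless of dict construction order.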
Model Calibration and Backtest Performance
- Given a held-out validation set from the most recent 6 months covering target markets, when evaluating predictions offline, then AUC for price_reduction_risk is ≥0.72 against observed price-reduction events.
- Spearman correlation between predicted DOM impact and observed DOM delta is ≥0.50.
- Pearson correlation between predicted sentiment_shift_score and observed sentiment delta is ≥0.45.
- 80% confidence intervals contain the observed outcome in 70–90% of cases for each metric.
- These thresholds are met for at least 3 consecutive monthly backtests prior to release.
Feature Ingestion and Schema Validation
- Given the documented schema includes historical_outcomes, comps, market_velocity, listing_metadata, objection_taxonomy, and sentiment_strength, when scoring requests are validated, then requests missing required fields return HTTP 400 with machine-readable error codes and JSON pointers to field paths.
- Unknown fields are ignored without error and are not persisted.
- Masking each feature group to null versus providing representative values changes at least one score by ≥1 point on ≥80% of a 100-sample sensitivity test set, confirming utilization.
- Taxonomy values not in the controlled vocabulary are mapped to "Other" with a warning in the response.
Composite Priority Index & Tuning
"As a broker-owner, I want a configurable priority index that reflects our strategy so that my teams work on what matters most across markets."
Description

Combine individual impact dimensions into a single Priority Index using configurable weights and rule-based overrides (e.g., safety or access issues always escalate to top). Support brokerage-level defaults, per-listing adjustments, and time-decay so unresolved high-impact objections rise in priority. Recalculate on new feedback or status changes, ensure concurrency safety, and expose the index via API and events for integrations.

Acceptance Criteria
Weighted Composite Priority Index Calculation and Normalization
Given brokerage default weights {daysOnMarket:0.5, priceRisk:0.3, buyerSentiment:0.2} and normalized scores {daysOnMarket:80, priceRisk:60, buyerSentiment:40}, When the Priority Index is calculated, Then the result equals 66 and is rounded to the nearest integer. Given a dimension score is missing, When the index is calculated, Then a fallback value of 0 is used for the missing dimension unless a per-listing override exists. Given submitted weights do not sum to 1.0, When saving the configuration, Then the request is rejected with HTTP 422 and error code WEIGHTS_SUM_INVALID. Given submitted weights contain a value outside [0,1], When saving the configuration, Then the request is rejected with HTTP 422 and error code WEIGHT_OUT_OF_RANGE.
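With the example weights and scores, the weighted sum works out to 0.5·80 + 0.3·60 + 0.2·40 = 40 + 18 + 8 = 66. A minimal Python sketch of the composite, reusing the spec's error codes as exception messages (the function shape itself is an assumption):

```python
def priority_index(weights: dict, scores: dict) -> int:
    """Weighted composite of impact dimensions per the criteria:
    weights must lie in [0, 1] and sum to 1.0, missing dimension
    scores fall back to 0, and the result rounds to an integer."""
    if any(w < 0 or w > 1 for w in weights.values()):
        raise ValueError("WEIGHT_OUT_OF_RANGE")
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("WEIGHTS_SUM_INVALID")
    total = sum(w * scores.get(dim, 0) for dim, w in weights.items())
    return int(round(total))
```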
Rule-Based Overrides for Critical Categories
Given an objection tagged "safety" or "access", When prioritizing, Then an override is applied setting priorityIndex to 100, rank to 1 within the listing, and overrideReason to the triggering tag. Given multiple objections with override tags, When ranking, Then they are ordered by createdAt ascending, and all appear above non-overridden items. Given the override tag is removed or the objection is resolved, When recalculating, Then the item returns to weight-based ranking with overrideReason cleared.
Config Precedence and Auditability
Given brokerage-level default weights exist and a per-listing weight override is configured, When calculating the index for that listing, Then the per-listing weights are used. Given no per-listing override is present, When calculating the index, Then brokerage-level defaults are used. Given a user without the "ManageBrokerageDefaults" permission attempts to update brokerage defaults, When the request is processed, Then it is rejected with HTTP 403. Given any weights configuration is created or updated, When the change is saved, Then an audit record is written with actorId, scope (brokerage|listing), previousValues, newValues, and timestamp.
Time-Decay Escalation for Unresolved Items
Given a decayRate of +5% per 24h and an unresolved objection with current priorityIndex 60, When 48 hours elapse without status change, Then the recalculated index equals 66 (60 × 1.05 × 1.05 rounded) and does not exceed 100. Given an objection is marked resolved, When recalculating, Then decay stops and the index remains at its current non-increasing value due to decay (no further increases from decay are applied). Given an unresolved objection with priorityIndex 96 and decayRate +5% per 24h, When 24 hours elapse, Then the recalculated index is capped at 100. Given two items with identical base indices but different elapsed times, When recalculating with decay applied, Then the item with greater elapsed time ranks higher.
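The decay arithmetic above compounds per full 24-hour period: 60 × 1.05 × 1.05 = 66.15, rounded to 66, and 96 × 1.05 = 100.8, capped at 100. A sketch of that rule (whole-period compounding is assumed from the worked examples):

```python
def apply_decay(index: int, hours_elapsed: float, resolved: bool,
                rate_per_24h: float = 0.05) -> int:
    """Compound the +5%/24h time-decay escalation on an unresolved
    item's Priority Index, capped at 100; resolved items stop decaying."""
    if resolved:
        return index
    periods = int(hours_elapsed // 24)  # only full 24h periods count
    value = index * (1 + rate_per_24h) ** periods
    return min(100, int(round(value)))
```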
Recalculation Triggers and Concurrency Safety
Given a new feedback event is recorded for a listing, When the event is persisted, Then the Priority Index for affected objections is recalculated and persisted within 2 seconds p95 and 5 seconds p99. Given two concurrent updates attempt to modify the same weights configuration using the same stale version, When the second update is applied, Then the service responds 409 Conflict with error code VERSION_CONFLICT and no partial write occurs. Given a client retries the update with the latest version returned by the 409 response, When the request is processed, Then the update succeeds with 200/204 and a new version is returned. Given duplicate recalculation events are received for the same change, When processed, Then the outcome is idempotent and the stored index version is not incremented more than once.
API and Event Exposure for Integrations
Given an authorized user calls GET /listings/{listingId}/priority-index, When the listing exists, Then the response is 200 and includes listingId, priorityIndex, components[{dimension,score}], weights, override{applied,reason}, decay{rate,lastAppliedAt}, updatedAt, and version. Given an unauthorized caller requests the endpoint, When processed, Then the response is 403. Given a listing's Priority Index changes, When the change is committed, Then an event priorityIndex.updated is published within 2 seconds containing listingId, objectionId, indexOld, indexNew, rankOld, rankNew, override{applied,reason}, reasonCode, version, occurredAt, and idempotencyKey. Given a webhook subscriber returns non-2xx to the event delivery, When retried, Then deliveries back off exponentially for up to 24 hours and include the same idempotencyKey; after exhaustion the event is moved to a dead-letter queue.
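The 24-hour exponential-backoff retry budget can be sketched as a delay schedule. The 1-second base and factor of 2 are assumptions; the spec only fixes the 24-hour budget and the dead-letter behavior after exhaustion.

```python
def backoff_schedule(base_seconds: float = 1.0, factor: float = 2.0,
                     max_total_hours: float = 24.0) -> list[float]:
    """Return exponential retry delays (in seconds) for webhook
    redelivery, stopping before the total budget is exceeded; the
    event would then move to a dead-letter queue."""
    delays: list[float] = []
    total = 0.0
    delay = base_seconds
    while total + delay <= max_total_hours * 3600:
        delays.append(delay)
        total += delay
        delay *= factor
    return delays
```

Each delivery attempt would carry the same idempotencyKey so subscribers can safely discard duplicates.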
SLA Tiering & Smart Due Dates
"As a coordinator, I want SLA tiers and smart due dates so that I can schedule remedial actions in time to protect the listing timeline."
Description

Map Priority Index to SLA tiers (Critical/High/Medium/Low) with editable policy templates. Generate suggested due dates that account for listing target timelines, business hours, weekends/holidays, vendor lead times, and assignee capacity. Create escalation paths, snooze/deferral rules with justification, and automatic re-targeting when dependencies shift. Surface due dates in UI and sync to connected calendars/task systems.

Acceptance Criteria
SLA Tier Mapping from Priority Index (Policy Templates)
Given an objection with a computed Priority Index and the organization’s active SLA policy template, when the Priority Index is evaluated, then the system assigns a tier of Critical, High, Medium, or Low deterministically based on the template thresholds within 2 seconds. Given the default policy template (Critical ≥ 90, High 70–89, Medium 40–69, Low < 40), when test scores of 95, 75, 55, and 20 are evaluated, then the resulting tiers are Critical, High, Medium, and Low respectively. Given an admin edits tier thresholds in a policy template, when they attempt to save, then the system validates that ranges are contiguous, non‑overlapping, and cover 0–100, else the save is blocked with a clear error. Given multiple policy templates exist, when a listing is assigned a specific template, then all new and updated objections for that listing use that template; if no template is assigned, the org default applies; if the assigned template is invalid, the system falls back to the system default and logs a warning. Given a policy template is updated, when the admin chooses whether to reclassify existing items, then existing items are re-tiered only if “Auto‑reclassify existing” is enabled; otherwise only new/updated items use the new thresholds. Given an objection is tiered, when viewed via API and UI, then the tier value, source template ID/version, and evaluation timestamp are present and audit‑logged.
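The default thresholds and the save-time validation rule translate directly into code. A sketch, assuming templates are represented as (tier, low, high) ranges:

```python
# Default policy template from the criteria:
# Critical >= 90, High 70-89, Medium 40-69, Low < 40.
DEFAULT_TEMPLATE = [
    ("Critical", 90, 100),
    ("High", 70, 89),
    ("Medium", 40, 69),
    ("Low", 0, 39),
]


def assign_tier(priority_index: int, template=DEFAULT_TEMPLATE) -> str:
    """Deterministically map a Priority Index in [0, 100] to a tier."""
    for tier, low, high in template:
        if low <= priority_index <= high:
            return tier
    raise ValueError("template does not cover this index")


def validate_template(template) -> bool:
    """Save-time check: ranges must be contiguous, non-overlapping,
    and cover 0-100, else the save is blocked."""
    bounds = sorted((low, high) for _, low, high in template)
    if bounds[0][0] != 0 or bounds[-1][1] != 100:
        return False
    return all(bounds[i][1] + 1 == bounds[i + 1][0]
               for i in range(len(bounds) - 1))
```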
Due Date Generation Honors Calendars, Timelines, Lead Times & Capacity
Given an objection is tiered and converted to a task, when a suggested due date is generated, then the due date respects the assignee’s time zone, falls within org business hours, and excludes weekends and org holidays by rolling to the next valid business window. Given a listing has a target milestone (e.g., first open house) and the computed due date would land after that milestone, when due date generation runs, then the system proposes the earliest feasible date before the milestone; if infeasible due to constraints, it flags “Target at risk” with rationale. Given the task requires a vendor with a defined lead time and earliest availability, when due date generation runs, then the suggested due date is no earlier than vendor earliest availability plus lead time; if that violates the SLA window, the task is marked “SLA breach risk” and an escalation trigger is queued per policy. Given the assignee has a daily capacity (hours) and existing workload, when due date generation runs, then the task is scheduled into the next available capacity slot within policy constraints; if capacity prevents meeting the SLA, the system suggests the earliest feasible date and recommends alternate assignees (if configured). Given any of business hours, holiday calendar, vendor availability, assignee capacity, or listing milestone changes, when recomputation is triggered, then the suggested due date is recalculated within 15 seconds and the change reason is attached. Given a suggested due date is computed, when presented in UI and API, then the payload includes the due datetime (ISO‑8601), time zone, constraint factors applied (hours, holidays, lead time, capacity), and risk flags (none/breach risk/target at risk).
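The "roll to the next valid business window" behavior can be sketched in isolation. This deliberately ignores assignee time zones, vendor lead times, and capacity; the 9:00-17:00 office hours are illustrative org settings, not spec values.

```python
from datetime import datetime, timedelta


def next_business_slot(due: datetime, holidays: set,
                       open_hour: int = 9, close_hour: int = 17) -> datetime:
    """Roll a candidate due datetime forward so it falls on a weekday,
    outside the org holiday set, and within business hours."""
    if due.hour < open_hour:
        due = due.replace(hour=open_hour, minute=0, second=0, microsecond=0)
    elif due.hour >= close_hour:
        # Past closing: move to opening time of the next day.
        due = (due + timedelta(days=1)).replace(hour=open_hour, minute=0,
                                                second=0, microsecond=0)
    while due.weekday() >= 5 or due.date() in holidays:
        due = (due + timedelta(days=1)).replace(hour=open_hour, minute=0,
                                                second=0, microsecond=0)
    return due
```

A full implementation would layer vendor lead times and capacity slotting on top of this window-rolling primitive and attach the applied constraints to the API payload.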
Escalation Paths for Breach Risk and Overdue
Given an org defines escalation rules per tier (e.g., warn at 50% of SLA, escalate at due, escalate+ at N hours overdue), when a task meets each threshold, then notifications are sent to the configured roles (assignee, coordinator, manager) within 5 minutes with tier, task link, and reason. Given a task is predicted to miss its due date based on capacity or vendor constraints, when prediction detects risk, then a pre‑emptive escalation is sent according to the tier’s pre‑risk rule and the task is labeled “Predicted miss”. Given a task becomes overdue, when the overdue threshold is crossed, then the task status shows “Overdue”, the escalation step increments, and repeated escalations follow the configured cadence until acknowledged or resolved. Given an escalation is sent, when viewed in the audit log, then the log includes timestamp, recipients, channel (in‑app, email), trigger reason, and outcome (delivered, bounced, acknowledged) with user and time of acknowledgment. Given an escalation is acknowledged by an authorized user, when the user acknowledges, then further escalations pause for the policy’s configured grace period unless the due date changes again.
Snooze/Deferral with Justification, Limits, and Approval
Given a user with permission initiates a snooze/deferral on a task, when they submit, then they must choose a policy‑defined reason and enter a justification of at least 10 characters, else the action is blocked with inline errors. Given tier‑based limits exist (max deferral window and max snoozes per task per tier), when a user selects a new due date beyond the allowed window or exceeds the snooze count, then the system requires manager approval and records the pending state; without approval the due date remains unchanged. Given a snooze/deferral is approved, when the new due date is applied, then audit logs capture requester, approver, reason, justification, old/new due dates, and policy that enforced the limits. Given a task is snoozed, when escalations are scheduled, then escalation timers are recalculated according to the new due date and tier rules, and impacted stakeholders receive an update notification. Given a user attempts to snooze a Critical task where policy forbids snoozes, when they submit, then the system blocks the action and displays the policy rule causing the block.
Automatic Re-targeting on Dependency or Timeline Changes
- Given a task has dependencies (predecessor tasks or listing milestones), when a dependency date shifts, then the task’s suggested due date is recomputed to occur no earlier than the dependency completion and still within policy constraints; if infeasible, the task is marked at risk and escalations are queued.
- Given a task has a manually overridden due date, when dependencies shift, then the manual override is preserved if still valid; if invalidated, the assignee is prompted to accept a new suggested date or keep the override with a required justification per policy.
- Given multiple downstream tasks depend on a shifted milestone, when the milestone date changes, then all affected tasks are recalculated in topological order and updated within 60 seconds, with a single consolidated impact digest sent to assignees and owners.
- Given re‑targeting adjusts a due date, when viewing the task history, then the event shows the old/new due dates, trigger (dependency change), constraints applied, and the recalculation engine version.
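The "recalculated in topological order" requirement implies a dependency-graph sort before due dates are recomputed, so every task is processed only after all of its predecessors. A minimal sketch using Kahn's algorithm (the `deps` input shape is an assumption):

```python
from collections import defaultdict, deque

def recompute_order(deps: dict[str, list[str]]) -> list[str]:
    """Return task ids in dependency order (Kahn's algorithm).
    deps maps each task id to the list of its predecessor task ids."""
    indegree = {t: len(p) for t, p in deps.items()}
    dependents = defaultdict(list)
    for task, preds in deps.items():
        for p in preds:
            dependents[p].append(task)
    queue = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for child in dependents[t]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return order
```

Cycle detection matters here: a malformed dependency graph should surface as an error rather than silently skipping tasks from the recalculation pass.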
UI Surfacing of SLA Tier and Due Date
- Given a user views the Impact Rank queue or task detail, when the item renders, then it displays the SLA tier badge, suggested due date/time with time zone, current risk state, and key drivers (e.g., vendor lead time, capacity) as chips or tooltips.
- Given policy templates influence results, when the user clicks the tier badge or an info icon, then a popover shows the active policy template name/version and the rule path that determined the tier and due date.
- Given a user needs to prioritize work, when they sort or filter, then the list can be sorted by tier (Critical→Low) and by due date ascending, and filtered by risk state, tier, assignee, and vendor involvement.
- Given accessibility needs, when navigating with keyboard or screen reader, then all tier, due date, and risk indicators are accessible with ARIA labels and pass WCAG AA contrast checks.
External Calendar and Task System Sync
- Given a suggested due date is created or updated for a task, when sync is enabled and credentials are valid, then an event/task is created or updated in connected calendars/task systems within 60 seconds, with a stable external ID for subsequent updates.
- Given a task is deleted or completed in TourEcho, when sync runs, then the corresponding external event/task is cancelled or marked complete, and attendees/assignees are notified per integration capability.
- Given time zones differ between TourEcho and the external system, when the event is created, then the start/end times reflect the assignee’s time zone accurately and match the suggested due date window.
- Given an integration error occurs (rate limit, auth failure), when sync attempts, then the system retries with exponential backoff up to policy limits and surfaces a visible error to the user with remediation steps.
- Given an external system imposes working hours constraints, when creating events, then the event respects those constraints where the API supports it; otherwise the description includes a note of conflicts and the item is flagged in TourEcho for review.
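The exponential-backoff retry schedule might be generated as below; the base delay, cap, and full-jitter strategy are illustrative choices, not policy values from this spec:

```python
import random

def backoff_delays(max_retries: int, base: float = 1.0,
                   cap: float = 60.0) -> list[float]:
    """Delay (seconds) before each retry of a failed sync call:
    exponential growth with full jitter, capped per policy."""
    delays = []
    for attempt in range(max_retries):
        # jitter spreads retries out so many failing syncs don't realign
        delays.append(random.uniform(0, min(cap, base * 2 ** attempt)))
    return delays
```

Once `max_retries` is exhausted, the item would be marked failed and the user-facing error with remediation steps raised, per the criterion above.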
Impact Work Queue & Notifications
"As an agent, I want a prioritized task queue with alerts so that I never miss high-impact fixes or SLA deadlines."
Description

Deliver a role-aware, prioritized work queue showing highest-leverage objections per listing and portfolio. Enable assignment, bulk actions, and status transitions (open/in-progress/resolved/blocked) with links to evidence and remediation suggestions. Send real-time and digest notifications via email/SMS/push for new critical items, upcoming SLA deadlines, and escalations, with granular user preferences.

Acceptance Criteria
Role-Aware Prioritized Work Queue (Listing View)
- Given an authenticated user with role Agent, Coordinator, or Broker-Owner, When opening a listing’s Impact Queue, Then only objections the role has permission to view/edit are shown. - Given queue data exists, When the page loads, Then items are sorted by Impact Rank descending; if scores tie, then sort by SLA due date ascending; if still tied, by last activity descending. - Given any item row, When rendered, Then it displays columns: Objection, Impact Score (0–100), SLA Tier, Suggested Due Date (TZ-aware), Status, Assignee, Last Activity, and Actions. - Given a user changes sort or filters (status, assignee, SLA tier), When returning within 7 days, Then those preferences are persisted for that user and view.
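The three-level tiebreak (Impact Rank descending, then SLA due date ascending, then last activity descending) can be expressed as a single composite sort key; the item fields below are illustrative:

```python
from datetime import datetime

def queue_sort_key(item: dict):
    # Negate score and activity timestamp to get desc/asc/desc ordering
    # from Python's ascending sort.
    return (-item["impact_score"],
            item["sla_due"],                      # earlier due date first
            -item["last_activity"].timestamp())   # most recent activity first

items = [
    {"id": "a", "impact_score": 80, "sla_due": datetime(2024, 5, 2),
     "last_activity": datetime(2024, 5, 1, 9)},
    {"id": "b", "impact_score": 90, "sla_due": datetime(2024, 5, 3),
     "last_activity": datetime(2024, 5, 1, 8)},
    {"id": "c", "impact_score": 80, "sla_due": datetime(2024, 5, 1),
     "last_activity": datetime(2024, 5, 1, 7)},
]
ranked = sorted(items, key=queue_sort_key)
# b first (highest score), then c (earlier due date breaks the 80-80 tie), then a
```

Keeping the tiebreak in one key function makes it trivial to apply identically in the listing view and the portfolio view.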
Portfolio View With Cross-Listing Prioritization
- Given a user with access to 2+ listings, When opening the Portfolio Impact Queue, Then objections across all accessible listings are displayed in a single prioritized list sorted by Impact Rank then SLA due date. - Given filters are applied (listing, team member, market, status), When applied, Then only matching items are shown and a result count is displayed. - Given more than 100 items, When viewing, Then pagination or infinite scroll is available and indicates total pages or items; navigating does not lose filters.
Assignment & Bulk Status Transitions
- Given one or more items are selected, When assigning to a user, Then the assignee is updated, an audit entry is stored with actor, timestamp, and previous assignee, and the assignee’s notification preferences are applied. - Given 1–200 items selected, When applying a bulk action (change status to Open, In‑Progress, Resolved, Blocked or assign), Then each item updates independently; partial failures are reported per item with reason; successful updates are not rolled back. - Given an item in any status, When a user attempts an invalid transition (e.g., Resolved -> In‑Progress without reopening), Then the action is blocked and an explanatory message is shown.
SLA Tiers, Suggested Due Dates & Escalations
- Given a new Critical objection is created, When it is ranked, Then an SLA Tier is assigned (Critical, High, or Standard) and a suggested due date is computed and displayed within 60 seconds. - Given an item within 24 hours of its suggested due date, When viewed, Then it is visually flagged and appears in “Upcoming SLA” filters. - Given an item past due by more than 60 minutes, When conditions persist, Then an escalation flag is set and an escalation notification event is emitted once per 6 hours until status is In‑Progress or Resolved.
Evidence Links & Remediation Suggestions
- Given an item has captured feedback, When viewing its details, Then at least one evidence link (QR response, photo, or transcript) is present and opens successfully in a new tab with a 200 response. - Given remediation guidance is available, When viewing the item, Then at least one suggestion with rationale and estimated impact is displayed and includes a one-click action to mark as Applied. - Given a broken or unauthorized evidence link, When clicked, Then the user sees an in-app error state and a Request Access/Retry option; the error is logged for support.
Notifications: Real-Time Alerts & Daily Digest
- Given a user is subscribed to real-time alerts, When a new Critical item is created or an item enters SLA due in less than 24 hours or escalates, Then an alert is delivered via the user’s enabled channels (email, SMS, push) within 60 seconds. - Given multiple events for the same item within 5 minutes, When alerts are generated, Then they are de-duplicated into a single alert per channel with a roll-up of event types. - Given a user is subscribed to a daily digest, When the scheduled time occurs, Then a digest is delivered within 5 minutes containing the top 10 items by Impact Rank, items due in next 24 hours, and new escalations, with deep links that open the corresponding item.
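The five-minute de-duplication rule could be sketched as follows, rolling events up per (item, channel) group; the event field names are assumptions:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

def dedupe_alerts(events: list[dict]) -> list[dict]:
    """Collapse events for the same (item, channel) within a 5-minute
    window into one alert carrying a roll-up of event types."""
    alerts: list[dict] = []
    for ev in sorted(events, key=lambda e: e["ts"]):
        key = (ev["item_id"], ev["channel"])
        last = next((a for a in reversed(alerts)
                     if (a["item_id"], a["channel"]) == key), None)
        if last and ev["ts"] - last["ts"] <= WINDOW:
            # merge into the open alert rather than sending another one
            last["event_types"] = sorted(set(last["event_types"]) | {ev["type"]})
        else:
            alerts.append({"item_id": ev["item_id"], "channel": ev["channel"],
                           "ts": ev["ts"], "event_types": [ev["type"]]})
    return alerts
```

Note the window here is anchored to the first event of each group; anchoring to the most recent event instead is an equally defensible reading of the criterion.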
User Preferences for Notification Channels & Quiet Hours
- Given a user opens Notification Preferences, When configuring, Then they can toggle per-event-type (New Critical, Upcoming SLA, Escalation, Assignment) per channel (email, SMS, push). - Given a user sets quiet hours and a time zone, When events occur during quiet hours, Then alerts are suppressed until quiet hours end unless Allow Escalation Overrides is enabled; timestamps and due dates display in the selected time zone. - Given preferences are changed, When saved, Then changes take effect for subsequent events and are auditable with actor and timestamp.
Explainability & Audit Trail
"As an agent, I want to see why an objection is ranked where it is so that I can confidently explain priorities to sellers and make informed trade-offs."
Description

Provide transparent rationale for each rank: top contributing factors, exemplar feedback snippets, comparable-listing context, and confidence level. Maintain a tamper-evident audit log capturing score changes, rank shifts, SLA updates, model version, and weight configurations. Support CSV/JSON export, role-based access to sensitive details, and seller-shareable summaries to build trust.

Acceptance Criteria
Agent views rationale for a ranked objection
Given an authenticated listing agent is viewing a ranked objection in Impact Rank, when they open the "Why this rank?" panel, then they see:
- Top contributing factors (minimum 3) with normalized weights summing to 100% and short factor definitions
- At least 2 exemplar feedback snippets with timestamps and masked respondent identifiers
- Comparable-listing context with at least 3 comps within the configured radius/time window (default 2 miles/90 days) including price, DOM, and similarity score
- A confidence level displayed as 0–100% plus a qualitative label (Low/Medium/High) and the compute timestamp
- A link to the audit trail for this objection
And all fields are non-null for computed ranks and render within 1.5s for the 95th percentile of requests.
Tamper-evident audit log of rank and SLA changes
Given any event that affects an objection’s ranking or metadata (score recalculation, rank shift, SLA tier change, due date change, model version update, weight configuration change), when the event is committed, then:
- An append-only audit entry is created capturing: event_type, actor_or_system_id, UTC timestamp, pre_value, post_value, model_version, config_snapshot_id, and reason
- The entry includes a hash_chain value = SHA-256(prev_hash + entry_body)
- Attempts to alter or delete an entry are rejected; instead a new corrective entry is appended and a tamper_alert flag is recorded
- The audit trail can be queried by objection_id, date range, or event_type, returning results chronologically with contiguous hash verification status (PASS/FAIL)
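The hash-chain construction named above (hash_chain = SHA-256(prev_hash + entry_body)) can be illustrated directly; canonical JSON serialization of the body is an added assumption, since the spec does not define one:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel prev_hash for the first entry

def append_entry(chain: list[dict], entry_body: dict) -> dict:
    """Append an audit entry whose hash covers the previous entry's
    hash plus a canonical serialization of this entry's body."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(entry_body, sort_keys=True)
    entry = {"prev_hash": prev_hash, "body": entry_body,
             "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify(chain: list[dict]) -> bool:
    """Recompute every link; any in-place edit breaks the chain (FAIL)."""
    prev = GENESIS
    for e in chain:
        body = json.dumps(e["body"], sort_keys=True)
        if e["prev_hash"] != prev or \
           e["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True
```

This is what makes the log tamper-evident rather than tamper-proof: edits are still physically possible at the storage layer, but `verify` will report a contiguous-hash FAIL from the altered entry onward.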
CSV and JSON export of rationale and audit trail
Given a user with export permission selects Export, when they choose CSV or JSON, a date range, and data scope (Rationale, Audit, or Both), then a downloadable file is generated with the following schemas:
- Rationale: objection_id, listing_id, rank, score, top_factors[], snippets[], comps[], confidence_percent, confidence_label, computed_at_utc, model_version, config_snapshot_id
- Audit: event_id, objection_id, event_type, actor_or_system_id, ts_utc, pre_value, post_value, prev_hash, hash
And sensitive fields are masked or excluded according to role policy; the export completes within 10 seconds for up to 10,000 audit events and streams for larger sets; a SHA-256 checksum for the exported file is provided.
Role-based access to sensitive details
Given roles exist: Agent, Coordinator, Broker_Admin, Seller_Link, Auditor When users access rationale details, audit trail, or exports Then access is enforced as: (1) Seller_Link: summary only (top objections, reasons, suggested actions, confidence), no raw snippets, no respondent IDs, no comp addresses, no audit access; (2) Agent/Coordinator: masked respondent IDs and masked device/IP; audit view allowed; (3) Broker_Admin and Auditor: full fields And unauthorized access attempts return HTTP 403 and are logged with user_id, resource, and timestamp And policy updates propagate within 5 minutes and are reflected in subsequent requests
Seller-shareable summary link
Given an agent generates a seller-shareable summary for a listing When the link is opened by a recipient Then the page shows: top 3 ranked objections, plain-language reasons, suggested actions with due dates (from SLA), and confidence labels And no raw feedback text or identifying metadata is displayed; only paraphrased snippets are used And the link is tokenized, read-only, expires by default in 30 days (configurable), can be revoked at any time, and all accesses are logged
Traceability of model version and weight configurations
Given a rank is computed or recomputed for any objection When the computation completes Then the result stores and displays: model_version (semver + git/hash), feature_weight_config_id, data_snapshot_ts_utc, and training_data_cutover_date And these fields are visible in the rationale panel, included in exports, and captured in each relevant audit entry And any change to model_version or feature_weight_config creates a system audit event summarizing the number of affected objections
Outcome Tracking & Model Calibration
"As a broker-owner, I want to measure outcomes and calibrate the model so that Impact Rank continuously improves and proves ROI."
Description

Track post-remediation outcomes (DOM delta, price changes, offer activity, sentiment improvement) and resolution status of each objection. Attribute realized impact to actions where possible, compare predicted vs. actual, and feed results into periodic model retraining and weight tuning. Provide A/B testing of weighting policies and monitoring dashboards for performance, drift, and SLA adherence.

Acceptance Criteria
Post-Remediation Outcome Metrics Recorded per Objection
Given an objection has at least one remediation action and the assignee marks the action "Completed" When the completion event is saved Then an initial outcome record is created within 1 hour with fields: listing_id, objection_id, action_id(s), completion_timestamp, calculation_version, and baseline metrics initialized. And the outcome record is refreshed within 1 hour of new data events (new showing feedback, offer logged, price change, contract) and at least nightly for 30 days or until contract, whichever comes first. Then the outcome record contains: DOM_remaining_predicted_at_completion (days), actual_days_to_contract_from_completion (days), DOM_delta = actual - predicted (days), price_change_since_completion (amount and direction), offers_since_completion (count), average_sentiment_delta (post-remediation vs prior 7-day window). And ≥95% of completed remediations have a non-stale (≤24h) outcome record. And the outcome record is retrievable via API and UI filters by listing, objection type, action type, and date range.
Objection Resolution Status Lifecycle & SLA Tracking
- Given a new objection is created by Impact Rank, when it is saved, then its status defaults to "Open" and receives an SLA due date per tier policy.
- Given an owner accepts a remediation task, when work begins, then status transitions to "In Progress" and start_timestamp is recorded.
- Given a blocker is recorded, when status is updated, then status may transition to "Blocked" with required blocker_reason and next_review_date.
- Given all linked remediation actions are completed, when the owner marks resolution, then status transitions to "Resolved" and resolved_timestamp is recorded.
- Given outcome metrics confirm improvement or acceptance criteria are met, when a reviewer verifies, then status transitions to "Verified".
- Any status may transition to "Won't Fix" with a mandatory rationale; otherwise only the following transitions are allowed: Open→In Progress, In Progress→Blocked, Blocked→In Progress, In Progress→Resolved, Resolved→Verified.
- SLA adherence is calculated as the percentage of objections reaching Resolved by their due date; this metric is visible per listing and portfolio.
- The audit log captures user, timestamp, old_status, new_status, and notes for every transition.
Impact Attribution to Remediation Actions
Given an outcome record is being finalized for an objection When attribution is computed Then the default mode is single-touch last-touch within a 14-day post-completion window, assigning 100% impact to the most recent completed action linked to the objection. Given multiple actions are completed within the window and multi-touch mode is enabled When attribution is computed Then impact is apportioned proportionally to each action’s predicted impact weight normalized to sum to 1. Then the outcome record stores attribution_method (last_touch|multi_touch|ambiguous), action_ids, and fractional_contributions. And if no eligible actions exist in the window Then attribution_method is set to "ambiguous" and fractional_contributions are empty. And ≥99% of outcome records have a non-null attribution_method field. And the attribution window length and mode are configurable per workspace and are recorded in calculation_version.
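A sketch of the two attribution modes plus the ambiguous fallback; the action field names are illustrative:

```python
def attribute_impact(actions: list[dict], multi_touch: bool) -> dict:
    """Last-touch gives 100% to the most recently completed action;
    multi-touch splits impact in proportion to each action's predicted
    impact weight, normalized to sum to 1."""
    if not actions:
        # no eligible actions in the attribution window
        return {"attribution_method": "ambiguous",
                "fractional_contributions": {}}
    if not multi_touch:
        last = max(actions, key=lambda a: a["completed_at"])
        return {"attribution_method": "last_touch",
                "fractional_contributions": {last["id"]: 1.0}}
    total = sum(a["predicted_weight"] for a in actions)
    return {"attribution_method": "multi_touch",
            "fractional_contributions": {
                a["id"]: a["predicted_weight"] / total for a in actions}}
```

Normalizing by the weight sum is what guarantees the fractional contributions total exactly 1 regardless of how the predicted weights were scaled.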
Predicted vs Actual Impact Comparison & Calibration Metrics
Given an objection is first ranked by Impact Rank When the rank is generated Then the system snapshots predicted components with version stamps: predicted_DOM_reduction (days), predicted_price_risk_change, predicted_sentiment_lift, and predicted_offer_rate_delta. Given the objection’s outcome record is finalized (contract reached or 30 days elapsed post-completion) When comparison runs Then the system computes and stores: signed_error_dom_days, absolute_error_dom_days, sentiment_lift_error, offer_rate_delta_error, and price_risk_direction_accuracy (0/1). Then weekly aggregates are produced per market, listing segment, and objection type: MAE_dom_days, MAPE_sentiment_lift (where denominator ≠ 0), accuracy_price_risk_direction, and calibration bins. And a drift alert is opened if 4-week rolling MAE_dom_days increases by >20% vs the prior 4 weeks or price_risk_direction_accuracy drops below 0.6. And the UI dashboard displays prediction vs actual distributions, calibration curves, and error trends, filterable by model_version and calculation_version.
Scheduled Model Retraining & Safe Deployment of Weights
Given the weekly retraining window opens When at least 200 new finalized outcomes exist since the last training and feature schema is unchanged or migrated with tests Then the pipeline trains candidate models/weighting policies with fixed random seeds and logs data/feature hashes. Then holdout evaluation must meet guardrails: MAE_dom_days ≤ current_prod_MAE_dom_days × 0.95 OR within 1% while improving calibration slope toward 1.0; price_risk_direction_accuracy ≥ current - 0.02. When guardrails pass Then the candidate is shadow-deployed to 100% of traffic for inference-only logging for ≥48 hours and must keep p95 latency ≤ 300ms and error rate ≤ 0.5%. When shadow metrics pass Then canary rollout to 10% of listings occurs for ≥72 hours with automatic rollback on SLA on-time rate drop >5% absolute versus control. Then upon promotion, model_version and weight_version are incremented, artifacts are versioned and reproducible, and a change log entry with approver is recorded.
A/B Testing of Weighting Policies for Impact Rank
Given an experiment is created to compare weight policies When listings are enrolled Then randomization unit is listing_id with stratification by market and price band; contamination between arms is ≤2%. Then minimum sample size is ≥200 listings per arm over ≥14 days or until reaching 80% power for detecting a 10% reduction in time-to-resolution of top-3 objections. Then primary success metric is median time-to-resolution for top-3 objections; secondary metrics include SLA on-time rate, offer_rate within 14 days, and prediction MAE_dom_days. Then the winner is declared only if primary metric improves with p<0.05 and no secondary guardrail degrades by >5% relative. And a harm stop triggers if SLA on-time rate degrades by >10% absolute for any arm. And experiment status, assignment, metrics, and final decision are visible in UI and exportable via API.
Monitoring Dashboards for Performance, Drift, and SLA Adherence
Given production is running When dashboards load Then they display at minimum: MAE_dom_days (global and by segment), price_risk_direction_accuracy, sentiment_lift_R2, attribution coverage (% outcome records with non-null attribution), SLA on-time %, backlog aging, data latency from event to outcome update, and PSI for top 10 features. Then tiles update at least hourly for operational metrics and daily for modeling metrics, with historical trends up to 12 months. Then alerting thresholds are configured: PSI > 0.2, SLA on-time % < 90%, missing outcome rate > 5%, latency p95 > 1 hour; alerts page the on-call rotation and open incidents with timestamps. And dashboards support drill-down by market, agent, listing, model_version, and date range, and allow CSV export respecting workspace permissions.

ROI Gauge

Pairs each auto-task’s cost range with projected payoff—expected showing lift or days-on-market saved—so agents can justify spend to sellers and broker-owners approve with confidence.

Requirements

Task Cost Normalization & Sources
"As a listing agent, I want accurate, market-specific cost ranges for tasks so that I can present realistic budgets and ROI to sellers."
Description

Calculates and maintains cost ranges for each auto-task by integrating partner price lists and marketplace SKUs, enabling market-specific defaults, manual overrides, and currency/tax handling. Stores versioned cost baselines by MLS/ZIP, applies discounts/fees, and exposes a consistent cost API for the ROI Gauge and proposal builder. Includes permissioned admin tools, audit logs, and scheduled refresh from providers to keep ranges current.

Acceptance Criteria
Provider Price List Ingestion & SKU Mapping
Given a provider price list file (CSV or JSON) containing at least 10,000 rows and valid marketplace SKUs accessible via HTTPS or SFTP When the ingestion job is triggered manually or by schedule Then the system validates required fields (providerId, SKU, taskId, currency, basePrice, effectiveStart) and rejects rows missing any required field with a downloadable error report And successful rows are imported and mapped to auto-tasks And duplicate rows (same providerId, SKU, effectiveStart) are de-duplicated idempotently And a run processing 50,000 rows completes in under 5 minutes And conflicts between multiple SKUs for the same task are resolved by configured provider precedence rules And re-running the job with the same input produces no changes (idempotent) And the job emits metrics (processed, imported, rejected) and sends an alert on any provider-level failure
Market-Specific Baseline Versioning by MLS/ZIP
Given an MLS ID and ZIP code and a request timestamp When requesting cost baselines for a task Then the effective version where effectiveStart <= timestamp < effectiveEnd is returned And the response contains minCost, maxCost, medianCost, currency, versionId, and source And ZIP-scoped baseline is used when present, else MLS-scoped, else Global default And creating a new baseline for the same scope increments versionId and preserves prior versions retrievable by versionId And changing the request timestamp to a prior period returns the correct historical version And 100 concurrent reads return consistent results with no partial updates
Currency & Tax Normalization
Given provider costs in various currencies and tax inclusivity flags When normalizing to a target currency and region (MLS/ZIP) Then the latest FX rate not older than 24 hours is used, or the last-known-good rate with a warning flag if newer is unavailable And amounts are rounded using bankers' rounding to 2 decimals unless the currency is zero-decimal (e.g., JPY) And region-specific tax (VAT or sales tax) is applied correctly based on taxInclusive flag And currency codes comply with ISO 4217 And the API returns totalTaxAmount and effectiveTaxRate fields alongside normalized costs
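Bankers' rounding with currency-aware precision is exactly what Python's decimal module provides; the zero-decimal currency set below is a partial, illustrative list:

```python
from decimal import Decimal, ROUND_HALF_EVEN

ZERO_DECIMAL = {"JPY", "KRW", "VND"}  # illustrative subset of ISO 4217 zero-decimal currencies

def normalize_amount(amount: str, fx_rate: str, currency: str) -> Decimal:
    """Convert at the FX rate and apply bankers' rounding:
    2 decimals for most currencies, 0 for zero-decimal ones."""
    quantum = Decimal("1") if currency in ZERO_DECIMAL else Decimal("0.01")
    return (Decimal(amount) * Decimal(fx_rate)).quantize(
        quantum, rounding=ROUND_HALF_EVEN)
```

Amounts and rates enter as strings deliberately: constructing `Decimal` from floats would bake in binary representation error before the rounding rule ever runs.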
Discounts and Fees Application Rules
Given a base cost, a provider discount schedule (e.g., 10%), a promo code (e.g., $20 off with $100 min), and a platform service fee (e.g., 5% with $3 min and $15 max) When calculating the final cost range Then the stacking order is base -> provider discount -> promo -> service fee -> taxes And caps and floors are enforced per rule definition And negative totals are prevented (minimum $0) And discounts respect effective windows and scopes (provider, MLS, ZIP) And the computed totals match a reference calculation within $0.01 And the API returns a per-component breakdown (base, discounts, fees, taxes)
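A sketch of the stacking order with caps, floors, and the $0 minimum; the parameter names mirror the worked examples above but are not a real rule schema:

```python
def final_cost(base: float, provider_disc_pct: float,
               promo_amount: float, promo_min: float,
               fee_pct: float, fee_min: float, fee_max: float,
               tax_rate: float) -> dict:
    """Apply base -> provider discount -> promo -> service fee -> taxes,
    enforcing the fee cap/floor and preventing negative totals."""
    after_disc = base * (1 - provider_disc_pct)
    after_promo = after_disc - (promo_amount if after_disc >= promo_min else 0.0)
    after_promo = max(after_promo, 0.0)                     # $0 floor
    fee = min(max(after_promo * fee_pct, fee_min), fee_max)  # floor then cap
    subtotal = after_promo + fee
    tax = subtotal * tax_rate
    return {"base": base, "discount": base - after_disc,
            "promo": after_disc - after_promo, "fee": round(fee, 2),
            "tax": round(tax, 2), "total": round(subtotal + tax, 2)}
```

Returning the per-component breakdown matches the API requirement above and doubles as the reference calculation for the ±$0.01 check.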
Manual Override with Permissions & Audit
Given a user with the PricingAdmin role and MFA enabled When they create a manual override for taskId T scoped to MLS M and ZIP Z with an expiry date Then the override is saved with scope, reason, createdBy, createdAt, and expiresAt And an immutable audit log entry is captured with before/after values, userId, timestamp, and IP address And non-admin users receive 403 on create/update/delete override attempts And overrides take precedence over baselines until expiry And expired overrides auto-revert to the latest baseline and are marked inactive in history And API responses include source="override" and overrideId when an override is applied
Scheduled Provider Refresh & Fallback
Given providers A and B configured with daily refresh at 02:00 local data center time When the scheduled job runs Then successful fetches update baselines and version history per scope And transient failures trigger up to 3 retries with exponential backoff And persistent failures retain last-known-good data and raise alerts via email/Slack within 5 minutes And the end-to-end refresh completes within 60 minutes And job metrics (duration, providers succeeded/failed, rows processed) are recorded and viewable in admin tools
Cost API Contract for ROI Gauge & Proposal Builder
Given a client requests GET /costs?taskId=T&mlsId=M&zip=Z&currency=USD When parameters are valid and data exists Then the API returns 200 within 300ms p95 under 100 RPS with JSON containing taskId, sku, source, currency, minCost, maxCost, medianCost, taxes, fees, discounts, effectiveVersion, effectiveAt, and expiresAt And responses include an ETag header and If-None-Match results in 304 when unchanged And invalid parameters return 400 with a machine-readable error code and message And missing coverage returns 404 with guidance to available scopes And CORS allows origins used by ROI Gauge and the proposal builder And additions to the response are backwards-compatible with no breaking changes to existing keys
Payoff Prediction Engine
"As a broker-owner, I want objective payoff predictions for each task so that I can approve spend based on data rather than anecdotes."
Description

Predicts expected showing lift and days-on-market saved per task and per bundle using historical listing performance, feedback signals, seasonality, and comparable inventory. Produces point estimates with confidence intervals, segmented by price band and neighborhood, with cold-start heuristics for low-data markets. Provides a versioned model registry, automated retraining, and a secure API consumed by the ROI Gauge and scenario planner.

Acceptance Criteria
Prediction API: Task-Level Outputs
Given a valid request with listing_features, market_id, task_id, price, neighborhood, and as_of_date When POST /v1/predictions/tasks is called Then the response is 200 OK with JSON containing showing_lift.point, showing_lift.ci95_low, showing_lift.ci95_high, dom_saved.point, dom_saved.ci95_low, dom_saved.ci95_high, and model_version And numerical fields are finite numbers with showing_lift.point between -1.0 and 3.0 (i.e., -100% to +300%) and dom_saved.point between -30 and 60 days And p95 latency is <= 500 ms for requests with <= 10 task_ids And missing required fields returns 422 with a descriptive error per field And unknown task_id returns 404 And responses include request_id for traceability
Bundle Predictions: Joint Effect Computation
Given a valid request with listing_features, market_id, bundle_id, and an array of task_ids (size 2–10) When POST /v1/predictions/bundles is called Then the response is 200 OK with JSON containing combined.showing_lift, combined.dom_saved, and per_task_contributions[] that sum to the combined point estimates within ±1e-6 And the response includes interaction_assumption set to "joint" And combined point estimates are >= max of individual task point estimates and <= sum of individual point estimates And p95 latency is <= 800 ms for bundles up to 10 tasks And invalid bundles (duplicate or unknown task_ids) return 422 with details
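One combination rule that satisfies both bounds (at least the best single task, at most the plain sum) is geometric damping of the sorted per-task estimates; the damping factor is an assumption, not a documented parameter:

```python
def combine_bundle(task_points: list[float], damping: float = 0.7) -> float:
    """Diminishing-returns combination of per-task point estimates.
    Clamped so combined >= max(individual) and <= sum(individual),
    as the bundle criterion requires."""
    ordered = sorted(task_points, reverse=True)
    combined = 0.0
    for i, p in enumerate(ordered):
        combined += p * (damping ** i)  # each additional task contributes less
    best, total = ordered[0], sum(ordered)
    return min(max(combined, best), total)
```

The same damped weights, renormalized, could supply the per_task_contributions[] that must sum to the combined point estimate.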
Segmented Outputs by Price Band and Neighborhood
Given a request specifying segment_by=["price_band","neighborhood"] and valid listing_features When POST /v1/predictions/tasks is called Then the response includes an array of segments each with keys price_band, neighborhood, showing_lift.{point,ci95_low,ci95_high}, dom_saved.{point,ci95_low,ci95_high} And segments with support_count >= 50 return model-based estimates; segments with support_count < 50 return fallback estimates with fallback_level in {neighbor_price_band, market, region, national} And all returned segments include support_count and fallback_level And no segment returns empty/null estimates; CI widths for fallback segments are >= 1.5x the median CI width of supported segments And the API returns 400 if segment_by contains unsupported dimensions
Cold-Start Heuristics for Low-Data Markets
Given market_id with < 50 comparable listings in the last 90 days When any prediction endpoint is called Then the response includes method="heuristic" and data_regime="cold_start" And estimates are produced using region-level priors with CI widths at least 2.0x the warm-start median for the same price band And no request fails with 5xx due to insufficient data And a telemetry event cold_start_used=true is emitted with market_id and model_version
Model Registry and Versioning
Given a completed training run When the training pipeline finishes Then a new immutable model entry is registered with semantic version MAJOR.MINOR.PATCH, training_window_start/end, feature_schema_hash, code_commit_sha, hyperparameters, and evaluation_metrics And GET /v1/models/{version} returns the exact metadata and artifact checksums And predictions include model_version matching a registry entry And attempting to modify an existing version returns 409 Conflict And registry retains at least the last 20 versions and supports GET /v1/models?status=promoted|staging
Automated Retraining and Promotion Guardrails
Given scheduled retraining configured as a weekly cron When a new model is trained Then backtests on holdout data achieve MAPE_showing_lift <= 25% and MAE_dom_saved <= 4.0 days and coverage of CI95 between 92% and 98% And only if thresholds pass is the model auto-promoted to promoted status; otherwise it remains in staging and alerts are sent to #ml-ops and on-call PagerDuty And data drift detection (PSI > 0.2 for any key feature) triggers an unscheduled retrain within 24 hours And promotion writes an audit log entry with approver (service) and timestamp
Secure API Access and Compliance
Given a client with valid OAuth2 client_credentials and scope roigauge:predict When calling any /v1/predictions endpoint over TLS Then the connection negotiates TLS >= 1.2 and returns 200 for authorized clients and 401/403 for invalid/missing credentials And PII (owner name, phone, email) is neither accepted nor logged; requests are rejected with 422 if such fields are present And per-client rate limiting is enforced at 100 RPS with p99 latency <= 1s under limit and 429 when exceeded And access logs include request_id, client_id, model_version, and omit payload bodies
ROI Gauge UX & Inline Surfacing
"As a listing agent, I want an at-a-glance ROI gauge inside my workflow so that I can quickly decide which tasks to include."
Description

Renders a compact, responsive gauge for each auto-task showing cost range, predicted showing lift, expected days-on-market saved, and net ROI with color-coded confidence. Embeds within Auto-Tasks, Proposal Builder, and Seller Report views; supports keyboard navigation, mobile layouts, and tooltip explanations. Provides click-through to methodology, confidence intervals, and assumptions, and flags low-confidence markets with clear messaging.

Acceptance Criteria
Gauge Renders With Required Metrics
Given a valid auto-task with cost range, predicted showing lift, expected days-on-market saved, net ROI, and confidence inputs When the ROI Gauge is rendered Then the gauge displays cost as a locale-formatted currency range (min–max) And displays predicted showing lift as a signed percentage with one decimal And displays expected days-on-market saved as an integer day count And displays net ROI as a signed percentage with one decimal And confidence is color-coded: ≥80% green, 50–79% amber, <50% red And negative ROI values render with a leading minus and red emphasis And missing inputs show "—" placeholders with an accessible tooltip explaining missing data
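The display rules above reduce to small pure functions that are easy to unit-test; a sketch (names are illustrative, not the product's actual code):

```python
def confidence_color(confidence_pct):
    """Color band per the criterion: >=80 green, 50-79 amber, <50 red."""
    if confidence_pct >= 80:
        return "green"
    if confidence_pct >= 50:
        return "amber"
    return "red"

def format_signed_pct(value):
    """Signed percentage with one decimal, for lift and net ROI."""
    return f"{value:+.1f}%"
```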
Inline Embedding Across Core Views
Given the Auto-Tasks, Proposal Builder, and Seller Report views When each view loads with at least one auto-task card present Then the ROI Gauge appears inline within each card without horizontal scroll at viewport widths ≥320px And each instance inherits the view’s typography and spacing tokens And gauge rendering is asynchronous and does not increase page LCP by more than 100ms compared to baseline And each gauge instance includes a unique DOM data-id attribute for analytics
Keyboard Navigation and Screen Reader Support
Given a keyboard-only user When tabbing through the auto-task card Then the ROI Gauge is reachable after the card title and before the primary action And all interactive elements (tooltip trigger, methodology link) are focusable And Enter/Space toggles the tooltip, Enter activates the methodology link And Esc closes any open tooltip or modal and returns focus to the trigger And there are no focus traps. Given a screen reader user When the gauge is announced Then the gauge has role="group" with accessible name "ROI gauge" And ARIA labels announce each metric with units and confidence as text And color is never the sole means of conveying information
Tooltips Provide Inline Explanations
Given a user hovers, focuses, or taps the info icon next to each metric When the tooltip opens Then the tooltip contains a ≤160-character explanation of how that metric is calculated And tooltips are positioned within viewport with collision handling And tooltips dismiss on Esc, click/tap outside, or moving focus And on mobile, tooltips open on tap and close on second tap or backdrop tap
Click-through to Methodology and Confidence Details
Given a user activates the "Methodology & assumptions" link from the gauge When the destination opens Then it displays model version, data sources, key assumptions, and 95% confidence intervals for showing lift and days-on-market saved And the destination is reachable via keyboard and has a unique URL for deep-linking And returning from the destination restores focus to the originating gauge And an analytics event "roi_gauge_methodology_viewed" fires with the gauge instance ID
Low-Confidence Market Flagging
Given the market confidence score for the listing is <50% When the ROI Gauge renders Then a visible warning appears within the gauge stating "Low confidence for this market—interpret cautiously" And the net ROI value is visually de-emphasized and accompanied by a warning icon with aria-label And the Proposal Builder export includes the same warning text And the Seller Report view uses neutral, seller-friendly wording without internal jargon
Responsive & Mobile Layout
Given viewport widths of 320px, 375px, 768px, and ≥1024px When rendering the gauge Then the component fits its container without horizontal scroll And at ≤375px the gauge stacks metrics vertically with a maximum of two lines per metric And touch targets are ≥44×44 dp and body text ≥12pt with ≥4.5:1 contrast And the gauge renders within 250ms after data is available on a mid-tier mobile device
Scenario Comparison & Budget Planner
"As a listing agent, I want to compare different task bundles against a budget so that I can choose the highest-impact plan for my seller."
Description

Enables users to toggle multiple tasks, set a budget cap, and compare projected outcomes, accounting for diminishing returns and task interactions. Recommends an optimal stack via constrained optimization, displays marginal ROI per addition, and allows saving, sharing, and exporting the selected plan back to the auto-task queue.

Acceptance Criteria
Real-Time Task Toggle Outcome Update
- Given a loaded listing baseline and at least five candidate tasks are visible When the user toggles any task on or off Then projected total cost, showing lift, and days-on-market saved recalculate within 1 second and reflect the current selection
- And Then each metric delta equals the model-computed marginal impact for that task given the current stack within rounding tolerance (±0.1 showings, ±0.1 days, ±$1)
- And Then a recalculation timestamp updates to the current time
Budget Cap Enforcement and Guidance
- Given the user sets a budget cap of $X When the cumulative selected task cost exceeds $X Then the UI displays an over-budget warning with the overage amount to the dollar and disables Save and Export actions
- When the user clicks "Optimize to Cap" Then a suggested plan is produced with total cost ≤ $X and the over-budget state clears
- When the cumulative cost is ≤ $X Then Save and Export actions are enabled
Diminishing Returns Modeling for Overlapping Tasks
- Given two or more tasks classified within the same impact category (e.g., exposure lift) When tasks are added sequentially Then the marginal improvement in the selected objective for the k-th task is ≤ the marginal improvement for the (k−1)-th task, for all k ≥ 2
- And Then the displayed marginal ROI per dollar for each additional overlapping task is non-increasing
- When any task is removed from the stack Then the marginal impacts of remaining tasks are recalculated to reflect the updated stack
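One way to satisfy the non-increasing marginal rule is an exponential discount applied by stack depth within a category; a sketch under that assumption (the decay constant and function name are illustrative, not the product's actual model):

```python
import math

def stack_impact(effects, decay=0.35):
    """Cumulative impact of overlapping same-category tasks.

    Tasks are applied strongest-first and each deeper position is
    discounted exponentially, so marginal gains never increase.
    Returns (total, per-task marginal gains in applied order)."""
    total, marginals = 0.0, []
    for depth, effect in enumerate(sorted(effects, reverse=True)):
        gain = effect * math.exp(-decay * depth)
        marginals.append(gain)
        total += gain
    return total, marginals
```

Removing a task and re-running the function naturally recomputes the remaining marginals against the updated stack.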
Interaction Effects Visibility and Calculation
- Given a defined interaction exists between Task A and Task B When both Task A and Task B are selected Then the combined projection equals baseline + effect(A) + effect(B) + interactionTerm within rounding tolerance and an Interaction row displays the signed interaction value
- And Then an info tooltip describes the interaction and references the model/dataset version
- When only one of the interacting tasks is selected Then no interaction row is displayed
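The interaction rule can be pinned down in a few lines (the data shapes here are assumptions for illustration):

```python
def project_outcomes(baseline, effects, interactions):
    """effects: {task: delta} for the currently selected tasks.
    interactions: {(task_a, task_b): signed term}, applied (and surfaced
    as an Interaction row) only when both endpoints are selected."""
    total = baseline + sum(effects.values())
    rows = []
    for pair, term in interactions.items():
        a, b = pair
        if a in effects and b in effects:
            total += term
            rows.append((pair, term))
    return total, rows
```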
Constrained Optimization Recommendation
- Given a budget cap and a set of candidate tasks with cost ranges and modeled effects When the user clicks "Recommend Plan" Then within 3 seconds the system returns a plan that maximizes the selected objective (Showings Lift or Days Saved) subject to total cost ≤ cap, honoring task dependencies and mutual exclusions
- And Then ties are broken by greater objective gain, then lower total cost, then fewer tasks
- When no feasible plan exists under the cap Then a "No feasible plan" message displays with the minimum cost required for feasibility
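For the small task sets in scope, the "Recommend Plan" objective is a 0/1 knapsack; a brute-force sketch that also applies the stated tie-breakers (dependencies and mutual exclusions are omitted for brevity, and the data shapes are assumptions):

```python
from itertools import combinations

def recommend_plan(tasks, cap):
    """tasks: list of (name, cost, gain) tuples. Maximizes total gain
    subject to total cost <= cap, with ties broken by lower cost, then
    fewer tasks. Brute force is fine for the small stacks in scope."""
    best = None
    for r in range(len(tasks) + 1):
        for combo in combinations(tasks, r):
            cost = sum(c for _, c, _ in combo)
            if cost > cap:
                continue
            gain = sum(g for _, _, g in combo)
            key = (-gain, cost, len(combo))  # min() of this = stated tie-break order
            if best is None or key < best[0]:
                best = (key, combo)
    return list(best[1]) if best else None
```

Note that the empty plan is always feasible in this sketch; the "No feasible plan" path only arises once dependencies force a minimum spend, which is the part omitted here.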
Marginal ROI Display and Ordering
- Given a current stack (recommended or manual) When the user opens the Marginal ROI panel Then each task displays its marginal outcome delta and marginal value per dollar relative to its position in the stack, plus a cumulative totals row
- When the user reorders tasks via drag-and-drop Then marginal values recompute within 1 second and cumulative totals remain unchanged
- And Then units are clearly labeled (e.g., +0.8 showings/$100, −0.3 days/$100)
Save, Share, and Export Plan to Auto-Task Queue
- Given a valid plan within the budget cap When the user clicks Save Then the plan is persisted with name, cap, selected objective, ordered tasks, costs, projected outcomes, marginal ROI series, model version, and timestamp, and a unique plan ID is returned
- When the user clicks Share Link Then a view-only link is generated that renders the saved plan and expires by default after 30 days
- When the user clicks Export to Queue Then corresponding auto-tasks are created in the task queue in the same order with assigned budgets and task IDs, and a success confirmation shows the number created with any failures listed by task
Seller Approval & Shareable ROI Report
"As a listing agent, I want a shareable ROI report that captures approval so that I can get green lights faster and move to execution."
Description

Generates a seller-friendly summary with ROI gauges per task, plain-language rationale, total estimated cost, and projected impact, with one-click sharing via link, email, or PDF. Captures seller e-sign or broker approval, maintains an audit trail, and synchronizes approval status to each auto-task for execution and compliance.

Acceptance Criteria
Report Generation: ROI Gauges, Rationale, Cost, Impact
Given a listing with at least one configured auto-task that has a cost range and impact estimate, when the agent clicks "Generate Seller ROI Report", then the system produces a report that includes for each auto-task: an ROI gauge displaying the cost range, projected payoff (expected showing lift or days-on-market saved), and a plain-language rationale tied to the task. And the report displays total estimated cost as a range equal to the sum of minimum and maximum task costs across included tasks. And the report displays total projected impact with the aggregation method labeled (e.g., "Sum of DOM saved"), excluding any tasks missing impact from totals. And the report header includes listing address, MLS ID (if present), agent name, brokerage, and generation timestamp in the account timezone. And the report renders within 3 seconds for up to 50 tasks.
One-Click Sharing via Link, Email, and PDF
Given a generated report, when the agent selects "Copy Share Link", then the system creates a unique URL with at least 22 bytes of entropy, sets a default expiration of 14 days, and copies it to the clipboard. And opening the share link on mobile or desktop shows a read-only view with ROI gauges, totals, and rationale identical to the report content. When the agent selects "Email", then the system sends the report to specified recipients within 60 seconds and logs an "Email Sent" audit event with a hash of recipient addresses. When the agent selects "Download PDF", then a PDF matching on-screen content and pagination is produced within 10 seconds and the file size is ≤ 5 MB for up to 10 pages. And every share action (link, email, PDF) creates an audit entry with action, actor, timestamp, and channel.
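A share token meeting the 22-byte entropy floor can be minted with the standard library; a sketch in which the URL shape, domain, and helper name are placeholders, not the product's actual scheme:

```python
import secrets
from datetime import datetime, timedelta, timezone

def mint_share_link(report_id, ttl_days=14):
    """View-only share URL with >= 22 bytes of entropy and a default
    14-day expiry, per the criterion."""
    token = secrets.token_urlsafe(22)  # 22 random bytes -> 30 url-safe chars
    expires_at = datetime.now(timezone.utc) + timedelta(days=ttl_days)
    return f"https://example.invalid/reports/{report_id}?t={token}", expires_at
```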
Seller and Broker Approval via E‑Sign
Given a shared report, when a seller or broker clicks "Approve" and completes the e‑sign flow, then the system records signer full name, role (seller or broker), email, timestamp (UTC ISO 8601), IP address, and attaches a tamper‑evident certificate PDF to the report. And if multiple signers are configured, the report status remains "Pending Approval" until all required signers have completed; upon all signatures, status becomes "Approved". And if any signer declines, the report status becomes "Rejected" with a required decline reason captured and visible to the agent. And the signature verification endpoint returns 200 for valid signatures and 422 for altered or unverifiable documents.
Auto‑Task Status Synchronization on Approval
Given a report with N auto‑tasks where requireApproval=true, when the report status changes to "Approved", then each such auto‑task transitions to "Approved" within 60 seconds and becomes eligible for execution. And if the report status is "Rejected", all requireApproval tasks move to "Rejected" and are blocked from execution. And if approval is "Revoked" by the agent, all previously approved tasks revert to "Pending Approval" within 60 seconds. And any attempt to execute a not‑approved task returns a 403 error with message "Approval required". And each task status change creates a per‑task audit entry with task ID, previous status, new status, actor, and timestamp.
Immutable Audit Trail for Report Lifecycle
Given report lifecycle events occur (Generate, Edit, Share:Link, Share:Email, View, Download, Approve, Decline, Revoke, Expire, TaskSync), when any such action happens, then the system records an immutable audit entry with actor ID or "External Viewer", action, ISO 8601 UTC timestamp, channel, IP/User‑Agent for external actors, and related object IDs. And audit entries are append‑only; delete and update operations are disallowed at the storage layer and attempts return 405. And authorized users can export the last 1,000 entries to CSV and PDF within 10 seconds. And the audit UI lists entries in chronological order and supports filtering by action type and date range.
Data Completeness, Disclaimers, and Fallbacks
Given a report includes tasks with missing cost or impact, when the agent views the report, then affected tasks are labeled "Data Needed" and excluded from totals. And the report displays a disclaimer "Totals exclude X tasks missing data" where X equals the number of excluded tasks. And the Share actions are disabled until at least one task has both cost range and impact; attempting to share shows a validation message explaining the requirement. And currency, percentage, and number formats follow the listing locale; currency rounds to two decimals and showing counts are whole numbers.
Access Control and Share Link Security
Given a share link exists, when it is opened after its expiration time, then the viewer receives HTTP 410 Gone and cannot access the content. And revoking a link immediately invalidates it; subsequent access attempts receive HTTP 410 Gone. And when optional passcode protection is enabled, viewers must enter the correct passcode; five failed attempts lock access for 15 minutes. And email shares generate recipient‑specific tokens that limit access to the addressed recipient; mismatched tokens return HTTP 403. And share links are noindex and use long, unguessable tokens to prevent discovery by unauthenticated users or crawlers.
Prediction Validation & Backtesting
"As a product manager, I want ongoing validation of ROI predictions so that we maintain trust and improve accuracy over time."
Description

Instruments listings to measure actual versus predicted showing lift and DOM reduction by task, runs holdout and A/B tests where feasible, and computes calibration and error metrics by market and segment. Feeds results to monitoring dashboards and retraining pipelines, and adjusts UI confidence indicators based on recent accuracy to maintain trust.

Acceptance Criteria
Listing Instrumentation for Actual vs Predicted Outcomes
Given a listing has ROI Gauge predictions per auto-task, when showings are recorded and the listing closes or is withdrawn, then the system stores actual total showings and final DOM with timestamps and market/segment metadata. Given showing events arrive from multiple sources (calendar, QR scans, manual), when ingested, then they are deduplicated using deterministic keys and time-proximity rules (<15 minutes) to yield unique showings with a ≤1% duplicate rate. Given an auto-task execution timestamp, when attributing lift, then a configurable attribution window per task type (e.g., 72h default) is applied and persisted, and the applied window is visible in audit logs. Given new showing events within the attribution window, when predictions exist, then per-task records of predicted lift vs actual incremental showings and DOM delta are computed within 15 minutes of event arrival. Then data completeness SLO is met: ≥99% of showing events ingested and reconciled within 60 minutes; any exceptions are flagged with reasons and surfaced in data quality reports.
Backtesting and Holdout Evaluation by Market/Segment
Given ≥500 eligible historical listings per market/segment, when the nightly backtesting job runs, then it performs time-based rolling-origin evaluation with last 20% as holdout and 5-fold cross-validation on the remainder. Then metrics are computed per task and segment: MAE, RMSE for predicted showing lift; MAE for predicted DOM reduction; coverage of 80% prediction intervals; and stored with dataset/model version hashes. Given markets with sufficient volume, when assignment data allows, then A/B or blocked quasi-experiments are executed with power ≥0.8 for a standardized effect size d=0.2, and p-values and confidence intervals are logged. Then all backtest artifacts (configs, seeds, feature schema) are versioned and reproducible, with reruns matching prior metrics within tolerance (±1% relative).
Calibration and Error Monitoring Thresholds
Given ≥50 realized outcomes per segment in the last 30 days, when calibration is computed, then reliability curves and calibration slope/intercept are produced per task and segment and stored. Then Expected Calibration Error (ECE) ≤0.10 and 80% PI coverage within 75–85% for a segment are considered in-spec; else the segment is marked out-of-spec with an alert created. Given weekly metric trends, when MAE or MAPE increases by >20% week-over-week for any task in a segment, then an alert with segment, task, and trend details is sent to the monitoring channel within 1 hour of detection.
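Expected Calibration Error as referenced above, in a minimal equal-width-bin form (the production pipeline may bin differently):

```python
def expected_calibration_error(preds, outcomes, bins=10):
    """ECE with equal-width probability bins: the bin-size-weighted mean
    of |observed frequency - mean predicted probability| per bin.
    preds are probabilities in [0, 1]; outcomes are 0/1 realizations."""
    buckets = [[] for _ in range(bins)]
    for p, y in zip(preds, outcomes):
        buckets[min(int(p * bins), bins - 1)].append((p, y))
    n, ece = len(preds), 0.0
    for bucket in buckets:
        if not bucket:
            continue
        confidence = sum(p for p, _ in bucket) / len(bucket)
        accuracy = sum(y for _, y in bucket) / len(bucket)
        ece += len(bucket) / n * abs(accuracy - confidence)
    return ece
```

A segment whose 90%-confident predictions come true 90% of the time scores near zero; the 0.10 in-spec threshold flags segments whose stated confidence drifts from reality.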
Accuracy Monitoring Dashboard & Drilldowns
Given an authenticated broker-owner or listing agent, when they open Monitoring → Prediction Accuracy, then they can filter by market, segment, task, and time window, and see MAE, RMSE, MAPE, PI coverage, and calibration slope with 7/30/90-day trends. Then dashboard tiles render within ≤3 seconds for datasets up to 100k aggregated rows, and exports (CSV) complete within ≤30 seconds for the selected filter. Given a metric tile, when clicked, then a drill-down view shows listing-level residuals, attribution windows, and links to audit logs for any out-of-spec segments. Then dashboard data freshness is ≤24 hours, with last-refresh timestamp displayed.
Dynamic UI Confidence Indicator Adjustment
Given segment-level recent accuracy (last 30 days), when rendering ROI Gauge for a listing in that segment, then the confidence badge maps to thresholds: High (MAPE ≤10% and PI coverage 75–85%), Medium (MAPE ≤20%), Low (else), and the mapping is unit-tested. Then a tooltip explains the basis (e.g., “Based on last 30 days accuracy in [segment]”) and links to Monitoring. Given insufficient data (<30 outcomes) for a segment, when rendering, then the badge shows “Limited data” and the UI uses widened intervals (+50% of nominal width) with a grey state. Then UI updates the badge within ≤200 ms of loading the ROI Gauge component and logs the displayed confidence state.
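The badge mapping is unit-testable as a pure function; a sketch matching the stated thresholds (the signature is illustrative):

```python
def confidence_badge(mape, pi_coverage, n_outcomes):
    """Badge from last-30-day segment accuracy, per the thresholds above.
    mape and pi_coverage are fractions (0.10 == 10%)."""
    if n_outcomes < 30:
        return "limited_data"
    if mape <= 0.10 and 0.75 <= pi_coverage <= 0.85:
        return "high"
    if mape <= 0.20:
        return "medium"
    return "low"
```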
Automated Retraining and Safe Deployment Gate
Given rolling 30-day performance, when MAPE exceeds the segment threshold for 2 consecutive weeks or alerts persist for >7 days, then a retraining job is enqueued with captured feature schema and training window parameters. Then the candidate model is evaluated on a time-held-out set; promotion requires statistically significant improvement (e.g., MAE reduction ≥5% with p<0.05) and non-degraded PI coverage; else it is rejected. Given promotion, when deployed, then the model registry increments version, changelog is created, and a canary rollout (≥10% traffic, 24h) is executed with automatic rollback if canary MAE degrades by ≥10% vs control. Then deployment status and new version are reflected in Monitoring and in ROI Gauge metadata within 1 hour.
Data Quality, Attribution Integrity, and Auditability
Given event ingestion, when duplicate payloads arrive, then idempotency keys prevent creating multiple showing events, ensuring a duplicate rate ≤1% as measured weekly. Given overlapping auto-tasks within attribution windows, when computing lift, then a predefined multi-touch attribution rule (e.g., Shapley or proportional) or exclusion rule is applied consistently and logged per record. Then records with missing critical fields (market, task type, timestamp) are rejected with errors surfaced; predictions associated with data-quality score <0.9 are excluded from backtests. Given any metric on a dashboard, when an auditor follows the audit link, then they can trace back to raw events, transformations, model version, and configuration used to produce the metric.

Vendor Match

Surfaces pre-vetted vendors matched to the objection type, location, and budget. Enables one‑click outreach and hold‑the‑slot scheduling from within the task to shorten time from decision to execution.

Requirements

Objection-to-Vendor Mapping Engine
"As a listing agent, I want clear vendor suggestions mapped to each buyer objection so that I can act immediately without researching categories or options."
Description

Establish a configurable rules engine that maps AI-summarized, room-level objections (e.g., scuffed paint, outdated lighting, pet odor, HVAC noise) to vendor categories and sub-specialties based on objection type, property characteristics, location, and budget inputs. Provide an admin-maintained taxonomy and mapping UI, default mappings, and safe fallbacks (e.g., general handyman) when specialty supply is thin. Integrate directly with the objection summary task so suggestions render inline without navigation, and support multi-objection tasks by recommending a bundled vendor set. Expected outcome: instant, relevant vendor suggestions that transform objection insights into actionable next steps, reducing agent decision friction and time-to-action.

Acceptance Criteria
Map Objection to Vendor Category and Sub-specialty
- Given a task containing a single objection with fields {type, room, severity} and listing inputs {propertyType, sqft, location (ZIP, state), budget}, When the mapping engine runs, Then it returns exactly 1 primary vendor category and 1 sub-specialty mapped to that objection.
- Rule: The selected mapping must be derived from the latest published rule set and include a confidence score; Pass if confidence >= 0.70; Fail otherwise.
- Rule: The mapping must respect budget by selecting sub-specialties whose median job cost is within ±10% of the input budget; if budget is unset, use the default cost band for the ZIP.
- Rule: The mapping output includes a rationale string (max 240 chars) listing the rule IDs that fired.
Admin Manages Taxonomy and Rules
- Given an Org Admin on the Taxonomy UI, When they create/edit/delete a category or sub-specialty, Then the change is validated (unique name per org, unique slug, non-empty), saved, and appears in Draft without impacting live matches.
- Rule: Only Org Admins can create/update/delete; Agents have read-only access; unauthorized actions return 403 with error code TAXONOMY_FORBIDDEN.
- Rule: Publishing creates a new version (version increments by 1), locks prior versions from edits, and live matching uses the latest Published version within 60 seconds.
- Rule: All changes write to an immutable audit log {userId, timestamp, action, entityId, before, after} retrievable by date range.
- Rule: A built-in test console allows saving at least 5 test cases; Publish is blocked until all saved tests pass against the Draft version.
Default Mappings and Organization Overrides
- Given a new organization, When the tenant is provisioned, Then default taxonomy and mapping rules are installed as Published with version tag "default-v1".
- Rule: Precedence is Org Override > Default; on conflict, the more specific scope (sub-specialty > category) wins deterministically.
- Rule: Admin can revert any overridden item to default via "Revert to Default"; the action removes the override and re-links to the default item ID; success is confirmed via toast and audit entry.
- Rule: Import/Export supports JSON schema v1; invalid imports are rejected with a validation report enumerating all errors; no partial writes occur.
- Rule: After publishing overrides, new matches reflect the change within 60 seconds; the cache warms automatically without manual intervention.
Safe Fallbacks Under Thin Supply or No Rule Match
- Given a listing ZIP and budget, When evaluation yields fewer than 2 viable sub-specialties with at least 1 available vendor within 25 miles and budget fit within ±15%, Then suggest fallback category "General Handyman" (or "General Contractor" if handyman absent) and set reason="thin_supply".
- Rule: If no rule matches the objection type/room, use the parent category fallback chain within 300 ms and set reason="no_rule_match".
- Rule: Fallback suggestions must never be empty; if no vendors are available, show a "Concierge Assist" CTA and set reason="no_vendor_found"; Pass if the CTA renders and the action logs an assist request.
- Rule: The UI displays a "Fallback applied" badge with a tooltip explaining the reason string; tooltip text is present and not empty.
Inline Suggestions in Objection Summary Task
- Given an agent opens an objection summary task, When the task loads, Then vendor suggestions render inline beneath each objection without navigation and without a full-page reload.
- Rule: P95 time from task open to first suggestion render is ≤ 700 ms for cached mappings and ≤ 1200 ms for cold mappings; telemetry is captured per event.
- Rule: Each suggestion shows {category, sub-specialty, top 3 vendors, distance, budget-fit indicator, confidence score, reason}; none of these fields are null/blank.
- Rule: Editing budget or location in-place re-runs mapping and refreshes suggestions within 800 ms; stale suggestions are skeletonized until refresh completes; no console errors occur.
- Accessibility: Interactive elements meet WCAG 2.1 AA (focus order correct, ARIA labels present, color contrast ≥ 4.5:1).
Multi-Objection Bundled Vendor Set
- Given a task with multiple objections across rooms, When mapping runs, Then the engine produces a bundled vendor set that covers all objections with the minimal number of vendors, preferring multi-trade vendors when available.
- Rule: Duplicate categories/sub-specialties are deduplicated; per-vendor capacity is respected (default max 3 concurrent jobs/day unless vendor metadata specifies otherwise).
- Rule: The bundle includes a combined estimate range and aggregated time-to-complete computed from vendor metadata; values display in the task and are never blank.
- Rule: The UI provides "Contact All" and per-vendor actions; "Contact All" generates a pre-filled combined scope without error; if any vendor API times out, a draft outreach is generated and flagged without blocking others.
- Performance: P95 time to compute a bundle for up to 10 objections is ≤ 1500 ms.
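Minimal-vendor coverage is a set-cover problem; a greedy sketch that naturally prefers multi-trade vendors (data shapes are assumed; capacity limits and estimate aggregation are omitted):

```python
def bundle_vendors(objection_categories, vendors):
    """objection_categories: set of category strings.
    vendors: {vendor_name: set of categories served}.
    Greedy set cover: repeatedly take the vendor covering the most
    still-uncovered categories, so multi-trade vendors win naturally.
    Returns (bundle, categories no vendor can serve)."""
    remaining, bundle = set(objection_categories), []
    while remaining:
        name, cats = max(vendors.items(), key=lambda kv: len(kv[1] & remaining))
        covered = cats & remaining
        if not covered:
            break  # no vendor serves what's left
        bundle.append(name)
        remaining -= covered
    return bundle, remaining
```

Greedy set cover is not always optimal, but its approximation is well understood and its behavior is easy to explain in the UI.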
Vendor Vetting & Directory Sync
"As a broker-owner, I want only vetted, compliant vendors to surface to my agents so that client experience and liability are controlled."
Description

Create a centralized, pre-vetted vendor directory with fields for licensure, insurance, service area polygons, specialties, minimum job size, pricing bands, availability windows, response SLAs, references, and ratings. Implement ingestion via API/webhooks/CSV from broker CRMs and third-party marketplaces, plus a built-in vetting workflow (document capture, approvals, expirations, and reminders). Deduplicate vendors across sources, maintain status (Active, Probation, Suspended), and expose only compliant vendors to matching. Expected outcome: a trustworthy vendor pool that protects brand risk while ensuring coverage in target markets and budgets.

Acceptance Criteria
CRM/API/Webhook Ingestion and Field Mapping
Given a signed POST to /vendors/intake with an external_id and payload containing name, contact info, licensure, insurance, service_area (GeoJSON), specialties, minimum_job_size, pricing_bands, availability_windows, response_sla, references, and ratings When the payload is received Then the system upserts a single vendor record, persists all provided fields with correct data types, sets source and last_synced_at, and returns 201 Created with vendor_id on create or 200 OK on update And repeated requests with the same external_id and idempotency key do not create duplicates and only apply changes if the payload differs And invalid payloads missing name or contact (email or phone) are rejected with 400 and machine-readable error details And unknown fields are ignored without error and listed in a warnings array And P95 processing time per payload is <= 3 seconds
CSV Import with Mapping, Validation, and Partial Import
Given an admin uploads a CSV (<= 10,000 rows, UTF-8) and maps columns to vendor fields including name and contact (email or phone) When the import runs Then rows with required fields are ingested and upserted, preserving all mapped attributes (licensure, insurance, service_area, specialties, minimum_job_size, pricing_bands, availability_windows, response_sla, references, ratings) And rows failing validation are skipped without blocking the import and are listed in a downloadable error report with row numbers and reasons And a summary shows total_rows, imported_rows, skipped_rows, created_count, updated_count, and duration And deduplication rules are applied during import And P95 time to import 10,000 valid rows is <= 2 minutes
Third-Party Marketplace API Pull and Delta Sync
Given a connected marketplace integration authorized via OAuth2 with a stored refresh token When the hourly delta sync executes Then vendors created or updated since last_synced_at are fetched, normalized, and upserted into the directory And the sync is idempotent using provider_external_id to prevent duplicates And rate limits are honored and retries use exponential backoff with jitter up to 3 attempts And failures are logged with provider name, endpoint, and correlation_id, and surfaced in an admin sync report And last_synced_at is updated only on successful completion
Cross-Source Deduplication and Merge Rules
Given an incoming vendor that potentially matches an existing record When deduplication evaluates candidates Then a definite match is declared if any of the following are true: (license_number + issuing_state exact match) OR (tax_id exact match) OR (email exact match) OR (phone exact match AND normalized_legal_name similarity >= 0.90) And definite matches are auto-merged under a single vendor_id, preserving provenance per field and keeping the most recent non-expired documents And specialties, service areas, references, and contact methods are unioned without loss; ratings are re-aggregated deterministically And if best-match similarity is >= 0.70 and < 0.90 without a hard identifier, a potential duplicate is flagged for manual review and not auto-merged And no two Active vendor records may share the same (license_number + issuing_state); violations are blocked with a 409 Conflict And a merge audit log records old_ids, survivor_id, fields updated, and timestamps
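The definite/potential match rules could be sketched as follows, using `difflib.SequenceMatcher.ratio()` as a stand-in for whatever normalized-name similarity metric is actually chosen (behavior above 0.90 name similarity without a hard identifier is left to that choice):

```python
from difflib import SequenceMatcher

def match_decision(a, b):
    """Classify two vendor records (dicts; fields may be absent) as
    'merge' (definite duplicate), 'review' (potential duplicate),
    or 'new', mirroring the rules above."""
    def same(field):
        return bool(a.get(field)) and a.get(field) == b.get(field)
    name_sim = SequenceMatcher(
        None,
        (a.get("legal_name") or "").lower(),
        (b.get("legal_name") or "").lower(),
    ).ratio()
    if (same("license_number") and same("issuing_state")) or same("tax_id") \
            or same("email") or (same("phone") and name_sim >= 0.90):
        return "merge"
    if name_sim >= 0.70:
        return "review"
    return "new"
```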
Vetting Workflow: Document Capture, Approval, Expiration, and Reminders
Given a vendor uploads licensure and insurance documents with issue and expiration dates When an authorized reviewer approves both document types Then the vendor compliance checklist is marked complete and the vendor is eligible for Active status And reminders are sent to vendor and broker contacts at 30, 7, and 1 days before any document expiration And upon document expiration, the vendor is auto-transitioned to Suspended within 15 minutes unless renewed and re-approved And all actions (upload, approval, rejection, reminder sent, status change) are captured with actor, timestamp, and reason in an immutable audit trail And rejected documents require a reason and keep the vendor non-compliant until resubmission and approval
Status Model and Compliance Gating for Matching
Given a vendor directory record with status in {Active, Probation, Suspended} When the matching service queries for vendors for a listing Then only vendors with status Active and with non-expired licensure and insurance are eligible And the vendor’s service area polygon must contain the listing location And the vendor’s minimum_job_size must be <= the listing’s estimated budget and pricing_bands must overlap that budget range And vendors in Probation or Suspended are excluded with reason codes provided in the response And any status or compliance change is reflected in matching eligibility within 60 seconds
Service Area Polygon Validation and Geo-Matching
Given a vendor submits a service area geometry in GeoJSON or WKT When the geometry is validated Then only Polygon/MultiPolygon in EPSG:4326 with non-self-intersecting rings and valid winding are accepted; invalid geometries are rejected with a 400 and specific errors And the geometry is stored with a configurable simplification tolerance that preserves topology Given a listing with resolved latitude/longitude When the containment check runs Then the result is true if the point lies within the vendor polygon (including boundaries); otherwise false And P95 point-in-polygon evaluation time is <= 100 ms for polygons up to 5,000 vertices
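The boundary-inclusive containment check can be illustrated with plain even-odd ray casting. This is a sketch under assumptions — a single closed ring of (lon, lat) pairs, no holes, and a simple collinearity test for the boundary case — not a substitute for a production geometry library.

```python
def point_in_polygon(lon: float, lat: float, ring: list) -> bool:
    """Even-odd ray casting over one closed ring [(lon, lat), ...].
    Boundary points count as inside, matching the spec's inclusive check."""
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        # Boundary: point collinear with the edge and inside its bounding box.
        if (min(x1, x2) <= lon <= max(x1, x2)
                and min(y1, y2) <= lat <= max(y1, y2)
                and abs((x2 - x1) * (lat - y1) - (y2 - y1) * (lon - x1)) < 1e-12):
            return True
        # Does this edge cross the horizontal ray cast east from the point?
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > lon:
                inside = not inside
    return inside
```

A real service would add MultiPolygon and hole handling and likely a spatial index to hit the P95 latency target on 5,000-vertex polygons.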
Multi-Factor Matching & Scoring
"As a listing agent, I want the best-fit vendors prioritized for my specific property and budget so that outreach leads to fast acceptances and fixes."
Description

Implement a scoring model that ranks vendors per task using objection-to-category fit, travel time/distance to property, budget alignment, current availability, historical performance (accept rate, completion time, quality), and brokerage preferences. Provide transparent "Why this vendor" rationales and tunable weights via feature flags for experimentation. Handle edge cases (no exact matches) with graceful degradation (nearby categories or broader radius) and guardrails (budget ceiling, compliance). Expected outcome: top-N suggestions that consistently convert to booked work with minimal manual curation.

Acceptance Criteria
Top‑N Vendor Ranking by Weighted Multi‑Factor Score
Given a task with objection category, property location, budget ceiling, requested time window, and brokerage preferences, and a vendor catalog with category coverage, service radius, price bands, live availability, and historical metrics, and a feature flag configuration with factor weights that sum to 1.0 When the scoring service is invoked for top N vendors Then each eligible vendor receives a normalized score in [0,1] combining: category fit, travel time/distance, budget alignment, current availability, historical performance, and brokerage preferences; and results are sorted by descending score with deterministic tie‑breakers (higher accept rate, then lower travel time, then vendor_id ascending); and exactly N vendors are returned when at least N eligible exist (else return all eligible); and each result includes total score, rank, and factor subscores; and P95 scoring latency is ≤ 500 ms for ≤ 100 candidates
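A minimal version of the weighted combination and the deterministic tie-break order might look like the following. The input shape (`subscores`, `accept_rate`, `travel_minutes`) is a hypothetical representation, and subscores are assumed to arrive pre-normalized to [0, 1].

```python
def rank_vendors(candidates: list, weights: dict, n: int) -> list:
    """Score each candidate as a weighted sum of normalized subscores,
    then sort with the spec's deterministic tie-breakers."""
    results = []
    for c in candidates:
        subscores = {f: c["subscores"][f] for f in weights}
        total = sum(weights[f] * s for f, s in subscores.items())
        results.append({
            "vendor_id": c["vendor_id"],
            "score": round(total, 6),
            "subscores": subscores,
            "accept_rate": c["accept_rate"],
            "travel_minutes": c["travel_minutes"],
        })
    # Descending score; ties broken by higher accept rate, then lower
    # travel time, then ascending vendor_id for full determinism.
    results.sort(key=lambda r: (-r["score"], -r["accept_rate"],
                                r["travel_minutes"], r["vendor_id"]))
    top = results[:n]
    for rank, r in enumerate(top, start=1):
        r["rank"] = rank
    return top
```

Returning the factor subscores alongside the total is what makes the "Why this vendor" rationale reconcilable with the computed score.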
Explainable 'Why This Vendor' Rationale
Given a ranked vendor result is viewed in the UI or retrieved via API When the user opens the rationale or reads the rationale field Then the response shows the top 3 contributing factors with human‑readable labels and values (e.g., “7 min away”, “$180 under budget”, “High accept rate 92%”); and an expandable/full breakdown lists all factor contributions and their weights; and the contributions numerically reconcile with the computed score within ±0.01; and the payload includes weights_version and computation timestamp
Graceful Degradation When No Exact Category Match
Given no vendor exactly matches the task category while respecting guardrails When the system searches for alternatives Then it expands in stages: (1) include mapped nearby categories from the category adjacency table; (2) widen search radius by 5/10/20 miles up to a max of 50 miles; (3) relax budget alignment by up to +10% only if allow_budget_flex is true; and at every stage the budget ceiling and compliance rules remain enforced; and degraded matches are labeled with a degradation_reason; and if zero results after all stages, return 200 with an empty list plus a guidance message and link to request a vendor
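The staged expansion could be driven by an ordered list of fallback stages, with guardrails applied inside the search itself. Here `search` is an assumed callable standing in for the real candidate query, and the stage labels are illustrative `degradation_reason` values.

```python
def find_with_degradation(task: dict, search, allow_budget_flex: bool = False):
    """Try stages in order; return (matches, degradation_reason).
    `search(categories, radius_miles, budget_flex)` is an assumed callable
    that already enforces the budget ceiling and compliance guardrails."""
    widened = task["categories"] + task["adjacent"]
    stages = [
        ("exact", task["categories"], task["radius"], 0.0),
        ("adjacent_categories", widened, task["radius"], 0.0),
    ]
    for extra in (5, 10, 20):
        radius = min(task["radius"] + extra, 50)  # hard 50-mile cap
        stages.append((f"radius+{extra}mi", widened, radius, 0.0))
    if allow_budget_flex:
        stages.append(("budget_flex_10pct", widened,
                       min(task["radius"] + 20, 50), 0.10))
    for reason, cats, radius, flex in stages:
        matches = search(cats, radius, flex)
        if matches:
            degraded = None if reason == "exact" else reason
            return [dict(m, degradation_reason=degraded) for m in matches], degraded
    return [], "no_results"
```

An empty result after the final stage maps to the spec's 200-with-empty-list response plus the request-a-vendor guidance.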
Enforce Budget Ceiling and Compliance Preferences
Given brokerage preferences and compliance constraints are configured When candidates are evaluated for eligibility Then vendors on the denylist or outside permitted jurisdictions are excluded; and when preferred_only is true, only allowlist vendors are eligible; and any vendor whose minimum quote exceeds the task budget ceiling is excluded; and the API includes filtered_out_counts by reason; and outreach/scheduling attempts to excluded vendors are blocked with HTTP 403 and a descriptive error code
Tunable Weights and Experiment Buckets
Given feature flags define per‑brokerage weight sets and experiment buckets When a scoring request is processed Then the active weight set is loaded by brokerage_id and bucket, validated as numeric, normalized to sum 1.0, and falls back to default v0 on error; and configuration changes take effect within ≤ 60 seconds; and the response includes weights_version and bucket_id; and impression/click/outreach/booking events are emitted with weight and bucket metadata; and bucket assignment is sticky per listing_id for 30 days
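The validate-normalize-fallback behavior for weight sets can be sketched as a pure function; the dict-of-floats payload shape is an assumption about the feature-flag format.

```python
def load_weights(raw: dict, default: dict) -> dict:
    """Validate a weight set as numeric and non-negative, normalize it to
    sum to 1.0, and fall back to the default (v0) config on any error."""
    try:
        vals = {k: float(v) for k, v in raw.items()}
        total = sum(vals.values())
        if total <= 0 or any(v < 0 for v in vals.values()):
            raise ValueError("weights must be non-negative with positive sum")
        return {k: v / total for k, v in vals.items()}
    except (TypeError, ValueError):
        # Per the spec: malformed configs silently fall back to default v0.
        return dict(default)
```

Normalizing at load time means authors can enter weights in any convenient scale without breaking the scores-sum-to-1.0 invariant.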
Availability‑Aware Ranking and Hold‑the‑Slot
Given a requested time window and auto_hold is enabled When the top vendors are computed Then only vendors with overlapping availability receive a positive availability subscore; and soft holds are created for up to the top 3 vendors with a 15‑minute expiration and idempotency via client_token; and hold creation failures are logged while keeping the vendor eligible with availability subscore set to 0 and rationale indicating slot hold failed; and bookings must reference a valid, unexpired slot_id; and P95 hold creation latency is ≤ 700 ms
Historical Performance Scoring and Fallbacks
Given historical metrics are available and refreshed daily When computing the performance subscore Then accept rate, completion time, and quality rating are calculated over the last 90 days with a minimum sample size of 10; and if below sample size, scores are shrunk toward the global prior; and vendors with no history receive a neutral performance subscore; and outliers beyond 3 standard deviations are winsorized; and the performance contribution to the total score equals the configured weight
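The shrink-toward-prior rule for small samples can be illustrated with simple linear shrinkage. This is one possible interpretation — weighting the observed rate by n / min_n — shown here for the accept-rate component only; the neutral 0.5 for no-history vendors follows the criteria above.

```python
def performance_subscore(metrics: dict, prior: float, min_n: int = 10) -> float:
    """Accept-rate subscore with shrinkage toward the global prior
    when the 90-day sample is below the minimum size."""
    n = metrics.get("n", 0)
    if n == 0:
        return 0.5  # neutral subscore for vendors with no history
    observed = metrics["accept_rate"]
    if n >= min_n:
        return observed
    # Linear shrinkage: trust the observed rate in proportion to n / min_n.
    w = n / min_n
    return w * observed + (1 - w) * prior
```

Winsorizing outliers beyond three standard deviations would happen upstream, when the daily metrics are computed.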
One-Click Outreach & Slot Hold
"As a listing agent, I want to reach a top vendor and hold their earliest slot with one click so that I don’t lose momentum after a showing."
Description

Enable single-click outreach from within the objection task that sends templated emails/SMS/in-app messages with property context and requested time windows to selected vendors. Integrate with vendor calendars (Google, Outlook, iCal ICS) to place temporary holds on eligible slots, auto-expiring if not confirmed within a set SLA. Capture all communications in the task timeline, support reply-to-threading, and allow agents to confirm or release holds without leaving TourEcho. Expected outcome: compressed time from decision to vendor acceptance, with clear visibility into pending and confirmed appointments.

Acceptance Criteria
Single-Click Outreach Sends Templated, Context-Rich Message
- Given an agent is viewing an objection task with at least one selected vendor and valid contact details, when the agent clicks “Outreach & Hold,” then a templated message is sent within 3 seconds via the vendor’s preferred channel (email/SMS/in‑app). - And the message includes property address, MLS ID, objection type, budget range, requested time windows, agent name/contact, and a unique thread ID. - And personalization tokens render without placeholders; if any required token is missing, the send action is blocked with a clear validation message. - And the send action is recorded in the task timeline with timestamp, channel, and “Sent” delivery status.
Calendar Hold Creation on Eligible Vendor Slot
- Given the vendor has an active calendar integration (Google/Outlook/ICS) and service duration is known, when outreach is sent with requested time windows, then TourEcho creates a tentative hold on the earliest eligible slot within those windows. - And the event title is formatted as “[Hold] <Property Address> — <Service>,” location is the property address, and description contains the thread ID and task link. - And the event is marked tentative (or equivalent freeBusy=busy) and contains a unique UID to prevent duplication. - And the hold appears in the task with start/end time, provider, and a “Pending Hold” status badge.
Auto-Expiration of Unconfirmed Holds by SLA
- Given a hold remains unconfirmed, when the configured SLA elapses (default 2 hours; configurable 15 minutes–24 hours), then the hold is deleted from the vendor calendar and marked “Expired” in the task. - And the vendor receives an auto-expire notification on the original outreach thread. - And the timeline logs the auto-expiration with timestamp and links to the original hold event. - And SLA changes apply only to new holds and are auditable in settings history.
Threaded Timeline with Cross-Channel Reply Handling
- Given messages are sent with a unique thread ID, when the vendor replies via email/SMS/in‑app, then the reply is appended to the same task thread in chronological order. - And per-message delivery/read status is displayed when available for the channel. - And attachments up to 25 MB are captured and accessible from the thread. - And the agent can reply in-thread without leaving the task, with the reply delivered on the active channel of the last vendor message.
Agent Confirmation or Release of Holds In-App
- Given a pending hold is visible in the task, when the agent clicks Confirm, then the calendar event is updated to confirmed/firm, notifications are sent to the vendor, and the task status updates to “Confirmed.” - And any overlapping holds for the same task are automatically released with vendor notifications and timeline entries. - When the agent clicks Release, then the tentative hold is removed from the vendor calendar, the vendor is notified, and the task shows “Released.” - And all confirm/release actions are recorded with user, timestamp, and outcome.
Failure Handling and Fallbacks for Messaging and Calendar
- Given a calendar API call fails or times out within 5 seconds, when a hold cannot be created, then TourEcho retries up to 3 times with exponential backoff and offers an ICS hold attachment via the outreach channel. - And if the preferred message channel fails, a fallback channel is attempted based on vendor preferences; if all fail, the agent is alerted with a retry option and error details. - And all failures and retries are logged in the timeline with correlation IDs for support. - And partial successes (message sent but hold not created) are clearly indicated with next-step prompts.
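The retry-with-backoff behavior might be sketched like this; `create_hold` is an assumed callable wrapping the calendar API, and the injectable `sleep` exists only to make the sketch testable. The caller would use the returned error to trigger the ICS-attachment fallback and the timeline logging described above.

```python
import time

def create_hold_with_retry(create_hold, max_attempts: int = 3,
                           base_delay: float = 0.5, sleep=time.sleep):
    """Retry a calendar hold with exponential backoff (0.5s, 1s, 2s by
    default). Returns (event, None) on success or (None, last_error) so
    the caller can fall back to an ICS attachment."""
    last_error = None
    for attempt in range(max_attempts):
        try:
            return create_hold(), None
        except Exception as exc:  # real code would catch specific API errors
            last_error = exc
            if attempt < max_attempts - 1:
                sleep(base_delay * (2 ** attempt))
    return None, last_error
```

Returning the error rather than raising keeps the "partial success" path (message sent, hold failed) straightforward to represent in the task.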
Multi‑Vendor Outreach and First‑Accept Wins
- Given multiple vendors are selected and meet budget/location constraints, when outreach is initiated, then messages are sent concurrently to up to 3 vendors (configurable) with independent holds per vendor. - And the first vendor to accept/confirm automatically converts their hold to confirmed and triggers automatic release of other active holds with notifications. - And the task displays per‑vendor status badges (Sent, Hold Pending, Confirmed, Released, Expired) and prevents double‑booking of the same time slot for the same property. - And rate limiting ensures no vendor receives more than one outreach per task within 15 minutes.
In-Task Scheduling & Calendar Integration
"As a listing coordinator, I want to book and update vendor appointments inside the task so that everyone’s calendars stay in sync automatically."
Description

Provide an embedded scheduling widget that displays real-time vendor availability, agent and seller calendars, and recommended times, supporting multi-party coordination (agent, seller, tenant) with RSVP links. On confirmation, write events to all parties’ calendars, manage time zones, send reminders, and offer reschedule/cancel flows with policy-aware fees. Maintain a full audit trail of changes, and sync status back to the task and listing timeline. Expected outcome: frictionless booking that keeps everyone aligned without switching tools.

Acceptance Criteria
Time‑Zone‑Aware Multi‑Party Availability & Recommendations
Given the agent, seller/owner (or tenant), and selected vendor have connected calendars (Google, Microsoft 365/Outlook.com, or Apple iCloud) and the listing’s timezone is set When the in‑task scheduling widget is opened for that listing Then overlapping availability across all parties for the next 14 days is computed excluding conflicts and vendor blackout windows and the first 10 recommended slots are displayed within 2 seconds, localized to the agent’s timezone and labeled with each party’s local time And if fewer than 3 overlapping slots exist, the widget displays the nearest alternatives with a reason (e.g., "tenant unavailable") And availability refreshes automatically within 60 seconds when any party updates their calendar
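The core of the overlap computation is interval intersection folded across all parties. A minimal sketch, using plain numeric (hour) intervals for brevity — production code would use timezone-aware datetimes and subtract conflicts and blackout windows first:

```python
def intersect_two(a: list, b: list) -> list:
    """Intersect two sorted lists of (start, end) free intervals."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        start = max(a[i][0], b[j][0])
        end = min(a[i][1], b[j][1])
        if start < end:
            out.append((start, end))
        # Advance whichever interval ends first.
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

def common_availability(parties: list) -> list:
    """Fold pairwise intersection across every party's free intervals."""
    common = parties[0]
    for p in parties[1:]:
        common = intersect_two(common, p)
    return common
```

The recommended slots would then be carved from the resulting windows by the service duration and localized per attendee.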
Hold‑the‑Slot from Task
Given a recommended slot is visible in the scheduling widget When the agent clicks "Hold Slot" Then a 15‑minute temporary hold is placed on the vendor’s availability for that slot and the task shows a visible countdown timer And other users attempting to book that vendor for the same slot see it marked as "Held" with remaining hold time And the hold auto‑releases and the UI updates if not confirmed before expiry And if the vendor’s calendar provider does not support holds, a tentative "[HOLD]" event is created and auto‑removed on release
One‑Click Booking Writes to All Calendars & Sends Reminders
Given a selected slot (held or unheld) and all required parties are designated When the agent clicks "Confirm Booking" Then confirmed events are created for all parties with consistent title, description (including RSVP link and policy summary), location, and a shared UID in the correct timezones And if direct calendar write fails for any party, an ICS invite is emailed and the task is flagged "Attention Required" until acknowledged or retried And default reminders are scheduled for all parties at 24 hours and 1 hour before the event in their local time without duplication And the task status updates to "Booked" and the listing timeline shows the booking entry within 10 seconds
RSVP Links Update Attendance and Timeline
Given invites were sent with unique RSVP links to all attendees When any invitee clicks Accept, Decline, or Maybe Then the attendee status updates in the calendar event and in the task and listing timeline within 60 seconds And if any required attendee declines, the task suggests the next three overlapping slots And the RSVP link requires no login, expires at event start, and prevents multiple conflicting responses by the same invitee
Policy‑Aware Reschedule Flow
Given a booked event and a vendor reschedule policy (e.g., cutoff window and fee) is configured When the agent initiates Reschedule and selects a new recommended slot Then the UI displays any policy‑triggered fee and requires explicit confirmation if a fee applies And upon confirmation, the existing calendar events are updated in place to the new time for all parties, preserving the event UID, and notifications are sent And the task status changes to "Rescheduled" and the listing timeline records the change within 10 seconds And if the policy disallows rescheduling (outside window), the action is blocked and the reason is shown
Policy‑Aware Cancellation Flow
Given a booked event and vendor cancellation policy is configured When the agent clicks "Cancel" and confirms Then the policy logic computes any fee and displays it prior to confirmation And upon confirmation, calendar events are canceled/removed for all parties and notifications are sent And the task status changes to "Cancelled" and the listing timeline records the cancellation and fee outcome within 10 seconds And if cancellation is within a non‑cancelable window, the action is blocked and the reason is shown
Immutable Audit Trail of Scheduling Changes
Given any scheduling action occurs (hold, book, RSVP change, reminder send, reschedule, cancel, failure) When the action completes Then an immutable audit record is appended capturing timestamp (UTC), actor, action type, affected parties, before/after values, policy evaluation details, and correlation ID And the audit log is visible from the task and listing timeline, filterable by action type and exportable (CSV/JSON) And audit entries are created within 5 seconds of the action and cannot be edited or deleted by end users
Consent, Privacy & Audit Trail
"As a broker compliance lead, I want consented, auditable data sharing with vendors so that we reduce legal exposure while enabling fast execution."
Description

Add consent capture and data minimization gates before sharing seller contact info or property access details with vendors. Enforce role-based access, redact sensitive data in outreach templates by default, and comply with TCPA for SMS and regional data protection laws. Encrypt data in transit/at rest, log all access and data shares with immutable audit trails, and provide export-on-request for compliance reviews. Expected outcome: compliant, auditable vendor communications that maintain client trust and reduce regulatory risk.

Acceptance Criteria
Consent Gate Before Vendor Outreach
Given an agent initiates one-click outreach to a vendor that would share seller contact or access details When consent for the intended channel and data category is absent or expired Then the system blocks send, displays required disclosures, and requires explicit consent capture with timestamp, channel, scope, and agent identity before continuing Given valid consent exists for the intended channel and data category When the agent confirms send Then only the minimum necessary seller data is included and the consent reference ID is attached to the message metadata and audit log Given a seller revokes consent When any pending or future outreach is attempted Then the system prevents send and instructs the agent to choose a compliant alternative channel
TCPA-Compliant SMS Outreach
Given an SMS is being composed to a US phone When the message is previewed or sent Then the preview includes business identification and opt-out instructions ('Reply STOP to opt out') and the send engine enforces quiet hours of 8am–9pm recipient local time, DNC scrubbing, and mobile number detection, failing the send with an actionable error if non-compliant Given a recipient replies STOP When subsequent SMS are queued Then no further SMS are sent to that number until renewed consent is captured and the opt-out is reflected within 60 seconds across systems Given TCPA consent is required When the agent attempts to proceed without express written consent for marketing content Then the system blocks send and offers a compliant transactional template or alternate channel
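The permitted-hours gate (8am–9pm recipient local time) plus the opt-out check can be sketched as a pure function; DNC scrubbing, mobile-number detection, and consent lookups are elided. This uses the standard-library `zoneinfo`, which assumes tz data is available on the host.

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

SEND_START, SEND_END = time(8, 0), time(21, 0)  # 8am-9pm local sending window

def sms_send_allowed(utc_now: datetime, recipient_tz: str, opted_out: bool):
    """Return (allowed, reason) for an outbound SMS under the TCPA gate."""
    if opted_out:
        return False, "recipient_opted_out"
    local = utc_now.astimezone(ZoneInfo(recipient_tz)).time()
    if not (SEND_START <= local < SEND_END):
        return False, "outside_permitted_hours"
    return True, "ok"
```

Evaluating in the recipient's timezone, not the sender's, is the detail that most often gets this check wrong.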
Regional Data Protection Enforcement
Given the property or seller is located in the EU/EEA or UK When vendor outreach would transmit personal data Then the system requires selection of lawful basis, records GDPR consent where applicable, ensures processor agreements are on file, and masks personal data in vendor views until lawful basis is recorded Given the user toggles Global Privacy Control or a CCPA 'Do Not Sell/Share' signal is present When outreach content is generated Then no data is shared with vendors for cross-context behavioral purposes and the signal is recorded in the audit log Given a data subject access or deletion request is logged When an admin requests compliance export Then all matching records are discoverable by subject identifier within the export
Role-Based Access & Field-Level Controls
Given a user without the 'PII.View' permission opens a Vendor Match task When viewing seller fields Then phone numbers, emails, and access codes are masked, copy/download actions are disabled, and access is denied with a 403 and audit entry Given a user with 'PII.View' permission When accessing seller data Then access is granted, but viewing access codes requires 'AccessCodes.View' with reason selection; reason is logged Given a vendor account views the task When opening the vendor portal Then only property address and scheduling windows are visible by default; seller identity and contact details remain hidden unless explicitly shared with consent
Default Redaction & Secure Secret Sharing in Outreach Templates
Given an outreach template contains placeholders for sensitive fields (seller phone, lockbox code) When the agent previews the message Then sensitive placeholders are redacted by default and replaced with a secure expiring link (minimum 24 hours, maximum 7 days) if sharing is enabled Given the agent selects 'Include access code' When confirming send Then a justification is required, the code is delivered only via the secure link, never in clear text, and the link requires vendor authentication and logs first access Given the message is logged When viewing message history Then sensitive values are never stored in clear text and only salted hashes or redaction tokens appear in logs
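The never-store-clear-text rule for message logs can be illustrated with a salted-hash redaction token; the token format here is hypothetical.

```python
import hashlib
import os

def redact_for_log(value: str, salt: bytes = None) -> str:
    """Produce a salted-hash token for logs so the sensitive value
    (e.g., a lockbox code) is never written in clear text."""
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + value.encode()).hexdigest()
    return f"redacted:{salt.hex()}:{digest[:16]}"
```

With the salt stored alongside the token, support staff can later confirm whether a suspected value matches a log entry without the log ever revealing the value itself.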
Encryption in Transit and At Rest
Given any data exchange between client, server, and vendor endpoints When transport is established Then TLS 1.2+ with modern ciphers is enforced; non-TLS requests are redirected or rejected and logged Given data at rest in databases and object storage When the system writes or backs up records Then AES-256 class encryption is enabled, keys are managed by KMS with rotation at least every 90 days, and key access is restricted and auditable Given application logs and message queues When sensitive fields would be written Then secrets are redacted or tokenized before write
Immutable Audit Trail & Compliance Export-on-Request
Given any access to seller PII or any data share to a vendor When the event occurs Then an append-only audit record is written including actor ID, role, target record IDs, action, fields scope, timestamp (UTC), IP/device, region, consent reference, and a tamper-evident hash chain Given an admin initiates a compliance export for a listing or seller When the request is submitted Then a machine-readable (JSON) and human-readable (PDF/CSV) export is generated within 24 hours containing consents, outreach messages (redacted), vendor shares, and audit logs, along with an integrity checksum Given an attempt to alter or delete an audit record When the system validates the log Then the attempt is blocked, an alert is raised to compliance admins, and the event is logged
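The tamper-evident hash chain is a standard construction: each entry's hash covers its own payload plus the previous entry's hash, so verification recomputes every link and any later edit breaks the chain from that point on. A minimal sketch, assuming JSON-serializable audit records:

```python
import hashlib
import json

GENESIS = "0" * 64

def append_audit(chain: list, record: dict) -> dict:
    """Append an entry whose hash binds the record to the prior entry."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(record, sort_keys=True)
    entry = {
        "record": record,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
    }
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every link; False means the log was altered after the fact."""
    prev_hash = GENESIS
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

A failed `verify_chain` is what would raise the alert to compliance admins described above; anchoring the latest hash externally (e.g., in a separate store) hardens it against whole-chain rewrites.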
Outcome Analytics & ROI Attribution
"As a product manager, I want to see how Vendor Match affects booking speed and days on market so that we can optimize matching and vendor supply."
Description

Instrument the Vendor Match funnel end-to-end to measure time from objection to outreach, hold placement, confirmation, job completion, and impact on listing KPIs (follow-up time, price reductions avoided, days on market). Provide dashboards, cohort comparisons, and CSV export, plus attribution tags linking specific fixes to objection resolution in subsequent showing feedback. Expected outcome: quantifiable proof that Vendor Match accelerates execution and improves market outcomes, guiding optimization and vendor curation.

Acceptance Criteria
End-to-End Funnel Instrumentation: Objection to Job Completion
- Given an objection is logged on a listing via TourEcho feedback, When the agent uses Vendor Match to initiate outreach, Then the system persists UTC timestamps for objection_logged_at and outreach_sent_at under a unique funnel_id. - Given the agent holds a vendor slot from Vendor Match, When the hold is created, Then slot_held_at is recorded and linked to the same funnel_id. - Given a vendor confirms the job, When confirmation is received (API/webhook/manual), Then vendor_confirmed_at is recorded with source and confidence. - Given the vendor marks work complete or the agent verifies completion, When the job is completed, Then job_completed_at is recorded. - Rule: Latency metrics t_objection_to_outreach, t_outreach_to_hold, t_hold_to_confirm, t_confirm_to_complete, t_objection_to_complete are computed in minutes, non-negative, and stored with minute precision. - Rule: Events are idempotent; duplicate webhooks do not create multiple records; late-arriving events backfill without breaking metric continuity.
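The latency rules above (minute precision, non-negative, consecutive stages) can be sketched as a pure computation over the stored timestamps; the stage field names follow the criteria, and missing stages yield `None` rather than negative or bogus durations.

```python
from datetime import datetime

STAGES = ["objection_logged_at", "outreach_sent_at", "slot_held_at",
          "vendor_confirmed_at", "job_completed_at"]

METRIC_NAMES = ["t_objection_to_outreach", "t_outreach_to_hold",
                "t_hold_to_confirm", "t_confirm_to_complete"]

def funnel_latencies(events: dict) -> dict:
    """Minute-precision latencies between consecutive funnel stages,
    plus the end-to-end figure."""
    def minutes(a, b):
        if a is None or b is None:
            return None
        # Clamp at zero so clock skew never yields a negative duration.
        return max(0, int((b - a).total_seconds() // 60))
    ts = [events.get(s) for s in STAGES]
    out = {name: minutes(a, b) for name, a, b in zip(METRIC_NAMES, ts, ts[1:])}
    out["t_objection_to_complete"] = minutes(ts[0], ts[-1])
    return out
```

Because the function is pure over the stored timestamps, late-arriving events can simply trigger a recompute without breaking metric continuity.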
KPI Impact & ROI Calculation
- Rule: For each completed job, let D be the job_completed_at date and compute pre_window = [D-14, D) and post_window = [D, D+14) days. - Rule: Follow-up time delta = avg_followup_minutes(post_window) - avg_followup_minutes(pre_window) per listing and cohort; expose p50/p75. - Rule: Days-on-market impact = baseline_DOM(cohort-matched) - actual_DOM if listing sells within 60 days post-fix; else null. - Rule: Price reductions avoided = max(0, count(price_reduction in pre_window) - count(price_reduction in post_window)). - Rule: ROI summary displays median and p75 deltas by market, vendor category, and budget bucket, and reports the total number of jobs analyzed (n). - Rule: All KPI computations are recomputed on late data and are reproducible via a versioned metric definition.
Objection Resolution Attribution via Feedback Tags
- Given a Vendor Match job is tagged with one or more objection categories and listing areas, When QR-coded feedbacks are submitted post job completion, Then the system links those feedbacks to the job if submitted within 30 days of job_completed_at. - Rule: A tagged objection is considered resolved when ≥3 post-fix feedbacks exist and the objection occurrence rate for that tag decreases by ≥50% versus the pre_window rate; status ∈ {resolved, improved, unchanged}. - Rule: Credit assignment uses last-touch within 7 days of the feedback timestamp; otherwise the feedback is marked unattributed. - Rule: Each attribution stores evidence (feedback_ids, before_rate, after_rate, computation_timestamp) and is visible in job detail and exports.
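The resolution classification could be implemented as below. Note the criteria specify only the "resolved" threshold (≥3 post-fix feedbacks, ≥50% rate drop); treating any smaller rate decrease as "improved" is an assumed interpretation.

```python
def objection_status(before_rate: float, after_rate: float,
                     post_fix_feedbacks: int) -> str:
    """Classify a tagged objection after a fix, per the spec's thresholds.
    'resolved' requires the minimum evidence and a >=50% drop in the
    objection occurrence rate versus the pre-fix window."""
    if (post_fix_feedbacks >= 3 and before_rate > 0
            and after_rate <= before_rate * 0.5):
        return "resolved"
    if after_rate < before_rate:
        # Assumption: any smaller decrease counts as 'improved'.
        return "improved"
    return "unchanged"
```

The attribution record would store `before_rate`, `after_rate`, and the contributing feedback IDs as the evidence the spec requires.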
Analytics Dashboards with Cohort Comparisons and Drilldowns
- Rule: Provide filters: date range, market, listing, agent/team, vendor category, budget bucket, objection tag; all filters combinable. - Rule: Surface metric cards for median and p75 latencies and counts (#funnels, #holds, #confirmations, #completions). - Rule: Cohort compare toggle shows With Vendor Match vs Without Vendor Match matched by market, price band, and bedroom count, with uplift percentages and 95% CI; if sample size <30 per cohort, show "insufficient data". - Rule: Drilldown from any aggregate to a listing view showing the funnel timeline, KPIs, and attribution evidence. - Rule: Dashboard initial load ≤3s for queries returning ≤50k funnels; otherwise show loading state and run query asynchronously.
CSV Export of Metrics and Events
- Rule: Users can export Aggregated Metrics CSV and Events CSV from any filtered dashboard view; exports reflect active filters. - Rule: Events CSV schema includes: funnel_id, listing_id, objection_tag, event_name, event_timestamp_utc, event_source, vendor_id_hashed, agent_id_hashed, market, budget_bucket. - Rule: Aggregated CSV schema includes: date_granularity, filter_values, metric_name, metric_value, n, p50, p75, p90 where applicable. - Rule: Row counts and metric aggregates in export match on-screen values within rounding rules; timezone in exports is UTC. - Rule: Exports support pagination for >1M rows and optional gzip compression; delivered via download and emailed link expiring in 7 days.
Data Freshness, Backfill, and Accuracy Guarantees
- Rule: 95th percentile event-to-analytics latency ≤5 minutes; 99th percentile ≤15 minutes, monitored hourly with alerts on breach. - Rule: Late or out-of-order events are backfilled with automatic recomputation of affected metrics within 10 minutes of arrival. - Rule: Daily data quality checks: ≤1% funnels missing required timestamps; 0% negative durations; ≤0.5% timezone inconsistencies. - Rule: All metric computations are versioned; changes create a new metric_version and trigger an in-app banner prompting users to re-run comparisons.

Objection Playbooks

Turns common objection patterns into reusable task bundles with assignees, checklists, and messaging templates. Standardizes responses across teams and lets new users move fast with less training.

Requirements

Playbook Template Builder
"As a broker operations lead, I want to create standardized playbooks for common objections so that our agents can respond consistently and quickly."
Description

Provide an authoring and management interface to create reusable Objection Playbooks that encapsulate a named objection pattern, trigger criteria (keywords/tags/categories), task checklists with relative due dates, default assignees/roles, and messaging templates. Support cloning, versioning, tagging, and preview with sample listing/showing data. Persist a structured data model (Playbook: id, name, description, triggers[], tasks[], messages[], roles[], version, status, owner, visibility). Validate merge fields and task rules at save time. Integrate with TourEcho’s listing, showing, QR feedback, and AI summary objects so playbooks can be applied from listing inbox, feedback views, and automation rules. Optimize for low-friction authoring and reuse to standardize responses and reduce onboarding time.

Acceptance Criteria
Create & Save New Playbook with Validation
- Given I enter a unique name and define at least one of tasks[] or messages[], When I click Save, Then the playbook is persisted with fields {id, name, description, triggers[], tasks[], messages[], roles[], version, status, owner, visibility}. - Given message templates contain merge fields, When I click Save, Then save succeeds only if all merge fields resolve against Listing, Showing, Feedback, and AISummary schemas; otherwise save is blocked and a list of invalid fields and their locations is shown. - Given tasks[] include relative due dates and optional dependencies, When I click Save, Then save succeeds only if due date rules match supported syntax (D+N, H+N, W+N) and dependencies reference existing task ids; otherwise save is blocked with inline errors. - When save succeeds, Then version is set to 1, status is set to "draft", owner is the author, visibility is required and persisted. - Then the saved playbook is retrievable via API and UI list within 2 seconds of save.
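The supported due-date syntax (D+N, H+N, W+N) suggests a small parser. The reading that D, H, and W mean days, hours, and weeks after the apply timestamp is an interpretation, not stated explicitly in the criteria above.

```python
import re
from datetime import datetime, timedelta

# Assumed semantics: D/H/W = days/hours/weeks after the apply timestamp.
_RULE = re.compile(r"^([DHW])\+(\d+)$")

def due_date(rule: str, applied_at: datetime) -> datetime:
    """Resolve a relative due-date rule against the apply time; raises
    ValueError for anything outside the supported syntax, which is what
    lets save-time validation block bad rules with inline errors."""
    m = _RULE.match(rule.strip())
    if not m:
        raise ValueError(f"unsupported due date rule: {rule!r}")
    unit, n = m.group(1), int(m.group(2))
    delta = {"D": timedelta(days=n),
             "H": timedelta(hours=n),
             "W": timedelta(weeks=n)}[unit]
    return applied_at + delta
```

The same function serves both preview (calculated due dates from the preview apply time) and real application of a playbook.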
Clone Existing Playbook
- Given a draft or active playbook exists, When I select Clone, Then a new playbook is created with a new id, name prefixed with "Copy of ", version = 1, status = "draft", and the same description, triggers[], tasks[], messages[], roles[], tags, owner, and visibility as the source. - When the clone is saved or edited, Then no changes affect the source playbook. - Then all merge fields and task rules in the clone are re-validated on save and must pass the same validations as originals.
Versioning: Edit & Publish New Version
- Given an active playbook v1 exists, When I click Edit, Then a new draft version v2 is created with content copied from v1 and v1 remains active and immutable. - When I publish the draft, Then v2 becomes the active version and v1 is retained for historical reference; existing applications of v1 continue to reference v1. - When a draft has validation errors (merge fields or task rules), Then Publish is disabled until all errors are resolved. - When I attempt to delete an active version that has been applied, Then deletion is blocked and Archiving is offered instead.
Preview with Sample Listing/Showing/Feedback Data
- Given I select sample Listing, Showing, QR Feedback, and AI Summary data, When I click Preview, Then all messaging templates render merge fields with sample values and the render completes within 1 second for templates under 2,000 characters. - When a merge field has no matching data, Then the preview displays a bracketed placeholder and highlights the field; a non-blocking warning lists missing fields. - When I enter test text and tags to simulate triggers, Then the preview indicates which triggers[] would match and why (matched keywords/tags/categories). - Then the preview displays the number of tasks that would be created and shows calculated due dates based on relative rules from the preview apply time.
Apply Playbook from Inbox, Feedback View, or Automation Rule
- Given a listing inbox item, a feedback detail view, or an automation rule fires, When I apply a selected playbook, Then tasks are created with due dates calculated relative to the apply timestamp, default assignees are resolved from roles[], and messages[] are staged to the configured channels.
- Then the application is logged with context (listingId/showingId/feedbackId/ruleId) and appliedPlaybookVersion, and the user receives a success confirmation within 2 seconds.
- Then the created task bundle and message drafts are accessible from the originating context via a deep link.
Role Mapping & Assignee Resolution
- Given roles[] are defined in the playbook, When I publish the playbook, Then each role must be mapped to at least one team role or a specific fallback user; otherwise Publish is disabled with a mapping error list.
- When a playbook is applied and no user matches a role, Then the corresponding tasks are assigned to the playbook owner and flagged for reassignment.
- When multiple users match a role, Then the UI prompts the applier to select an assignee before confirming application.
Search, Tagging, and Visibility Controls
- Given playbooks have names and tags, When I search by name, tag, or trigger keyword, Then the builder list returns matching playbooks ranked with exact matches first.
- When visibility = "Team", Then members of the owner’s team can view and apply the playbook; non-members cannot see it in lists or search.
- When visibility = "Private", Then only the owner can view and apply; attempts by others return a permission error.
- When visibility is changed, Then the change is applied immediately and reflected in list/search results within 10 seconds.
AI Objection Detection & Auto-Suggest
"As a listing agent, I want TourEcho to detect objections and suggest the right playbook so that I don’t miss critical follow-ups."
Description

Map AI-summarized sentiment and room-level objections into a normalized taxonomy and match them to eligible playbooks in real time when new feedback arrives. Compute confidence scores per match, display suggestions inline in the listing inbox with explanation highlights, and allow one-click apply. Provide team-level controls for confidence thresholds, auto-apply toggles, and notification preferences. Ensure latency under 2 seconds from feedback ingestion to suggestion, idempotent application, and audit logging of detection events and overrides. Fallback supports manual search/browse of playbooks when no confident match exists.
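The confidence-threshold gate described above can be sketched in a few lines. Field names (`playbook_id`, `confidence`) follow the acceptance criteria; everything else, including the function name, is an illustrative assumption:

```python
def suggestable_matches(candidates: list[dict], threshold: float) -> list[dict]:
    """Round confidence to two decimals and keep only candidates at or above
    the team threshold, highest-confidence first. Sub-threshold candidates
    are dropped here and would remain reachable via manual browse."""
    kept = []
    for match in candidates:
        confidence = round(match["confidence"], 2)
        if confidence >= threshold:
            kept.append({**match, "confidence": confidence})
    return sorted(kept, key=lambda m: m["confidence"], reverse=True)
```

A real implementation would also filter candidates by team scope and listing attributes before scoring, as the criteria below require.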

Acceptance Criteria
Real-Time Objection Matching with Confidence Scores
- Given new AI-summarized feedback with sentiment and room-level objections for a listing, When the detection service processes the event, Then each objection is mapped to a normalized taxonomy term (returning term_id, term_label, taxonomy_version).
- And only playbooks eligible by team scope and listing attributes (market, brand, property type, permissions) are considered as candidates.
- And each candidate match includes playbook_id, matched_term_id, and confidence in [0.0, 1.0] rounded to two decimals.
- And only matches with confidence >= the team-defined threshold are marked as suggestable.
- And matches below threshold are hidden from the suggestion list unless the user opens manual browse.
Inline Suggestion Rendering with Explanation Highlights
- Given there are one or more suggestable matches for a listing, When the user views the listing inbox thread for the feedback item, Then suggestions render inline within the message item, containing the playbook title, confidence as a percentage, and an Apply action.
- And each suggestion displays at least 3 explanation highlights that reference exact feedback phrase spans by character offset and length.
- And hovering or tapping a highlight reveals the matched taxonomy term label and reason code (e.g., phrase match, semantic similarity).
- And no layout shift greater than 0.1 CLS occurs on render, and time-to-first-suggestion is under 2,000 ms from feedback ingestion.
One-Click Apply with Idempotent Execution
- Given a suggestable playbook is visible in the inbox, When the user clicks Apply, Then the playbook bundle is instantiated once, with its assignees, checklists, and messaging templates attached to the listing, within 1,000 ms.
- And a success toast appears within 500 ms confirming the playbook name and number of tasks created.
- And repeating Apply on the same suggestion, or receiving a duplicate detection event, does not create duplicate tasks or messages (idempotent via detection_event_id and playbook_id deduplication).
- And the suggestion transitions to an Applied state and is removed from actionable suggestions.
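The idempotency rule above, keyed on (detection_event_id, playbook_id), can be sketched as follows; an in-memory set stands in for what would realistically be a unique constraint in the database:

```python
class PlaybookApplier:
    """Minimal sketch of idempotent playbook application (names are illustrative)."""

    def __init__(self) -> None:
        self._applied: set[tuple[str, str]] = set()
        self.tasks_created = 0

    def apply(self, detection_event_id: str, playbook_id: str, task_count: int) -> bool:
        """Instantiate the bundle once; repeated clicks or duplicate events are no-ops."""
        key = (detection_event_id, playbook_id)
        if key in self._applied:
            return False  # duplicate: create nothing, report not-applied
        self._applied.add(key)
        self.tasks_created += task_count
        return True
```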
Team-Level Controls for Thresholds, Auto-Apply, and Notifications
- Given a team admin opens Objection Playbooks settings, When the admin sets the confidence threshold between 0.0 and 1.0 in increments of 0.05 and saves, Then the new threshold persists and is enforced for new feedback events within 60 seconds.
- When the admin toggles Auto-Apply on, Then any future match with confidence >= threshold is applied automatically without user action, and a notification is sent according to team preference (email, in-app, or none) within 60 seconds of application.
- When the admin changes the notification preference, Then subsequent events follow the updated preference; prior notifications are unaffected.
End-to-End Latency From Ingestion to Suggestion
- Given a new QR-door-hanger feedback submission is received, When the system ingests and processes the feedback, Then the time from ingestion timestamp to first inline suggestion render is <= 2,000 ms for at least 95% of events, measured over the most recent rolling 100 events per team.
- And the median latency over the same window is <= 1,200 ms.
- And if model calls exceed the timeout budget, a degraded path returns the UI frame with a manual browse prompt within 2,000 ms.
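The rolling-window SLO above could be checked with a sketch like this. The nearest-rank p95 convention and the class name are assumptions; other percentile definitions differ slightly at window boundaries:

```python
import math
from collections import deque
from statistics import median

class LatencyWindow:
    """Keep the most recent N latencies (per team) and test the SLO targets."""

    def __init__(self, size: int = 100) -> None:
        self._samples: deque[float] = deque(maxlen=size)

    def record(self, latency_ms: float) -> None:
        self._samples.append(latency_ms)

    def p95(self) -> float:
        # Nearest-rank percentile over the current window.
        ordered = sorted(self._samples)
        return ordered[math.ceil(0.95 * len(ordered)) - 1]

    def within_slo(self) -> bool:
        # p95 <= 2,000 ms and median <= 1,200 ms, per the criteria above.
        return bool(self._samples) and self.p95() <= 2000 and median(self._samples) <= 1200
```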
Audit Logging of Detection Events and Overrides
- Given detection runs for a feedback event, When matches are computed and suggestions are shown or applied, Then an immutable audit record is written containing request_id, detection_event_id, listing_id, taxonomy_version, input_hash, normalized terms, candidate playbooks with confidence, the threshold at time of decision, the auto-apply flag, actor_id (system or user), and timestamp.
- When a user applies a different playbook, dismisses a suggestion, or changes team settings (threshold, auto-apply, notifications), Then an audit entry records before/after values, actor_id, timestamp, and reason (apply, dismiss, settings_change).
- And admins can export audit logs for a selectable date range to CSV, and results include at least 1,000 rows per export.
Fallback Manual Search/Browse When No Confident Match
- Given no candidate matches meet the team threshold, or detection fails gracefully, When the user opens the suggestion panel, Then a manual search/browse control is available within 300 ms, offering filters by taxonomy term, room, and playbook tags, plus text search-as-you-type.
- And selecting a playbook from manual browse enables Apply using the same idempotent path and writes an audit entry with reason=manual_apply.
- And if sub-threshold candidates exist, a Low confidence section lists up to 5 with confidence badges and an Adjust threshold control (changes do not apply retroactively).
Task Bundle Orchestration
"As a team lead, I want task bundles to auto-assign to the right people with deadlines so that nothing falls through the cracks."
Description

When a playbook is applied, instantiate a scoped task bundle with checklists, relative due dates, and owners assigned via role-based rules (e.g., Listing Agent, Assistant, Vendor). Support task dependencies, SLA timers, reminders, and reassignment. Sync task states with existing notifications and calendars; expose progress in the listing’s activity timeline. Provide batch update and pause/resume controls, and ensure mobile-friendly execution. Persist bundle state for reporting and allow safe re-application without duplicating completed tasks.

Acceptance Criteria
Apply Playbook: Scoped Bundle Instantiation & Role-Based Assignment
- Given a listing with roles mapped (Listing Agent, Assistant, Vendor) and a selected playbook, When the user applies the playbook, Then a new bundle with a unique ID is created, scoped to the listing, with all tasks and checklists instantiated.
- And task owners are resolved via role-based rules with defined fallbacks; any unresolved role blocks application with a clear error.
- And relative due dates are computed from the configured anchor (apply time or event), honoring business-calendar constraints where specified.
- And each task includes its messaging templates and metadata as defined in the playbook.
- And when a task is reassigned to another user or role, Then the owner updates immediately, an audit entry is recorded (who, when, from→to), and the new owner is notified.
Enforce Task Dependencies & Relative Scheduling Adjustments
- Given a task with one or more predecessors, When all predecessors are completed, Then the task becomes unblocked and actionable; otherwise it cannot be started or completed, and the UI displays the blocking reasons.
- And when a predecessor's due date shifts, Then dependent tasks' due dates recompute according to their relative offsets and constraints.
- And when a dependency is removed, Then the affected task's status and schedule recompute within 1 second.
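The unblocking rule above reduces to set arithmetic: a task is actionable when its predecessor set minus the completed set is empty, and the remainder is exactly the "blocking reasons" the UI would display. A minimal sketch (function names are illustrative):

```python
def blocking_reasons(task_id: str, deps: dict[str, set[str]],
                     completed: set[str]) -> list[str]:
    """Predecessors of task_id that are not yet complete, sorted for display.
    deps maps each task to the set of its direct predecessor ids."""
    return sorted(deps.get(task_id, set()) - completed)

def is_actionable(task_id: str, deps: dict[str, set[str]],
                  completed: set[str]) -> bool:
    """A task can start only when nothing is blocking it."""
    return not blocking_reasons(task_id, deps, completed)
```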
SLA Timers, Reminders, and Escalations
- Given a task with an SLA, When the task is created or assigned per configuration, Then its SLA timer starts and is visible in the task detail with remaining time.
- And when reminder thresholds are reached (e.g., 24h, 1h before due), Then reminders are sent via configured channels (in-app, email, push) once per threshold with no duplicates.
- And when the SLA breaches, Then the task is flagged as "SLA Breached", the breach timestamp is stored, and an escalation notification is sent to the configured role.
- And when the bundle is paused, Then SLA timers and reminders for all tasks in the bundle are suspended; when resumed, timers continue with remaining durations and reminders reschedule accordingly.
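The pause/resume semantics above amount to: pausing freezes the remaining duration, and resuming computes a fresh deadline from what was left. A sketch (class and method names are assumptions; a real timer would also persist state and reschedule reminders):

```python
from datetime import datetime, timedelta

class SlaTimer:
    def __init__(self, start: datetime, duration: timedelta) -> None:
        self.deadline = start + duration
        self._remaining: timedelta | None = None  # non-None while paused

    def pause(self, now: datetime) -> None:
        # Freeze whatever time is left on the clock (never negative).
        self._remaining = max(self.deadline - now, timedelta(0))

    def resume(self, now: datetime) -> None:
        # Continue with the remaining duration from the resume instant.
        self.deadline = now + self._remaining
        self._remaining = None

    def breached(self, now: datetime) -> bool:
        # A paused timer cannot breach.
        return self._remaining is None and now > self.deadline
```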
Sync to Notifications, Calendars, and Activity Timeline
- Given a task with a due date and owner, When the task is created, Then a calendar entry is created on the owner's calendar with the correct title, due time, and link to the task.
- And when the task's due date, owner, or status changes, Then the calendar entry updates within 60 seconds; when the task completes, the entry is marked done or removed per setting.
- And assignment, due-soon, overdue, and reassignment events generate a single deduplicated notification per channel with actionable links.
- And the listing's activity timeline displays bundle creation, task assignment, completion, pause/resume, and SLA breach events with timestamp, actor, and task/bundle references; progress is shown as percent completed and count of open vs. total tasks.
Mobile Task Execution
- Given a device viewport between 360px and 768px wide, When the user opens a bundle, Then all task lists and details are fully usable without horizontal scrolling and with font sizes ≥ 14px.
- And interactive controls (checklist toggles, complete, reassign, pause) have tap targets ≥ 44px and pass WCAG 2.1 AA contrast.
- And task detail screens load in ≤ 2 seconds on a 4G connection (≤ 400 ms server TTFB, ≤ 1.6 s resource load), and actions provide feedback within 200 ms.
- And push/in-app notifications deep-link to the correct task view in the mobile UI.
Batch Update and Pause/Resume Orchestration
- Given a selected set of tasks within a bundle, When the user applies a batch action (complete, postpone by X, reassign to Y, add tag), Then the change applies to exactly the selected tasks, records a single grouped audit entry, and emits the correct notifications.
- And when the bundle is paused, Then new task creation is halted, users cannot start blocked tasks, reminders/SLA timers are suspended, and a "Paused" badge appears on the bundle and tasks; when resumed, all are re-enabled with recomputed schedules.
- And batch operations respect dependencies (e.g., cannot complete tasks with unmet predecessors) and surface a per-task success/failure summary.
Safe Re-application and Persistence for Reporting
- Given a listing with an existing bundle from Playbook A, When Playbook A is reapplied, Then only missing or incomplete tasks are created; completed tasks are not duplicated, and existing active tasks are not reset unless explicitly confirmed by the user.
- And the bundle state (IDs, owners, statuses, due dates, SLA metrics, pause/resume history) is persisted and queryable via reporting endpoints/exports.
- And reporting aggregates include counts of tasks by status, average time-to-complete, SLA breach rate, and time-in-state per listing and per playbook, matching the live bundle within 1% variance.
Messaging Templates with Merge Fields
"As an agent, I want pre-approved messaging templates with auto-filled details so that I can communicate faster without errors."
Description

Offer a library of pre-approved messaging templates across SMS, email, and in-app channels with support for merge fields (listing attributes, showing context, agent and buyer-agent details, dates) and conditional blocks based on objection category or severity. Validate variables at compose time, provide live previews with sample data, and log sends to the listing timeline. Integrate with existing communications providers (e.g., Twilio, SMTP), enforce opt-out and rate limits, and allow localized variants per market/office.
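The conditional blocks described above can be sketched as rules matched against a compose context. The rule shape (equality checks on dotted paths such as `objection.category`) is an assumption for illustration, not the documented matching grammar:

```python
def lookup(context: dict, dotted_path: str):
    """Resolve a dotted path like 'objection.severity' against nested dicts."""
    for part in dotted_path.split("."):
        if not isinstance(context, dict) or part not in context:
            return None
        context = context[part]
    return context

def render_blocks(blocks: list[tuple[dict, str]], context: dict) -> str:
    """Include a block's text only when every rule key matches the context;
    non-matching blocks are omitted without placeholders or artifacts."""
    return "".join(
        text for rule, text in blocks
        if all(lookup(context, key) == value for key, value in rule.items())
    )
```

Blocks are evaluated in their defined order, so matching blocks are emitted once each, in sequence, as the acceptance criteria below require.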

Acceptance Criteria
Template Library Across Channels & Access Control
- Given an admin user, when they create a messaging template, then they can select a channel (SMS, Email, or In-App) and set a required approval status.
- Given a non-admin user, when browsing templates, then only approved templates for their permitted channels and office are visible and selectable.
- Given a user filters by channel, when searching the library, then results only include templates matching the selected channel.
Merge Fields Resolution and Validation
- Given a selected template containing supported merge fields for listing, showing, agent, buyer-agent, and dates, when a valid compose context is provided, then all merge fields resolve to non-empty values from that context.
- Given a template containing an unknown or misspelled merge field, when validating at compose time, then the system flags the token, provides the expected syntax, and blocks Send.
- Given a required merge field resolves to empty due to missing context, when validating, then the field is highlighted with the missing data source and Send remains disabled.
Conditional Content by Objection Category/Severity
- Given a template with conditional blocks keyed on objection.category and objection.severity, when the compose context matches a rule, then the block is included in the rendered message; otherwise it is omitted without placeholders or artifacts.
- Given a conditional expression that references an unsupported field or operator, when validating the template, then an error is shown with the failing rule and Send is blocked.
- Given multiple matching conditional blocks, when rendering, then blocks are included in the defined order without duplication.
Live Preview with Sample Data and Channel-Specific Feedback
- Given a user opens Preview, when a template is selected, then a live preview renders with sample data replacing all merge fields and conditional blocks resolved for each channel.
- Given the user switches the sample listing, showing, or persona, when previewing, then the preview updates within 500 ms and recalculates SMS segment count, email subject/body length, and in-app character counts.
- Given an unresolved variable or rule, when previewing, then the preview pinpoints the location and displays the validation error.
Provider Integration, Opt-Out, and Rate Limits
- Given the SMS channel and Twilio configured, when sending, then the system calls Twilio with the configured From number per office/market and records the provider message SID; on non-2xx responses, the send is marked Failed with the provider error code.
- Given a recipient has opted out of a channel, when attempting to send on that channel, then the send is blocked locally with a clear reason and no provider API call is made.
- Given configurable rate limits per agent and per provider, when the current rate exceeds the limit, then the message is queued or rejected per policy and the event is recorded in the audit log.
Timeline Logging and Auditability
- Given a message is successfully sent, when viewing the listing timeline, then an entry appears with timestamp (UTC), channel, recipients, sender, template ID and version, objection playbook reference, and the first 100 characters of the message, with PII redacted per policy.
- Given a provider delivery failure or retry, when events occur, then the timeline updates status from Pending to Sent or Failed and stores provider correlation IDs and error codes.
- Given a user with appropriate permissions, when exporting timeline logs, then the export includes all send events and statuses for the listing for the selected date range.
Localized Variants per Market/Office
- Given a template defines localized variants by market and locale, when a user in market M with locale L selects the template, then the M/L variant is auto-selected; if missing, the default variant is used.
- Given date/time and currency merge fields, when rendering localized variants, then formats follow the locale and the listing market time zone.
- Given a required localized variant is missing and no default exists, when attempting to send, then the send is blocked with an instruction to add a variant or choose a different template.
Team Sharing, Roles, and Approvals
"As a compliance manager, I want role-based controls and approvals for playbooks so that only vetted processes are used."
Description

Implement role-based permissions and library scoping so playbooks can be private, team-level, or brokerage-wide. Provide a Draft → Review → Published workflow with required approver roles, change tracking, and audit logs. Allow field locking within published playbooks, import of starter sets for new teams, and policy toggles that encourage or require use of standardized playbooks. Ensure visibility and execution rights align with office/team membership and SSO groups.

Acceptance Criteria
Library Scoping: Private, Team, Brokerage Visibility
- Given a user with Editor role creates a playbook and selects Private scope, When another user outside the owner searches or browses the library, Then the playbook is not visible or executable to non-owners and is only visible to the creator and organization Admins.
- Given a playbook scoped to Team A, When a member of Team A opens the library or search, Then the playbook is visible and executable to that member; and when a non-member attempts direct URL access or API retrieval, Then the response is 403 Forbidden and the playbook is not discoverable in search.
- Given a playbook scoped to Brokerage, When any authenticated user within the brokerage searches or browses, Then the playbook is visible and executable; and when an external organization user attempts access, Then the playbook is not discoverable and access returns 404 in the UI and 403 in the API.
- Given a scoped playbook, When viewing its details, Then the scope label and owning unit are displayed consistently in UI and API metadata.
Role-Based Permissions and SSO Group Mapping
- Given SSO is enabled with IdP groups mapped to TourEcho roles (Executor, Editor, Approver, Admin), When a user's IdP group membership changes, Then on next login, or within 5 minutes, the user's effective role updates and permissions take effect without manual intervention.
- Given a user with Editor role, When they attempt to publish a playbook, Then the action is blocked and the message "Approver required" is shown.
- Given a user with Executor role who is not a member of the owning team, When they attempt to execute a team-scoped playbook, Then access is denied and the event is audit logged.
- Given an Admin assigns a role override at team or office level, When the assignment is saved, Then the user's effective permissions reflect the override and the change is audit logged.
- Given an API client with role claims, When calling the list-playbooks endpoint, Then results are filtered to the caller's visibility, and attempts to access out-of-scope items return 403.
Draft → Review → Published Workflow with Required Approver
- Given a new playbook in Draft, When the Editor submits for Review, Then status changes to Review and notifications are sent to Approvers in the owning scope.
- Given a playbook in Review, When an Approver approves, Then status becomes Published, a version number is assigned, and the publish event is audit logged.
- Given a playbook in Review, When an Approver requests changes or rejects, Then status returns to Draft, and the approver's comment is captured and sent to the submitter.
- Given a Published playbook, When an Editor edits it, Then a new Draft version is created and the Published version remains unchanged until the Draft is approved.
- Given policy requires an Approver in the owning scope, When no Approver is assigned, Then submission to Review is blocked with an actionable error instructing assignment of an Approver.
Change Tracking and Audit Log
- Given any create, edit, scope change, role assignment, state transition, policy toggle, or execution event, When it occurs, Then an immutable audit record is written capturing actor, timestamp (UTC), action, target object ID, and previous and new values for changed fields, including request origin (UI or API).
- Given a playbook with multiple versions, When viewing change history, Then a chronological list of version diffs is shown, and comparing any two versions highlights field-level changes.
- Given an Admin filters audit logs by date range and object type, When exporting, Then a CSV containing all matching records is generated, with pagination for more than 10,000 rows.
- Given a non-admin user, When attempting to access audit logs outside their visibility scope, Then access is denied and the attempt is audit logged.
Field Locking in Published Playbooks
- Given a Published playbook, When an Approver or Admin locks specified fields, Then those fields are visually indicated as locked and are read-only in the Published version and in any execution context.
- Given an Editor edits a Published playbook with locked fields, When a new Draft version is created, Then locked fields remain read-only in the Draft and may only be changed by an Approver or Admin in that Draft prior to publishing.
- Given an execution runner attempts to override a locked field at run time, When submitting the override, Then the UI override control is disabled and the API returns 403 with error code PLAYBOOK_FIELD_LOCKED.
- Given a locked field is changed in a new version and the version is published, When executing the playbook, Then the updated locked value is used and historical executions retain their original values.
Import Starter Sets for New Teams
- Given a newly created Team library, When an Admin selects Import Starter Set and chooses a bundle, Then all playbooks in the bundle are cloned into the Team library in Draft state with ownership set to the Team, and an import summary is displayed.
- Given an imported playbook matches an existing playbook identifier in the Team library, When import runs, Then the duplicate is skipped and the summary lists skipped items with reasons.
- Given the starter set playbooks include locked fields and approver requirements, When imported, Then locking and approver requirements are preserved and mapped to the Team's roles where possible, and unmapped roles are flagged in the summary.
- Given import completes, When viewing the Team library, Then imported items are searchable and filterable by the tag Starter Set and by import date.
Policy Toggles to Encourage or Require Standardized Playbooks
- Given an Admin sets the policy "Recommend standardized playbooks" for an office, When a user initiates an ad hoc custom playbook, Then a non-blocking warning is shown, and telemetry records the bypass with user, timestamp, and optional reason.
- Given an Admin sets the policy "Require standardized playbooks" for a team, When a user attempts to execute a non-standard playbook, Then execution is blocked, the user is directed to select an approved playbook, and the event is audit logged; Admins may bypass with a justification, which is recorded.
- Given a policy setting is toggled, When the change is saved, Then the change is effective immediately for new sessions and within 5 minutes for active sessions, and the change is audit logged.
- Given the policy "Require standardized playbooks" is active, When an Editor attempts to publish a playbook that is not categorized as standardized and approved, Then publishing is blocked until the playbook meets the policy and receives approval.
Outcome Analytics & Optimization
"As a broker-owner, I want analytics on playbook effectiveness so that we can optimize our response strategies and improve days-on-market."
Description

Deliver dashboards and exports showing objection frequency, playbook usage, time-to-resolution, task completion rates, messaging response rates, and impact on follow-up time and days-on-market. Enable filtering by team, office, listing attributes, and time windows. Support A/B testing of playbook variants and surface recommendations for top-performing responses. Capture agent feedback ratings on effectiveness to inform continuous improvement while honoring data retention policies.

Acceptance Criteria
Outcome Dashboard Shows Key Metrics
- Given I am an authenticated user with access to the selected office/team and a chosen date range, When I open Outcome Analytics, Then I see tiles/charts for Objection Frequency, Playbook Usage, Median Time-to-Resolution, Task Completion Rate, Messaging Response Rate, Delta Follow-up Time, and Delta Days-on-Market.
- Given identical filters, When I compare a dashboard metric to the corresponding export, Then the values match within defined rounding rules.
- Given no qualifying data in the selected range, When I load the dashboard, Then all metric tiles display 0 and an empty-state message is shown.
- Given listings excluded by the organization’s data retention policy, When metrics are computed, Then excluded listings are not included in any metric or visualization.
Filtering by Team, Office, Attributes, and Time Window
- Given I have permission to multiple teams/offices, When I apply Team, Office, Listing Attributes (e.g., price range, beds, city), and Date Range filters, Then all metrics, charts, and tables reflect only data matching those filters.
- Given multiple filters are applied, When I remove or change one filter, Then results recompute to reflect the new filter set.
- Given a filtered view, When I copy the URL and share it with another authorized user, Then opening the link pre-applies the same filters and shows the same results.
- Given I navigate between analytics tabs, When I return to the Outcomes view, Then my filters persist for the current session.
CSV/Excel Export of Analytics
- Given filters are set, When I export as CSV or Excel, Then the file contains only rows and aggregates that match the current filters and scope.
- Given a specific entity (e.g., listing, team) exists in the export, When I compare its metrics to dashboard drill-downs, Then values match within the same rounding rules and time zone.
- Given data older than the retention policy is excluded, When I export covering that date range, Then excluded rows are omitted from the file.
- Given special characters in names or notes, When exported, Then the file is UTF-8 encoded with a header row and properly quoted fields.
A/B Testing of Playbook Variants
- Given a playbook has two or more active variants, When A/B testing is enabled with a defined allocation (e.g., 50/50), Then new eligible objections are randomly assigned according to the allocation, and the assignment is persisted per objection.
- Given an active test, When viewing the A/B test results, Then the report displays per-variant sample size and outcome metrics (e.g., time-to-resolution, messaging response rate) and calculates observed lift.
- Given I pause/end the test and select a winner, When I confirm, Then new assignments stop and the selected variant becomes the default for future objections.
- Given test data is needed externally, When I export test results, Then each record includes variant, assignment timestamp, and outcome fields.
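One common way to satisfy "randomly assigned according to the allocation" and "persisted per objection" at the same time is deterministic hashing: hash the objection id with the test id so assignment is effectively random across objections yet stable on re-computation. This is an assumed technique for illustration, not TourEcho's documented method:

```python
import hashlib

def assign_variant(test_id: str, objection_id: str,
                   allocation: list[tuple[str, float]]) -> str:
    """allocation: [(variant, weight)] with weights summing to 1.0."""
    digest = hashlib.sha256(f"{test_id}:{objection_id}".encode()).digest()
    point = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    cumulative = 0.0
    for variant, weight in allocation:
        cumulative += weight
        if point < cumulative:
            return variant
    return allocation[-1][0]  # guard against float rounding at the top end
```

Persisting the result is still worthwhile (audit, winner selection), but the determinism means a lost record can be recomputed identically.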
Recommendations for Top-Performing Responses
- Given the organization-configured minimum sample size and evaluation window are met, When the system evaluates playbook performance, Then a Recommendations panel ranks variants by impact on time-to-resolution and response rate and indicates confidence.
- Given a recommendation is shown, When I click Apply as Default, Then the selected variant becomes the default and the action is audit-logged.
- Given a recommendation is not desired, When I dismiss it, Then it is removed from view and will reappear only if new evidence changes the ranking.
- Given insufficient data, When evaluation runs, Then no recommendation is produced and an explanatory message is displayed.
Agent Feedback Ratings Capture and Use
- Given an agent completes a playbook, When prompted, Then the agent can submit a 1–5 effectiveness rating with an optional comment.
- Given a rating is submitted, When viewing analytics, Then the average rating and distribution by playbook and variant are visible and filterable.
- Given the organization’s data retention policy is set to N days, When a rating exceeds N days, Then the rating (and comment, if applicable) is deleted or anonymized per policy and excluded from analytics and exports.
- Given ratings may need correction, When an agent edits their rating within the configured edit window, Then analytics reflect the latest rating.
Role-Based Access and Data Privacy in Analytics
- Given I am an agent, When I access Outcome Analytics, Then I can view analytics only for my own listings.
- Given I am a team lead, When I access Outcome Analytics, Then I can view analytics for my team(s) and cannot access other teams.
- Given I am an office admin or broker-owner, When I access Outcome Analytics, Then I can view analytics for all teams within my office(s).
- Given I export analytics, When PII could be exposed, Then only authorized roles can export, and exports exclude buyer/visitor PII while including permitted listing identifiers.

Task Chains

Defines dependencies and auto‑sequences work (e.g., paint → re‑stage → reshoot → re‑list). Calendar‑aware scheduling respects quiet hours and travel buffers to keep timelines realistic and prevent collisions.

Requirements

Visual Chain Builder
"As a listing agent, I want to quickly assemble a sequence of prep tasks with dependencies so that the plan reflects real‑world order and avoids missed steps."
Description

Provide a drag‑and‑drop editor to compose task chains as a dependency graph with serial and parallel steps. Each task supports duration, assignee/vendor, location, earliest start, due date, prerequisite artifacts (e.g., approved estimate), and blocking conditions. Enforce acyclic graphs with real‑time validation and highlight unmet dependencies. Display a preview timeline and critical path to help users see sequencing impacts before scheduling. Integrates with listing context (address, listing ID) and respects product roles and permissions for who can view/edit chains.
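The acyclic-graph enforcement above reduces to one check before committing a new edge u→v: reject it if v already reaches u, since the new edge would close a cycle. A plain depth-first sketch (function name is illustrative):

```python
def would_create_cycle(edges: dict[str, set[str]], u: str, v: str) -> bool:
    """edges maps each task to its direct successors. Returns True if adding
    the edge u -> v would make the dependency graph cyclic."""
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True  # v reaches u, so u -> v would close a cycle
        if node not in seen:
            seen.add(node)
            stack.extend(edges.get(node, ()))
    return False
```

Running this on every attempted connection keeps the graph acyclic by construction, which is what lets the builder give immediate feedback instead of validating after the fact.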

Acceptance Criteria
Compose Serial and Parallel Dependencies via Drag-and-Drop
- Given the Visual Chain Builder is opened for a listing, When the user drags a task from the palette onto the canvas, Then a new task node is created at the drop location with default values and a unique ID.
- Given two existing tasks A and B, When the user draws a connector from A to B and releases on B, Then a dependency edge A→B is created and B shows A as a prerequisite.
- Given one task A and downstream tasks B and C, When the user connects A→B and A→C, Then two parallel branches are created and both B and C list A as a prerequisite.
- Given upstream tasks A and C and downstream task D, When the user connects A→D and C→D, Then D requires completion of both A and C (AND semantics) and displays a count of 2 prerequisites.
- Given a composed graph, When the user saves the chain and reopens it from the same listing, Then all nodes, edges, and their canvas positions are restored exactly.
Prevent Cycles with Real-Time Validation
- Given an existing path A→B→C, When the user attempts to create an edge C→A, Then the edge is blocked, a non-intrusive error appears stating "Cannot create cyclic dependency," and the attempted connector is highlighted in red for 2 seconds.
- Given any attempted connection that would introduce a cycle, When the drop is released, Then feedback is shown within 250 ms and no cyclic edge is added to the graph.
- Given the graph after a blocked cyclic attempt, When the user opens the validation panel, Then no cycles are reported and the graph remains acyclic.
Edit and Validate Task Attributes
- Given a selected task node, When the user edits Duration, Then the value accepts minutes/hours/days, normalizes to minutes internally, and must be > 0; invalid input shows an inline error and prevents Save.
- Given a selected task node, When the user sets Earliest Start and Due Date/Time, Then Earliest Start must be ≤ Due; violations display an inline error and prevent Save.
- Given a selected task node, When the user sets Assignee/Vendor, Location, Prerequisite Artifacts, and Blocking Conditions, Then all entries are persisted; Duration is required, other fields are optional but if provided are saved and retrievable on reload.
- Given a task with invalid required fields, When the user clicks Save, Then Save is disabled and a validation summary lists the exact fields and nodes to correct.
Highlight Unmet Prerequisites and Blocking Conditions
- Given a task with a required Prerequisite Artifact defined but not attached, When the graph is displayed, Then the task node shows a warning badge and tooltip enumerating the missing items; the chain can be saved as a draft.
- Given a task with declared Blocking Conditions marked as required and not satisfied, When the user opens the validation panel, Then the task appears under "Unmet conditions" with a count and links to the task for remediation.
- Given the user attempts to hand off the chain to scheduling from the builder, When any task has unmet required artifacts or blocking conditions, Then the handoff is blocked with an error panel listing the specific tasks and unmet items.
Preview Timeline and Critical Path Recalculation
- Given a chain with durations and dependencies, When the builder preview is opened, Then a read-only timeline (Gantt) renders with computed earliest start/finish, slack, and a critical path overlay (zero-float tasks highlighted).
- Given the user changes a task duration or adds/removes a dependency, When the change is committed, Then the timeline and critical path recalculate and visually update within 500 ms.
- Given tasks A(2d)→B(1d) and A→C(5d) and C→D(1d), When the preview renders, Then the critical path is A→C→D and those nodes/edges are highlighted as critical.
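The worked example in the criteria above (A(2d)→B(1d), A→C(5d), C→D(1d)) can be verified with a forward pass over the dependency graph followed by a backward trace of the zero-float chain. This is an illustrative sketch with durations in days; the function and its argument shapes are assumptions, not the product's API.

```python
def critical_path(durations, edges):
    """Forward pass for earliest finish, then trace the zero-float chain.
    durations: {task: days}; edges: list of (prerequisite, dependent)."""
    preds = {t: [] for t in durations}
    for a, b in edges:
        preds[b].append(a)

    earliest_finish = {}
    def finish(t):
        if t not in earliest_finish:
            start = max((finish(p) for p in preds[t]), default=0)
            earliest_finish[t] = start + durations[t]
        return earliest_finish[t]

    end = max(durations, key=finish)  # task with the latest earliest finish
    # Walk backwards along the predecessor whose finish equals our start:
    # that predecessor is the binding (zero-float) one.
    path = [end]
    while preds[path[-1]]:
        t = path[-1]
        start = earliest_finish[t] - durations[t]
        path.append(next(p for p in preds[t] if earliest_finish[p] == start))
    return list(reversed(path)), earliest_finish[end]

path, total = critical_path({"A": 2, "B": 1, "C": 5, "D": 1},
                            [("A", "B"), ("A", "C"), ("C", "D")])
# path == ["A", "C", "D"], total == 8 (days)
```

B finishes on day 3 with slack, while A→C→D runs through day 8 with zero float, matching the highlighted path in the acceptance criteria.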
Role-Based View/Edit Permissions
- Given a user with Listing Agent or Broker Owner permissions on the listing, When they open the Visual Chain Builder, Then they can add/edit/delete tasks and dependencies and Save succeeds (HTTP 200).
- Given a user with view-only access (e.g., team member without edit rights), When they open the Visual Chain Builder, Then the canvas loads read-only, edit controls are disabled, and edit attempts return HTTP 403.
- Given a user without access to the listing, When they visit the builder URL, Then no chain data is returned and the user receives an access error without leaking listing metadata.
Listing Context Association and Persistence
- Given the builder is opened from Listing L, When the user creates tasks, Then each task is associated with Listing ID L and defaults Location to the listing address (editable per task).
- Given a saved chain for Listing L, When the user reopens the builder via L or a deep link containing L, Then the exact graph structure, node attributes, and canvas layout are restored.
- Given tasks and timeline data include date/times, When the chain is saved, Then all timestamps are stored in the listing’s default timezone and render consistently on reload.
Calendar‑Aware Auto‑Scheduler
"As a broker‑owner, I want the chain to auto‑populate on our calendars within acceptable hours so that timelines stay realistic without manual coordination."
Description

Implement a scheduling engine that calculates start/end times for each task based on durations, dependencies, user/vendor calendars, listing time zone, quiet hours, working hours, holidays, and lead times. Create tentative holds or confirmed events on connected calendars (Google/Outlook) and attach relevant details and attendees. Prevent overlaps across assigned resources and maintain a baseline schedule for variance tracking. Provide controls for scheduling horizon, hours of operation per stakeholder, and minimum gap rules. Store and update iCal/ICS links for external participants.
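The core placement rule the engine must apply, fitting a task's full duration inside allowed hours, can be sketched as follows. This is a deliberately simplified illustration assuming a single daily working window and no holidays or per-stakeholder calendars; the function name and defaults are hypothetical.

```python
from datetime import datetime, timedelta

def next_allowed_start(earliest, duration, work_start_hour=8, work_end_hour=20):
    """Return the earliest start at or after `earliest` such that the whole
    task fits inside the working window (i.e., outside quiet hours)."""
    candidate = earliest
    while True:
        day_open = candidate.replace(hour=work_start_hour, minute=0,
                                     second=0, microsecond=0)
        day_close = candidate.replace(hour=work_end_hour, minute=0,
                                      second=0, microsecond=0)
        if candidate < day_open:
            candidate = day_open            # too early: wait for the window
        if candidate + duration <= day_close:
            return candidate                # the whole task fits today
        # Would cross into quiet hours: roll to the next day's opening.
        candidate = day_open + timedelta(days=1)

start = next_allowed_start(datetime(2024, 5, 1, 19, 0), timedelta(hours=2))
# 19:00 + 2h would cross the 20:00 boundary, so the task moves to 08:00 on May 2.
```

A production scheduler would layer dependencies, lead times, holidays, and each stakeholder's calendar on top of this window-fitting primitive, in the listing's time zone.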

Acceptance Criteria
Dependency-Driven Sequencing in Listing Time Zone
- Given a task chain with durations and dependencies, When the auto-scheduler runs, Then each task start/end is calculated in the listing's time zone using predecessor completion plus defined lead/lag.
- Given a task lacks a duration or has circular dependencies, When scheduling is attempted, Then the scheduler blocks the run and surfaces a clear validation error listing the offending tasks.
- Given a daylight-saving transition occurs within the scheduled window, When times are computed, Then wall-clock times honor the listing time zone DST rules without shifting task durations.
Quiet Hours, Working Hours, and Holidays Compliance
- Given quiet hours, stakeholder working hours, and holiday calendars are configured, When tasks are scheduled, Then all task time blocks fall entirely within allowed windows and avoid holidays for each impacted stakeholder.
- Given no single allowed window can fit a task within the configured horizon, When scheduling is attempted, Then the scheduler returns an "unschedulable" status with the concrete reasons and next available date suggestions.
- Given a task would cross an allowed-window boundary, When scheduling, Then the task is moved to the earliest next window that can accommodate its full duration.
- Given an authorized override is applied, When scheduling during quiet hours/holidays, Then the scheduler places the task and records the override actor, time, and rationale in the audit log.
Resource Collision Prevention with Travel Buffers
- Given assigned resources have connected calendars, When scheduling tasks, Then no event overlaps existing external events or other chain tasks for the same resource, including enforcement of minimum gap and travel buffers.
- Given two on-site tasks at different addresses for the same resource, When placed sequentially, Then travel time for the route at the scheduled time is inserted as a buffer and respected.
- Given a new external calendar event creates a conflict post-sync, When detected, Then the scheduler flags the collision within 5 minutes and proposes at least one viable alternative slot respecting all constraints.
Calendar Event Creation: Holds vs Confirmations with Attendees
- Given org settings specify tentative holds or confirmed events, When tasks are scheduled, Then events are created in Google/Outlook with the correct status per setting.
- Given event creation succeeds, When attendees are added, Then assignees and vendor emails are invited, and the event description includes listing address, task instructions, task chain name, and a deep link to the TourEcho task.
- Given an event already exists for a task, When times or details change, Then the external event is updated in place (same event ID) without creating duplicates.
- Given the external calendar API returns a transient error, When creating/updating events, Then the scheduler retries with exponential backoff up to a configurable limit and raises an actionable error if exhausted, while preserving local intent for later resync.
Baseline and Variance Tracking on Reschedules
- Given an initial schedule is published, When saved, Then a baseline snapshot stores per-task start/end and a timestamp that is immutable.
- Given a task is rescheduled, When saved, Then variance from baseline is calculated in minutes and displayed at task and chain levels.
- Given multiple reschedules occur, When reviewing history, Then the baseline remains unchanged until an explicit re-baseline action is performed by an authorized user, and all prior baselines are archived with actor and timestamp.
- Given a re-baseline is executed, When confirmed, Then current planned times become the new baseline and variance resets to zero.
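Variance against the immutable baseline reduces to per-task deltas in minutes plus a chain-level delta on the overall finish; a minimal sketch, with the snapshot shape (`{task: (start, end)}`) an assumption:

```python
from datetime import datetime

def variance_minutes(baseline, current):
    """Per-task start variance and chain-level finish variance, in minutes.
    baseline/current: {task: (start, end)} with datetime values."""
    per_task = {
        task: int((current[task][0] - baseline[task][0]).total_seconds() // 60)
        for task in baseline
    }
    # Chain-level variance: shift of the latest planned finish.
    chain = int((max(end for _, end in current.values())
                 - max(end for _, end in baseline.values())).total_seconds() // 60)
    return per_task, chain

base = {"paint": (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 12, 0))}
plan = {"paint": (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 13, 0))}
per_task, chain = variance_minutes(base, plan)
# per_task == {"paint": 60}, chain == 60
```

Because the baseline snapshot never mutates, every reschedule recomputes variance against the same reference until an explicit re-baseline replaces it.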
Scheduling Controls: Horizon, Lead Times, and Minimum Gaps
- Given a scheduling horizon (e.g., 30 days) is configured, When the scheduler runs, Then no task start is placed beyond the horizon and any overflow is reported with reasons.
- Given task-specific lead times are set, When placing successors, Then the scheduler enforces the minimum wait after predecessor completion before scheduling.
- Given minimum gap rules per stakeholder are configured, When sequencing tasks on the same resource, Then gaps between adjacent tasks are at least the configured duration.
- Given any control is changed (horizon, hours, gaps), When the next scheduling run executes, Then the new values are applied and the change is recorded in the configuration audit log.
ICS Link Management for External Participants
- Given an external participant without a connected calendar is added to a task, When the event is created, Then an ICS (.ics) link is generated, stored with the participant, and included in the notification.
- Given a task is rescheduled or canceled, When changes are saved, Then existing ICS links are updated or a cancellation ICS is issued, and old links reflect the new state on fetch.
- Given a participant is removed from a task, When saved, Then their ICS access is revoked or marked canceled and the action is logged with actor and timestamp.
- Given a participant opens the ICS link, When viewed in a standard calendar client, Then the event shows the correct current start/end, time zone, location, and description.
Constraint Resolver & Conflict Prevention
"As a coordinator, I want clear conflict alerts with one‑click fixes so that I can resolve collisions and keep the timeline intact."
Description

Continuously detect constraint violations (calendar collisions, exceeded quiet hours, missed deadlines, unavailable resources) and surface clear alerts on the timeline. Offer one‑click resolutions with ranked alternatives that explain trade‑offs (e.g., extend deadline vs. shorten duration vs. reassign vendor). Support hard constraints (must finish by) and soft preferences (preferred vendor, preferred day). Allow locking tasks to fixed times, then reflow only flexible tasks around them. Provide a what‑if sandbox to preview impacts before committing changes.

Acceptance Criteria
Calendar Collision Detection & Ranked One-Click Resolutions
- When any schedule change is made, collisions (overlapping tasks for the same resource, location, or space) are detected within 2 seconds and flagged on the timeline with conflict badges.
- The alert displays conflict type, impacted tasks, overlap duration, and affected resources.
- At least 3 ranked resolution alternatives are shown when feasible, each with a plain‑language trade‑off summary (e.g., end‑date +1d, duration −2h, reassign to Vendor B).
- Ranking prioritizes options that satisfy all hard constraints and introduce zero new collisions; the ranking score is displayed.
- Selecting an alternative applies changes in ≤2 seconds and resolves the collision (no overlaps remain for the conflict scope).
- If fewer than 3 feasible alternatives exist, the UI states the reason and shows all available options.
Quiet Hours & Travel Buffers Preservation During Auto-Reschedule
- Given quiet hours are configured (e.g., 8:00 PM–8:00 AM), auto-rescheduling does not place any task start/end inside quiet hours unless the user explicitly overrides with a confirmation.
- Configured travel buffers are preserved between sequential tasks for the same resource and location changes; reschedules never compress buffers below configured minimums.
- If no feasible schedule exists without violating quiet hours, the system surfaces alternatives (shift dates, change resource) and labels any override option as violating a soft preference with clear impact.
- After applying any alternative, the timeline shows zero quiet-hour violations and all buffers at or above the minimum.
- Validation runs within 2 seconds after each schedule change and updates conflict indicators in place.
Hard Deadline Enforcement with Transparent Trade-Offs
- Tasks or chains marked with a hard "Must finish by" date are never auto-scheduled past that date.
- When conflicts arise, all proposed alternatives that meet the hard deadline are ranked above those that extend it; any deadline-extending option is explicitly labeled with the overage (e.g., +12h) and reason.
- The chosen alternative updates all dependent tasks while maintaining the original dependency logic (FS/SS/FF/SF) and lag times.
- Post-application validation confirms the final end date ≤ hard deadline and that zero hard-constraint violations remain.
- If no alternative can satisfy the hard deadline, the UI states this and requires explicit confirmation to accept a deadline extension.
Soft Preferences (Preferred Vendor/Day) in Ranking & Explanations
- When generating alternatives, solutions that honor all soft preferences (preferred vendor and preferred day) are ranked above those that honor some or none.
- Each alternative includes a preference-fit indicator (e.g., 2/2 met, 1/2 met) and reason tags (keeps preferred vendor, keeps preferred day).
- If honoring soft preferences would violate any hard constraint, those options are automatically deprioritized and labeled accordingly.
- Selecting any alternative that breaks a soft preference requires no extra confirmation and is logged with the broken preference for audit.
- The final schedule after application reflects the selected trade-offs and shows zero hard-constraint violations.
Lock Fixed Tasks; Reflow Only Flexible Tasks
- Tasks marked as Locked cannot be moved by auto-reschedule or dependency reflow; their start/end times remain unchanged.
- Locked tasks are visually indicated; attempting to move them prompts an unlock confirmation and does not proceed without consent.
- When upstream changes occur, only flexible tasks reflow to satisfy dependencies while preserving locked tasks and minimum buffers.
- After reflow, dependency constraints remain valid and no locked task time is altered (±0 minutes).
- Validation runs automatically and updates the timeline within 2 seconds of the triggering change.
What‑If Sandbox with Impact Preview & Atomic Commit
- Entering Sandbox mode isolates changes from the live schedule; no live data updates or notifications are sent until Commit.
- The sandbox shows an impact summary: projected chain end-date change, number of conflicts resolved/introduced, tasks moved, and preference/deadline impacts.
- The user can Commit or Discard with one click; Discard restores the original schedule exactly.
- Commit applies all sandbox changes atomically within 2 seconds and writes an audit log entry including the chosen alternative and trade-offs.
- Exiting the sandbox without committing leaves the live schedule unchanged and clears sandbox indicators.
Resource Unavailability Handling (Vendors/Agents Calendars)
- When a resource is marked unavailable (busy event, time off, blackout dates), affected tasks display unavailability conflicts on the timeline within 2 seconds.
- Generated alternatives include: reschedule around availability, reassign to an available equivalent resource, or adjust duration within allowed bounds.
- Each alternative shows impact on end date, travel buffers, and soft-preference fit; options violating hard constraints are excluded or clearly labeled.
- Applying an alternative resolves the unavailability conflict with zero new collisions and preserves minimum buffers.
- If reassignment is chosen, the newly assigned resource receives a notification and the prior resource is unassigned with an audit trail entry.
Travel Buffers & Location‑Aware Sequencing
"As an agent, I want travel time automatically accounted for between tasks so that people aren’t scheduled back‑to‑back across town."
Description

Calculate and insert travel buffers between successive tasks using address data and traffic‑aware ETA estimates. Support default buffer rules per vendor role and allow overrides per task. Ensure buffers also respect quiet hours and resource working windows. Visualize buffers on the timeline and factor them into conflict detection. Cache common route times by market to improve performance and fall back gracefully when mapping APIs are unavailable.
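The caching and fallback behavior described above can be sketched as a small cache keyed by market, origin/destination area, and a 15-minute departure bucket. This is an illustrative sketch only; the class name, key shape, and defaults are assumptions, and a real service would call a traffic-aware mapping API inside `fetch_live`.

```python
import time

class EtaCache:
    """Market-scoped ETA cache with TTL and graceful fallback."""
    def __init__(self, ttl_seconds=900, fallback_minutes=20):
        self.ttl = ttl_seconds
        self.fallback = fallback_minutes          # market-configured default
        self.store = {}                           # key -> (eta_minutes, cached_at)

    def _key(self, market, origin, dest, departure_ts):
        bucket = int(departure_ts // (15 * 60))   # 15-minute departure bucket
        return (market, origin, dest, bucket)

    def get(self, market, origin, dest, departure_ts, fetch_live):
        key = self._key(market, origin, dest, departure_ts)
        hit = self.store.get(key)
        if hit and time.time() - hit[1] < self.ttl:
            return hit[0], "cached"
        try:
            eta = fetch_live(origin, dest, departure_ts)
            self.store[key] = (eta, time.time())
            return eta, "live"
        except Exception:
            # Mapping API unavailable: a stale cached ETA beats the default.
            if hit:
                return hit[0], "fallback"
            return self.fallback, "fallback"
```

Returning the ETA source ("live"/"cached"/"fallback") alongside the value is what lets the timeline label fallback-derived buffers, as the acceptance criteria below require.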

Acceptance Criteria
Auto-Insertion of Traffic-Aware Travel Buffers Between Sequential Tasks
- Given two sequential tasks A and B with different addresses assigned to the same resource
- When the schedule is created or either task’s time or address changes
- Then the system requests a traffic-aware driving ETA for A→B at the planned departure time
- And inserts a travel buffer equal to the ETA rounded up to the nearest minute
- And sets B to start no earlier than A’s end plus the buffer
- And recalculates and updates the buffer within 5 seconds of any relevant change
Quiet Hours and Resource Working Windows Enforcement for Buffers
- Given listing quiet hours and the assigned resource’s working windows
- And a computed travel buffer between sequential tasks
- When the buffer or resulting task start would occur during quiet hours or outside working windows
- Then the system shifts the buffer and next task to the earliest allowed time that preserves task order and dependencies
- And does not schedule travel during quiet hours
- And flags a scheduling conflict if no feasible slot exists on the same day
Vendor Role Defaults with Per-Task Buffer Overrides
- Given a vendor role with a default travel padding in minutes
- When computing a travel buffer between tasks
- Then the effective buffer duration equals the ETA plus the role default, rounded up to the nearest minute
- And if a per-task buffer override (minutes) is set, the override replaces the role default for that task
- And changing or clearing the override immediately recomputes the schedule
- And an audit entry records who changed the override and when
Location-Aware Sequencing and Same-Address Optimization
- Given two or more tasks for the same resource on the same day without hard predecessor constraints
- When generating the task order
- Then the system sequences tasks to minimize total travel ETA while respecting declared dependencies, quiet hours, and working windows
- And sets the travel buffer to 0 minutes when consecutive tasks share the exact same address
- And does not reorder tasks that have explicit dependency links
Timeline Visualization and Export of Travel Buffers
- Given a timeline view containing tasks with travel buffers
- When the timeline is rendered
- Then each travel buffer is displayed as a distinct element labeled with “Travel” and its duration
- And a tooltip shows origin → destination, planned departure time, and ETA source (live/cached/fallback)
- And buffers are included in exported and printed schedules with start/end times
- And a user can toggle buffer visibility on the timeline without removing them from scheduling logic
Conflict Detection Including Travel Buffers
- Given a resource calendar with tasks and travel buffers
- When a new or updated task or buffer overlaps any existing task, buffer, quiet hours, or working-window boundary
- Then the system flags a conflict that treats buffers as occupied time
- And the conflict panel lists the overlapping buffer(s) with start/end and origin/destination
- And the system proposes an auto-resolution that shifts the affected task to the earliest non-conflicting time after the buffer while preserving dependencies
ETA Caching by Market and Graceful Fallback on Mapping API Failure
- Given repeated origin/destination pairs within a market
- When computing ETAs for travel buffers
- Then the system caches ETAs by market using keys of origin area, destination area, and 15-minute departure buckets with a configurable TTL
- And subsequent requests within the TTL use the cached ETA
- And if the mapping API errors or times out, the system uses the cached ETA if available, else a market-configured default travel time
- And buffers created under fallback are labeled as “fallback” in the UI and a telemetry event is logged
Auto‑Progress & Reflow
"As a listing agent, I want the plan to adjust itself when tasks finish early or late so that the rest of the chain stays accurate without re‑planning."
Description

When a task is completed (via manual check, mobile app, QR scan at the property, or vendor API), automatically unlock dependents and recalculate downstream schedules using actual start/finish times. Respect locked tasks and required approvals, propagate changes to calendars, and notify impacted stakeholders. Maintain an audit log of reflows with before/after times and reasons, and generate concise change summaries for agents and vendors. Provide guardrails to prevent cascading reschedules during quiet hours unless explicitly approved.
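The reflow step described above is essentially a downstream recompute seeded with actual finish times: completed tasks keep their real times, and everything dependent on them is pushed forward. A simplified sketch (no locks, approvals, or calendar sync); the function and its argument shapes are illustrative assumptions.

```python
def reflow(tasks, deps, actual_finish, buffer_minutes=0):
    """Recompute downstream finish times from actual finish times.
    tasks: {task: duration_minutes}; deps: list of (prerequisite, dependent);
    actual_finish: {task: finish_minute} for already-completed tasks."""
    preds = {t: [] for t in tasks}
    for a, b in deps:
        preds[b].append(a)

    finish = dict(actual_finish)  # completed tasks keep their real times
    def compute(t):
        if t not in finish:
            # Start after the latest predecessor finish, plus any buffer.
            start = max((compute(p) for p in preds[t]), default=0) + buffer_minutes
            finish[t] = start + tasks[t]
        return finish[t]

    for t in tasks:
        compute(t)
    return finish

# A finished at minute 125 (65 minutes late); B (30 min) and C (45 min) reflow.
times = reflow({"A": 60, "B": 30, "C": 45},
               [("A", "B"), ("B", "C")],
               actual_finish={"A": 125})
# times == {"A": 125, "B": 155, "C": 200}
```

A production engine would additionally skip locked tasks, park changes behind approval gates, and defer any slot that lands in quiet hours, per the criteria below.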

Acceptance Criteria
QR Completion Triggers Unlock and Reflow
- Given a task chain A -> B -> C with B dependent on A and C dependent on B, and quiet hours disabled
- And A has estimated finish 13:00 and B originally scheduled 14:00
- When A is completed via QR scan at 14:05
- Then B is automatically unlocked and rescheduled to start at or after 14:05 plus the configured buffer
- And C is reflowed based on B’s new times
- And agent and assigned vendor calendars are updated within 60 seconds
- And impacted stakeholders receive a notification with a change summary within 60 seconds
- And an audit log entry is written within 5 seconds capturing before/after times for B and C and reason "A completed via QR at 14:05"
Respect Locks and Approval Gates During Reflow
- Given task A -> B -> C where B is marked Locked and C requires Approval
- And A is completed at 10:00 by any method
- When the reflow engine evaluates dependencies
- Then no start/end times are changed for B or C
- And a pending reflow proposal with proposed new times is created
- And approval requests are sent to the designated approvers within 60 seconds
- And calendars remain unchanged until approval is granted
- And the audit log records the blocked reflow with reasons ["Locked","Approval Required"] and the proposed times
Quiet Hours Guardrail on Cascading Reschedules
- Given quiet hours are 20:00–08:00 in the listing’s timezone
- And completing A at 19:50 would cause B to start at 21:00
- When the reflow engine evaluates B
- Then the system does not auto-reschedule B into quiet hours
- And a proposal is created for the next allowable window (>= 08:00 next day) with suggested start/end
- And stakeholders are notified within 60 seconds to approve an override
- And no calendar changes occur until an explicit override is recorded
- And if an override is approved by an agent, B is scheduled immediately and calendars update within 60 seconds, with audit reason "Quiet Hours Override"
Vendor API Completion with Calendar Collision Avoidance
- Given a vendor marks "Reshoot Photos" complete via API at 11:20
- And "Re-list" depends on "Reshoot Photos"
- And the agent calendar has a non-flexible event 12:00–13:00
- When reflow runs
- Then "Re-list" is scheduled at the earliest available slot that respects configured lead time and avoids 12:00–13:00
- And required travel buffers are inserted before in-person tasks
- And no task overlaps any non-flexible calendar event
- And stakeholder calendars update within 60 seconds
- And notifications are sent within 60 seconds and an audit log entry is created within 5 seconds with before/after times and reason "Upstream task completed via vendor API"
Offline Mobile Completion with De-duplication
- Given an agent marks A complete offline at 10:12 on mobile
- And the same task A is also completed via QR at 10:13
- When the device syncs at 10:40
- Then the server records a single completion for A using the earlier actual finish time 10:12 per precedence rules
- And only one reflow is executed
- And all downstream tasks are re-scheduled from 10:12
- And the audit log contains one entry with sources ["mobile-offline","qr"], resolution "deduplicated: 10:12 retained", and before/after times for affected tasks
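The de-duplication rule above (earliest actual finish wins, all sources retained for audit) can be sketched as a simple reducer over completion events; the event shape and function name are illustrative, not the product's API.

```python
def dedupe_completions(events):
    """Collapse duplicate completion events per task, keeping the earliest
    actual finish time and recording every reporting source for the audit log.
    events: list of {"task": str, "time": ISO-8601 str, "source": str}."""
    merged = {}
    for e in sorted(events, key=lambda e: e["time"]):
        rec = merged.setdefault(e["task"], {"time": e["time"], "sources": []})
        rec["sources"].append(e["source"])
        # The earliest time is already retained: events are sorted ascending,
        # so setdefault stored the first (earliest) one.
    return merged

merged = dedupe_completions([
    {"task": "A", "time": "2024-05-01T10:13:00", "source": "qr"},
    {"task": "A", "time": "2024-05-01T10:12:00", "source": "mobile-offline"},
])
# merged["A"] == {"time": "2024-05-01T10:12:00",
#                 "sources": ["mobile-offline", "qr"]}
```

Because only one merged record per task survives, only one reflow fires, matching the "only one reflow is executed" criterion.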
Concise Change Summaries per Role
- Given any reflow affects one or more tasks
- When the reflow completes
- Then a role-specific summary is generated for agents and vendors within 2 minutes
- And each summary includes affected tasks, old/new start-end times, delta durations, and reason codes
- And summaries are limited to the top 5 changes with a link to the full audit log
- And delivery respects recipient channel preferences (email/push/SMS)
Template Library & SLAs
"As an agent, I want reusable task chain templates with typical durations so that I can launch a plan in seconds."
Description

Offer a library of reusable task chain templates (e.g., paint → stage → shoot → list) with configurable durations, dependencies, and constraints per property type and market. Allow teams to define default SLAs (target cycle times, response windows) and embed them into templates. Support cloning, versioning, and team‑level sharing with permissions. During instantiation, auto‑fill assignees from vendor preferences and listing context while allowing quick edits before scheduling.

Acceptance Criteria
Template Creation with Market-Specific Parameters
- Given a user with Template:Create permission, When they create a template, Then they can define name, description, property types (>=1), markets (>=1), task list with dependencies, task durations (minutes/hours/days), and task constraints (working hours, blackout dates, required lead time, travel buffer).
- Given the task network, When the user attempts to save, Then circular dependencies are detected and prevented with an error that identifies the conflicting tasks.
- Given durations and dependencies, When saving, Then the system computes and displays the critical path and estimated cycle time.
- Given required fields are missing or units are invalid, When saving, Then the save is blocked and inline validation messages are shown.
- Given markets and property types are selected, When the template is saved, Then it is filterable by those attributes in the library search.
Default SLA Definition and Embedding
- Given a user with Team:ManageSLAs permission, When they define default target cycle time and response windows per role/task type, Then values are stored with units and timezone and validated against allowed ranges.
- Given a template in draft, When Apply Team Defaults is selected, Then SLA fields populate from team defaults and can be overridden per task.
- Given SLA values on a template, When the template is published, Then the SLA baseline is locked for that version and is visible in template metadata.
- Given team defaults change after publication, When viewing an older published template, Then no retroactive change occurs and a banner offers Sync to create a new version.
Template Cloning and Versioning
- Given a published template v1.0, When a user with Template:Edit selects Clone, Then a new draft template is created with identical tasks, dependencies, constraints, assignee rules, and SLA settings.
- Given a draft template with changes, When Publish is clicked, Then the version increments (e.g., v1.1), a change summary is required, and the previous version remains read-only and selectable.
- Given any template version, When viewing history, Then an audit log shows who made changes, when, and what fields changed.
- Given a draft with unresolved validation errors, When Publish is attempted, Then publication is blocked with specific error messages.
Team Sharing and Permissions
- Given a template owner, When they set sharing scope to Private, Team, or Specific Roles/Users, Then only the selected audience can discover and apply the template.
- Given a user without access, When they attempt to view or apply the template by direct URL, Then a 403 error is returned and the event is logged.
- Given a user with Viewer permission, When browsing the library, Then they can view metadata and apply the template but cannot edit or publish.
- Given a user with Editor permission, When accessing a draft, Then they can edit but cannot publish unless they also have Publisher permission.
Instantiation Auto-Fill Assignees from Vendor Preferences
- Given a listing with property type, market, and vendor preferences, When a user selects Instantiate for a template, Then tasks auto-assign to preferred vendors that match role and market and are active.
- Given a role without a preferred vendor, When instantiating, Then the task assigns to the team default for that role or remains unassigned and is flagged Needs Assignee.
- Given conflicting preferences across multiple vendors, When instantiating, Then the system selects the highest-priority vendor per team rules and shows the selection rationale.
- Given SLA response windows on tasks, When instantiating, Then each assignee receives a respond-by and due-by timestamp computed in the listing's timezone.
Pre-Scheduling Quick Edits Before Scheduling
- Given an instantiated chain in draft, When the user edits assignees, durations, or dependencies inline, Then changes are validated in real time and the recalculated cycle time is displayed.
- Given edits that violate constraints (e.g., create a circular dependency or a negative duration), When saving, Then the save is blocked with targeted error messages.
- Given edits that push the chain beyond the embedded SLA target cycle time, When saving, Then a warning is shown with the delta and a Proceed anyway option for users with override permission.
- Given all required fields are valid, When Save & Schedule is clicked, Then the chain transitions to the scheduling step with no blocking errors.
SLA Compliance Tracking and Alerts from Template Defaults
- Given a scheduled chain created from a template with SLAs, When tasks start and complete, Then the system computes on-time/late status per task and overall cycle time versus the SLA target.
- Given predictive risk based on remaining durations and constraints, When the forecast exceeds SLA thresholds, Then an At Risk alert is sent to the owner and assignees with the projected miss.
- Given a task exceeds its response window, When the threshold passes, Then the task status automatically marks Response Late and appears in SLA breach reports.
- Given a template version is updated, When new chains are created, Then compliance calculations use the SLAs from the version used to instantiate, not the latest template.
Stakeholder Notifications & Approvals
"As a coordinator, I want vendors to confirm their slots and approve changes so that dependencies won’t slip due to miscommunication."
Description

Provide multi‑channel notifications (in‑app, email, SMS) for scheduling, changes, and upcoming tasks that honor quiet hours and user preferences. Include RSVP/confirm/decline for vendors, with required approvals gating the release of dependent tasks. Escalate when confirmations are missing near deadlines and summarize daily changes in a digest. Track confirmation status on the timeline and expose a single‑click rebook flow when a party declines.

Acceptance Criteria
Quiet Hours-Compliant Multi-Channel Alerts for Upcoming Task
- Given a user has configured notification channels (in-app, email, SMS) and quiet hours
- When a task is scheduled or updated and its start time is within the next 48 hours
- Then notifications are queued or sent only via the user-enabled channels and never delivered during quiet hours
- And if a trigger occurs during quiet hours, the message is queued and delivered within 5 minutes after quiet hours end
- And each delivery is recorded with timestamp, targeted channels, and delivery outcome per recipient
Vendor SMS RSVP with Confirm/Decline and Timeline Update
- Given a vendor receives an RSVP request via SMS with reply keywords and links
- When the vendor replies CONFIRM or taps the Confirm link
- Then the stakeholder status updates to Confirmed with timestamp and channel=SMS, and the timeline badge updates within 5 seconds
- When the vendor replies DECLINE or taps the Decline link
- Then the stakeholder status updates to Declined, a rebook button appears on the task in the timeline, and an optional decline-reason prompt is offered
- And all dependent tasks are set to On Hold until rebooked or an alternate resource is confirmed
Approval Gate Blocks Dependent Task Release
Given a task chain includes a dependency requiring Seller/Broker approval When the approval has not been captured via any supported channel (in-app, email approve button, SMS keyword) Then all dependent tasks remain in On Hold and cannot be auto-scheduled or started When the required approval is captured Then dependent tasks transition to Ready and are scheduled per chain rules, and stakeholders receive release notifications honoring quiet hours And the approver identity, timestamp, and channel are logged to the activity feed
Escalation for Missing Confirmations Near Deadline
Given a task requires vendor confirmation and has a start time T When the confirmation is still Pending at T-24h Then escalation #1 is sent to the vendor and notifications are sent to the assigned agent and backup contact per preferences When the confirmation is still Pending at T-4h Then escalation #2 is sent And all escalations honor recipient quiet hours; if an escalation time falls inside quiet hours, it is queued and delivered within 5 minutes after quiet hours end or at least 30 minutes before T, whichever comes first And each escalation is logged and visible on the task activity feed
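The "whichever comes first" rule for escalations that fall inside quiet hours reduces to a `min()` over two candidate times. Function name and signature are illustrative; `quiet_end` is assumed to be `None` when the scheduled moment is outside quiet hours.

```python
from datetime import datetime, timedelta
from typing import Optional

def escalation_send_time(scheduled_at: datetime, task_start: datetime,
                         quiet_end: Optional[datetime]) -> datetime:
    """When to actually deliver an escalation.

    If the scheduled moment is outside quiet hours (quiet_end is None),
    send on schedule. Otherwise send at the earlier of: quiet hours end,
    or 30 minutes before the task start T, per the criterion above.
    """
    if quiet_end is None:
        return scheduled_at
    return min(quiet_end, task_start - timedelta(minutes=30))
```

So an escalation due at 03:00 for a 07:15 task would go out at 06:45 even though quiet hours run until 07:00.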
Daily Digest of Scheduling Changes
Given a user has Daily Digest enabled When the local time reaches 07:00 for the user Then a single digest is sent summarizing the past 24 hours of activity: tasks created, rescheduled, canceled, confirmed, declined, approvals captured, and escalations sent And the digest includes counts and per-item links back to the timeline And delivery respects channel preferences and quiet hours; if 07:00 is within quiet hours, deliver within 5 minutes after quiet hours end And users can opt out or change the delivery time in settings
Single-Click Rebook Flow on Decline with Calendar Awareness
Given any required stakeholder has Declined a task When the agent clicks the Rebook button on the task in the timeline Then a reschedule modal opens within 2 seconds showing at least 5 next viable time slots that respect participant availability, quiet hours, and travel buffers And selecting a suggested slot and clicking Send completes the rebooking without additional required fields And the system updates the task start time, re-issues invitations, and updates dependent tasks accordingly, logging all changes
Timeline Status Indicators for Stakeholder Confirmations and Approvals
Given a task has multiple stakeholders and a required approval gate When any stakeholder’s confirmation or the approval status changes Then the task shows per-stakeholder badges (Pending, Confirmed, Declined, Escalated) and an overall gate state (Ready, On Hold) on the timeline And UI updates appear within 5 seconds of the change And badges include accessible labels and tooltips with last update time and channel And users can filter the timeline to show only tasks with Pending or Declined statuses

Seller Progress

Provides a seller‑friendly view of tasks, costs, and statuses with quick approvals for budget or scope. Builds trust, reduces back‑and‑forth, and keeps everyone aligned on what’s happening next.

Requirements

Seller Progress Dashboard
"As a home seller, I want a simple dashboard that shows what’s done, what’s next, and what it costs so that I can stay informed and reduce back-and-forth with my agent."
Description

A unified, seller‑friendly dashboard that consolidates all listing activities into a single view, including task progress, upcoming milestones, pending approvals, cost-to-date vs. budget, and next recommended actions. It pulls real-time data from TourEcho’s showing schedule and AI feedback summaries to highlight top objections and their impact, enabling data-backed decisions. The dashboard supports multiple sellers, is mobile-optimized, offers configurable widgets, and provides deep links into specific tasks, approvals, and documents. It refreshes automatically as tasks complete or budgets change, ensuring both agent and seller see the same source of truth.

Acceptance Criteria
Unified Seller Dashboard Overview
- Given an authenticated seller viewing a listing’s dashboard, When the page loads, Then the dashboard displays in one view: task progress (percent complete), upcoming milestones (dates), pending approvals (count and list), cost-to-date vs. budget (amounts and variance %), and next recommended actions (up to 5 items).
- Given live source data exists, When the dashboard loads, Then displayed figures match the system of record within ±1% or ±$1, whichever is smaller.
- Given a section has no data, When the dashboard loads, Then that section shows an empty state with explanatory text and a CTA to configure or connect data (no errors shown).
Real-time Refresh and Agent-Seller Sync
- Given a task status changes or a budget entry updates, When the change is saved in TourEcho, Then the seller dashboard reflects the change within 15 seconds without full page reload.
- Given an agent and a seller both have the dashboard open, When one approves a pending item, Then the other view updates within 15 seconds to show the approval and updated counts.
- Given the realtime channel disconnects, When data changes occur, Then the dashboard shows a reconnecting indicator and applies queued updates within 5 seconds after reconnection or immediately on manual refresh.
AI Feedback Insights and Objection Impact
- Given at least 5 showings occurred in the last 14 days, When the dashboard loads, Then it displays the top 3 objections with sentiment trend (7-day) and an estimated impact score on days-on-market or price, with an info tooltip describing methodology.
- Given an objection is displayed, When the user selects View sources, Then the system reveals underlying showing feedback entries with timestamps and anonymized buyer-agent IDs.
- Given there are no showings in the last 7 days, When the dashboard loads, Then the insights module shows “Insufficient recent data” and hides impact scores while still showing the lifetime top objection if available.
Quick Approvals for Budget and Scope
- Given a pending approval with amount ≤ the configured auto-approval threshold, When the seller taps Approve, Then the approval is recorded, the budget remaining updates accordingly, and a confirmation banner appears within 2 seconds.
- Given an approval is completed, When viewing the audit log, Then it shows approver name, timestamp (UTC), amount, and before/after budget values.
- Given an approval exceeds the threshold or is missing required documentation, When the seller taps Approve, Then the action is blocked with a specific reason and a link to the required document(s).
Multi-Seller Access and Permissions
- Given a listing with multiple sellers, When Seller A reorders or hides widgets, Then Seller B’s layout remains unchanged (customization is per user per listing).
- Given a listing with multiple sellers, When any seller approves an item, Then the approval applies to the listing and the audit log records which seller approved.
- Given a seller lacks document permissions, When they open a deep link to a restricted document, Then access is denied with a request-access flow and no document contents or metadata are exposed beyond title and listing name.
Mobile-Optimized Experience
- Given a modern smartphone on 4G, When the dashboard loads cold, Then Largest Contentful Paint ≤ 2.5s and total compressed JS payload ≤ 300 KB.
- Given viewport widths 360–768 px, When rendering the dashboard, Then all widgets are legible, tap targets are ≥ 44×44 px, and no horizontal scrolling is required.
- Given a deep link is opened from SMS/email on mobile, When the app is installed, Then it opens the in-app screen; When not installed, Then it opens the mobile web view for the same item after authentication.
Configurable Widgets and Deep Links
- Given a seller customizes widget order and visibility, When they revisit the dashboard on any device, Then the layout persists for that seller and listing until reset.
- Given the seller selects Reset to default, When they confirm, Then the layout reverts to system default and all customizations are cleared.
- Given a deep link URL to a task, approval, or document, When opened by an authenticated user with access, Then it navigates directly to the specific item with correct listing context; When the link is expired or item missing, Then the user is redirected to the dashboard with an explanatory error banner.
One‑Tap Budget Approval Flow
"As a seller, I want to quickly approve or decline proposed expenses with context so that decisions don’t stall progress or create confusion."
Description

Streamlined approval workflow for budget items and scope changes that presents clear context: item description, rationale (including buyer feedback signals), cost estimate or quote options, schedule impact, and risk if not approved. Sellers can approve or decline with one tap, add a note, and e-sign; the system records timestamps, updates task scope automatically, adjusts budget totals, and notifies stakeholders. Includes safeguards like approval expiry, required fields for material changes, and optional counter-proposals. Integrates with payment links or escrow instructions when applicable.

Acceptance Criteria
Approve Budget Item via Mobile Push Notification
Given a seller receives a push notification for a single budget approval that includes item description, rationale with at least one buyer feedback signal, cost estimate or selected quote, schedule impact, and risk-if-not-approved, When the seller taps Approve on the deep link, Then the app opens the approval sheet with the request context and visible e-sign field.
Given the approval sheet is displayed, When the seller taps Approve and completes a valid e-sign, Then the system records an approval timestamp (UTC), signer identity, and signature hash, updates the related task scope to Approved, and adjusts project budget totals within 2 seconds.
Given the approval is recorded, Then stakeholders (listing agent and vendor if applicable) receive notifications within 10 seconds, and Seller Progress shows an Approved badge for the item.
Given network retries or duplicate taps, Then the action is idempotent and produces a single approval record with no duplicate budget adjustments.
Decline Budget Item with Note from Seller Progress
Given a pending budget approval request is open in Seller Progress, When the seller selects Decline, Then a note field is required (minimum 10 characters) and e-sign is optional. Given the seller submits a Decline with a valid note, Then the system records a decline timestamp (UTC) and reason, leaves task scope unchanged, and does not alter budget totals. Then the listing agent and vendor (if applicable) are notified within 10 seconds with the seller’s note included, and the request displays a Declined state and is no longer actionable.
Approval Request Expiry Handling
Given an approval request has an expiry (e.g., 72 hours) configured, When current time exceeds the expiry, Then the request shows Expired, Approve/Decline actions are disabled, and an agent option to Resend is available. Given a user attempts to approve via an expired deep link, Then the API returns 410 Gone, the UI displays Request expired, and no scope or budget changes occur. Then the expiry event is written to the audit log with timestamp and request identifier.
Seller Counter‑Proposal to Quote
Given counter‑proposals are enabled on a request, When the seller selects Counter, Then the UI requires at least one of: alternate amount, alternate vendor/quote selection, or alternate schedule date, and allows an optional note. Then the request enters Pending Counter, a review task is created for the agent, and no scope or budget totals are changed until acceptance. Given the agent accepts the counter, Then the system updates task scope and budget totals to the counter values, timestamps the decision, and notifies all parties; if rejected, Then the request returns to Pending with agent rationale attached.
Enforce Required Fields for Material Scope Changes
Given an approval request is flagged as a Material Change, When the agent submits it for seller approval, Then the system validates presence of buyer feedback signal(s), schedule impact (in days), risk-if-not-approved text, and at least one cost estimate or quote. When any required field is missing, Then submission is blocked with inline error messages and the seller cannot be notified until issues are resolved. Given a Material Change request is presented to the seller, Then Approve requires e-sign and Decline requires a reason; both actions capture timestamps and user identity.
Payment Link or Escrow on Approval
Given an approval request requires payment, When the seller approves and e-signs, Then the app immediately presents the configured payment link or escrow instructions. Given payment is completed via link, Then the payment confirmation ID is attached to the approval record, the task moves to Funded, and vendors are notified; if payment fails or is abandoned, Then status is Awaiting Payment and automated reminders are scheduled per configuration. Given escrow is used, Then the system waits for external confirmation before moving to Funded and prevents duplicate payment link issuance. All payment events are recorded in the audit trail and do not change scope beyond the approved values.
Prevent Conflicting Approvals and Stale Data
Given two active sessions view the same approval request, When one session approves or declines, Then the other session reflects the new state within 5 seconds and disables further actions. Given any subsequent attempt to act on a finalized request, Then the API returns a safe no-op with an Already approved/declined message and no duplicate budget or scope adjustments. Budget total adjustments and scope updates execute in a single transaction; on partial failure, Then neither change is committed, an error is shown, and the system retries up to 3 times before logging a failure event.
Task & Scope Management
"As a listing agent, I want to create and manage scoped tasks with owners and deadlines so that the seller knows what to do and we hit our milestones."
Description

Structured task management tailored to listing prep and marketing phases (e.g., pre-list, on-market, offer negotiation), with assignees (agent, seller, vendor), due dates, dependencies, status states, and cost estimates. Supports checklists, attachments, and instructions; task templates by property type; automatic status roll-ups to show phase completion. Scope changes are tracked and linked to approvals, with inline indicators when a task requires budget authorization. Integrates with showing calendars to avoid scheduling conflicts and updates the Seller Progress dashboard in real time.

Acceptance Criteria
Create Tasks from Property-Type Template with Phase Roll-Up
Given a listing with a property type and an available template for that type When the agent applies the template Then tasks are created under their designated phases with titles, assignees (agent/seller/vendor), due dates (relative offsets resolved to calendar dates), cost estimates, instructions, and checklist items as defined in the template And the number of tasks created equals the template definition And initial phase progress shows 0% complete if all new tasks are Not Started And phase progress updates to (Done tasks / Total tasks in phase) × 100% when task statuses change And all created tasks appear on the Seller Progress dashboard within 3 seconds
Task Detail: Checklist, Attachments, and Instructions
Given an existing task When the agent adds checklist items Then the task displays the checklist with item count and 0% completion When a user marks checklist items complete Then the checklist completion percentage equals (completed/total) × 100% And if all checklist items are complete and no dependencies are open, the user can set the task to Done Given allowed attachment types PDF, JPG, PNG, DOCX up to 25 MB each When a user uploads an attachment Then the file is stored, virus-scanned, associated to the task, and is downloadable by assigned roles And image/PDF thumbnails are generated and viewable When instructions are entered as rich text Then they render without truncation and are readable on the task detail view
Dependencies and Status Transitions Enforcement
Given Task B depends on Task A When Task A is not Done and a user attempts to set Task B to In Progress or Done Then the system keeps Task B as Blocked and shows “Blocked by Task A” When Task A transitions to Done Then Task B automatically unblocks to Not Started
Given allowed status transitions: Not Started → In Progress → Done; Not Started ↔ Blocked; In Progress ↔ Blocked; Any → Waiting Approval → Approved/Rejected When a user attempts an invalid transition Then the system prevents the change and displays a descriptive validation message
When a task is past its due date and is Not Started or In Progress Then the task is flagged Overdue and appears in overdue counts for that phase
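One possible encoding of the transition rules above is a lookup table plus a dependency guard. State names follow the criteria; treating Approved/Rejected as resolutions of Waiting Approval only (rather than free-standing states) is an assumption.

```python
# Allowed transitions per the criteria: Not Started -> In Progress -> Done;
# Not Started <-> Blocked; In Progress <-> Blocked; Any -> Waiting Approval,
# which resolves to Approved or Rejected.
ALLOWED = {
    "Not Started": {"In Progress", "Blocked", "Waiting Approval"},
    "In Progress": {"Done", "Blocked", "Waiting Approval"},
    "Blocked": {"Not Started", "In Progress", "Waiting Approval"},
    "Waiting Approval": {"Approved", "Rejected"},
    "Done": {"Waiting Approval"},
}

def can_transition(current: str, target: str, open_dependencies: bool) -> bool:
    """Validate a requested status change; tasks with open upstream
    dependencies stay Blocked ("Blocked by Task A")."""
    if open_dependencies and target in {"In Progress", "Done"}:
        return False
    return target in ALLOWED.get(current, set())
```

An invalid transition would surface the descriptive validation message rather than silently no-op.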
Budget Authorization Indicator and Approval Flow
Given a task has a non-zero cost estimate or an attached vendor quote When the cost is saved Then the task displays a Needs Budget Approval indicator for the seller and moves to Waiting Approval And the seller receives an approval request When the seller selects Approve and confirms Then the task status becomes Approved, an approval record (approver, timestamp, amount) is saved, and the budget total updates within 3 seconds When the seller selects Decline Then the task status becomes Rejected, a decline reason is required, and the task remains blocked from starting And tasks in Waiting Approval cannot transition to In Progress or Done
Scope Change Tracking Linked to Approvals
Given a baseline scope exists for the listing When a task is added or removed, or its cost estimate changes by ≥10% or ≥$100 (whichever is greater) Then a scope change record is created with before/after values, author, timestamp, and rationale And if the scope change adds work or increases cost, it requires seller approval When the seller approves the scope change Then linked tasks and budgets update to the new values and the approval is linked to the scope change record When the seller declines the scope change Then proposed changes are not applied and affected tasks revert to the baseline state
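The "≥10% or ≥$100 (whichever is greater)" trigger above is a `max()` over two thresholds. A sketch, assuming costs are plain dollar amounts:

```python
def is_material_change(baseline: float, new: float) -> bool:
    """A cost change creates a scope change record when the delta meets
    the larger of 10% of the baseline or a flat $100, per the criteria."""
    threshold = max(0.10 * baseline, 100.0)
    return abs(new - baseline) >= threshold
```

So a $2,000 estimate must move by $200 to count, while a $500 estimate must move by $100.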
Calendar Conflict Detection with Showing Schedule
Given the property has confirmed showing events When a user schedules a task with a time window that overlaps a showing Then the system blocks saving and presents the next three available conflict-free slots When a new showing is added that overlaps an already scheduled task Then the system flags the conflict within 10 seconds, notifies the task assignee and the agent, and suggests reschedule options And when the task or showing is rescheduled to a non-overlapping time, the conflict indicator is removed immediately
Real-Time Seller Progress Dashboard Update
Given the Seller Progress dashboard is open When a task is created, updated, or deleted; a status changes; a cost estimate changes; or an approval outcome is recorded Then the dashboard reflects the change within 3 seconds without manual refresh And phase roll-up percentages, budget totals, and approval badges recalculate accurately And the activity timeline entry includes actor, timestamp, and before/after values for the change
Approval Audit Trail & Change History
"As a seller, I want a transparent history of decisions and costs so that I can trust the process and reference why changes were made."
Description

An immutable, filterable log that captures every approval, decline, scope modification, budget adjustment, and task status change with who/when/what details. Displays inline history on each task and approval card and supports export to PDF for compliance or brokerage auditing. Prevents tampering via role permissions and integrity checks while maintaining readability for non-technical sellers. Exposes API endpoints/webhooks for back-office systems to reconcile decisions or archive records.

Acceptance Criteria
Event Capture Completeness
Given any action among [approval_granted, approval_declined, scope_modified, budget_adjusted, task_status_changed] When the action is committed Then a new audit entry is recorded with fields: id, listing_id, event_type, actor_user_id, actor_name, actor_role, subject_type, subject_id, previous_value, new_value, reason_comment (nullable), timestamp (ISO 8601 with timezone), request_id, source (UI/API/Webhook), correlation_id (nullable).
Given an audit entry exists When retrieved via UI or API Then previous_value and new_value are preserved exactly and fields designated PII are redacted per policy (e.g., email masked to a****@domain.com).
Given a batched change (e.g., multi-task status update) When processed Then one parent audit entry is created and child entries per affected item are created and linked via the same correlation_id.
Immutability & Tamper Prevention
Given an existing audit entry When any user attempts to update or delete it via UI or API Then the request is denied with 403 Forbidden and a security audit event is recorded indicating attempted mutation (actor, action, timestamp), and the original entry remains unchanged.
Given audit entries are stored When written to persistence Then each entry includes content_hash (SHA-256 over canonicalized fields) and hash_chain_prev (hash of prior entry for the same listing), enabling chain verification.
Given the nightly integrity job When it runs Then 100% of chains are verified; any break triggers an alert and flags the affected listing in the admin dashboard with details of the first failing link.
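The content_hash / hash_chain_prev scheme might look like the sketch below. The canonicalization (sorted-key JSON) and the choice of which fields are excluded from hashing are assumptions; the `make_entry` helper is purely illustrative.

```python
import hashlib
import json
from typing import Optional

def content_hash(entry: dict) -> str:
    """SHA-256 over a canonical (sorted-key, compact) JSON rendering."""
    canonical = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def make_entry(body: dict, prev_hash: Optional[str]) -> dict:
    """Stamp chain fields onto a new audit entry at write time."""
    return {**body, "hash_chain_prev": prev_hash, "content_hash": content_hash(body)}

def verify_chain(entries: list) -> bool:
    """Nightly integrity check for one listing's ordered entries: each
    entry must reference the previous entry's content_hash (None for
    the first) and its own hash must match its content."""
    prev = None
    for e in entries:
        if e["hash_chain_prev"] != prev:
            return False
        body = {k: v for k, v in e.items()
                if k not in ("content_hash", "hash_chain_prev")}
        if e["content_hash"] != content_hash(body):
            return False
        prev = e["content_hash"]
    return True
```

A break anywhere in the chain makes `verify_chain` fail, which would drive the alert and the admin-dashboard flag described above.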
Filterable, Searchable Audit Log
Given a listing’s audit log When the user applies filters by event_type, actor_user_id, actor_role, subject_type, subject_id, date_range, and contains(text) Then the results reflect filters exactly, show total_count, and first page returns within 1 second for up to 10,000 records.
Given filtered results When the user sorts by timestamp (asc/desc) or actor_name Then sorting is correct and stable across pages.
Given the result set exceeds 50 items When paginating Then default page_size is 50 (configurable up to 200), and next/prev navigation preserves filters and sort via cursor; no duplicates or gaps across pages.
Inline History on Task and Approval Cards
Given a task detail view When the view loads Then the 5 most recent related audit entries are displayed inline with human-readable summaries (e.g., “Alex P. changed status from In Review to Approved”), relative timestamps (e.g., “2h ago”), and tooltips showing exact ISO timestamps.
Given an approval card When the user clicks “View history” Then the audit log opens pre-filtered to that approval’s subject_type and subject_id.
Given the inline history list When evaluated for accessibility Then all items have semantic roles/labels readable by screen readers and meet WCAG AA contrast ratios.
PDF Export for Compliance
Given a user with export permission When exporting “Full log” or a filtered subset Then the generated PDF includes: listing_id, listing address, seller name, exporter_user_id, export timestamp (ISO 8601 with timezone), applied filters summary, page numbers, total_entries, and each entry with id, timestamp, event_type, actor (name & role), subject, previous_value, new_value, reason_comment.
Given a PDF is generated When the file is produced for up to 10,000 entries Then generation completes within 10 seconds and the footer includes a SHA-256 checksum of the source JSON and a signature block identifying the exporting organization.
Given a PDF export completes When recorded Then an audit entry is added capturing exporter_user_id, filter parameters, total_entries, and checksum.
API Endpoints & Webhooks for Back Office
Given an API consumer When calling GET /audit-entries with supported filters and pagination Then the API returns 200 with items[], total_count, next_cursor, and enforces RBAC; p95 latency ≤ 500 ms for typical queries (≤ 2k search span).
Given a new audit event occurs When webhooks are configured Then the system POSTs to subscribers with payload {id, listing_id, subject, event_type, previous_value, new_value, actor, timestamp} and an HMAC-SHA256 signature header; on 5xx/timeout, retry with exponential backoff for up to 24 hours; deduplicate via idempotency key.
Given sustained high request volume When rate limits are exceeded Then responses are 429 with Retry-After and other tenants are unaffected (no error rate increase >1%).
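The webhook signing and backoff behavior might be sketched as below. The canonicalization, the 30-second base delay, and the function names are assumptions (the criteria only fix HMAC-SHA256 and the 24-hour retry budget); `hmac.compare_digest` is used on the receiving side to avoid timing attacks.

```python
import hashlib
import hmac
import json

def sign_payload(payload: dict, secret: bytes) -> str:
    """HMAC-SHA256 over a canonical JSON body; sent as a signature header."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_signature(payload: dict, secret: bytes, signature: str) -> bool:
    """Subscriber-side check with a constant-time comparison."""
    return hmac.compare_digest(sign_payload(payload, secret), signature)

def retry_delays(base_seconds: int = 30) -> list:
    """Exponential backoff schedule (doubling) until the cumulative
    delay would exceed the 24-hour retry budget."""
    delays, total, d = [], 0, base_seconds
    while total + d <= 24 * 3600:
        delays.append(d)
        total += d
        d *= 2
    return delays
```

Deduplication by idempotency key would sit in front of the POST loop so retried deliveries never create duplicate records downstream.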
Role-Based Access & Redaction
Given a seller user When viewing the audit log Then entries are visible but fields marked sensitive (e.g., internal notes, private cost breakdown) are redacted and the Export to PDF action is hidden.
Given a broker-owner or listing agent with Auditor permission When accessing audit logs and export Then full, unredacted entries are visible and PDF export is enabled.
Given an unauthorized user or role When attempting to view or export the audit log Then access is denied with 403 and a security audit event is recorded with actor, attempted resource, and timestamp.
Feedback‑to‑Task Conversion
"As a listing agent, I want to turn buyer feedback into clear action items so that we can address objections quickly and improve outcomes."
Description

Transforms AI-summarized buyer sentiment and room-level objections into actionable, suggested tasks with auto-assigned priority, estimated impact on showability, and cost ranges. Users can accept, edit, or dismiss suggestions; accepted items convert to scoped tasks and, if needed, prefill an approval request. Tracks before/after metrics to show whether completing the task improved feedback or reduced time-to-offer, closing the loop on ROI.

Acceptance Criteria
Generate Suggested Tasks from Feedback
Given a listing has at least one AI-summarized showing with actionable objections and sentiment drivers When the Feedback-to-Task engine runs for that listing Then the system generates at least one suggested task per actionable objection or driver And each suggested task includes: title, room/location, rationale summary, source reference ID(s), auto-assigned priority, estimated showability impact, and a cost range (min and max) And each suggested task is associated to the listing and is visible in Seller Progress under "Suggestions"
Auto Priority and Impact Scoring
Given objections are classified with severity Major, Moderate, or Minor and their frequency across showings is computed When suggested tasks are generated Then priority is set to High if severity is Major or frequency >= 20%; Medium if severity is Moderate or frequency between 10% and 19%; otherwise Low And estimated showability impact is High for High priority, Medium for Medium priority, and not High for Low priority And priority and impact values are stored and retrievable via API and UI
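The priority bands above can be expressed as two small functions. Two assumptions to flag: frequencies strictly between 19% and 20% are read as falling into the Medium band (the stated bands leave that sliver unspecified), and mapping Low-priority impact to "Low" is illustrative — the criterion only requires that it not be High.

```python
def priority(severity: str, frequency_pct: float) -> str:
    """Priority per the bands above: Major or >=20% -> High;
    Moderate or >=10% -> Medium; otherwise Low."""
    if severity == "Major" or frequency_pct >= 20:
        return "High"
    if severity == "Moderate" or frequency_pct >= 10:
        return "Medium"
    return "Low"

def estimated_impact(priority_level: str) -> str:
    """Showability impact follows priority; Low priority must never
    map to High (labeling it "Low" here is an assumption)."""
    return {"High": "High", "Medium": "Medium"}.get(priority_level, "Low")
```

Both values would be persisted with the suggested task and exposed via the UI and API.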
Accept, Edit, or Dismiss Suggestions
Given a user views suggested tasks for a listing When the user accepts a suggestion Then a scoped task is created in Seller Progress with all fields carried over (title, room, priority, impact, cost range, rationale, source IDs) And the suggestion is removed from "Suggestions" and appears in the Seller Progress task list When the user edits a suggestion before accepting Then the edits persist in the created scoped task upon acceptance When the user dismisses a suggestion Then it is removed from "Suggestions" and logged with a dismissal reason and timestamp in the audit trail
Prefill Approval Request for Budget/Scope
Given a listing has an approval threshold configured And a suggestion has a cost range with max value exceeding the threshold or is flagged Requires Approval When the user accepts the suggestion Then an approval request is opened with scope, cost range, priority, impact, and rationale prefilled And submitting the approval request stores an approval record linked to the task And if the threshold is not exceeded and the suggestion is not flagged, no approval request is opened
Cost Range Editing and Validation
Given a user edits the cost range on a suggestion or scoped task When the user enters values Then values must be non-negative numerics in the listing currency, with min ≤ max And invalid inputs show inline validation errors and block save And successful save updates the displayed cost range and persists via API
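The validation rules above (non-negative numerics, min ≤ max) might look like this; returning a list of inline error strings is an assumption about how the UI surfaces them.

```python
def validate_cost_range(min_cost, max_cost) -> list:
    """Return inline validation errors for a cost range; empty list = valid.
    Values must be non-negative numbers in the listing currency, min <= max."""
    errors = []
    for label, value in (("min", min_cost), ("max", max_cost)):
        # bool is a subclass of int in Python, so exclude it explicitly.
        if isinstance(value, bool) or not isinstance(value, (int, float)):
            errors.append(f"{label} must be a number")
        elif value < 0:
            errors.append(f"{label} must be non-negative")
    # Only compare the bounds once both parsed as valid numbers.
    if not errors and min_cost > max_cost:
        errors.append("min must not exceed max")
    return errors
```

A non-empty result blocks the save, matching the criterion.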
Before/After Metrics and ROI Signal
Given a scoped task was created from a suggestion and is later marked complete When at least 3 showings occur in the 14 days before completion and at least 3 showings occur in the 14 days after completion Then the system computes and displays: change in objection mentions for the targeted issue (count per showing), change in overall sentiment score, and change in median time-to-offer (days) if an offer occurs And the outcome is labeled Improved, No Change, or Worse based on the sign of the deltas And if the minimum data conditions are not met, the system displays "Insufficient data" instead of metrics
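The criterion labels outcomes "based on the sign of the deltas" but does not say how mixed signs resolve; the net-count aggregation below is one possible reading, and a time-to-offer delta (when an offer exists) would be a third signal handled the same way. The minimum-data gate (≥3 showings on each side) is taken directly from the criterion.

```python
def outcome_label(objection_delta: float, sentiment_delta: float,
                  showings_before: int, showings_after: int) -> str:
    """Label the before/after comparison for a completed task.

    objection_delta: change in objection mentions per showing (negative
    is better); sentiment_delta: change in overall sentiment score
    (positive is better). Aggregation by net count of improving signals
    is an assumption, not spelled out by the spec.
    """
    if showings_before < 3 or showings_after < 3:
        return "Insufficient data"
    score = 0
    score += 1 if objection_delta < 0 else (-1 if objection_delta > 0 else 0)
    score += 1 if sentiment_delta > 0 else (-1 if sentiment_delta < 0 else 0)
    if score > 0:
        return "Improved"
    if score < 0:
        return "Worse"
    return "No Change"
```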
Notifications & Reminders
"As a seller, I want timely, unobtrusive reminders for items needing my attention so that nothing slips and I don’t feel overwhelmed."
Description

Configurable in-app, email, and SMS notifications for new tasks, pending approvals, overdue items, cost changes, and schedule shifts. Supports digest summaries, quiet hours, escalation rules, and per-user preferences. Messages include deep links to the relevant item in Seller Progress and respect role-based visibility. Includes failovers for undelivered messages and rate limiting to avoid alert fatigue.

Acceptance Criteria
Per-User Notification Preferences by Event Type and Channel
- Given a user opens Notification Preferences, When they view settings, Then they can enable/disable channels (in-app, email, SMS) per event type (new task, pending approval, overdue item, cost change, schedule shift).
- Given preferences are saved, When an event occurs for a disabled channel/type, Then no notification is sent on that channel and a suppression entry is recorded with timestamp and actor.
- Given preferences are saved, When an enabled event occurs, Then notifications are sent only on enabled channels within 2 minutes of event creation and recorded as delivered.
- Given a user updates their notification preferences, When they save, Then changes take effect immediately and a version history entry is stored with timestamp.
Quiet Hours with Critical Override
- Given a user sets quiet hours 21:00–07:00 local time with critical override set for overdue items older than 48 hours, When a non-critical event occurs during quiet hours, Then the notification is queued and delivered at 07:00 local time.
- Given the same settings, When an overdue item crosses 48 hours during quiet hours, Then a single critical notification bypasses quiet hours and is sent via in-app and email only (not SMS).
- Given queued notifications exist, When quiet hours end, Then queued items are dispatched within 5 minutes in a single bundled send per user and logged as "delivered after quiet hours".
Daily and Weekly Digest Summaries
- Given a user opts into a daily digest at 08:00 and a weekly digest on Monday 08:00, When the digest is generated, Then it includes counts and lists of new tasks, pending approvals, overdue items, cost changes, and schedule shifts for the period, grouped by listing.
- Given digest entries contain deep links, When the user clicks an entry, Then the app opens to the specific item with the period filter applied.
- Given the user selects "digests only", When events occur in real time, Then no real-time notifications are sent and only the scheduled digests are delivered.
Time-Based Escalation for Pending Approvals
- Given a pending approval is assigned to a seller with escalation rules to agent at 24 hours and broker at 48 hours, When 24 hours pass without action, Then an escalation notification is sent to the agent including the approval details, original request time, and deep link.
- Given the same item, When 48 hours pass without action, Then an escalation notification is sent to the broker and includes the escalation chain and deep link.
- Given the seller approves or rejects before the threshold, When the timer would fire, Then no escalation is sent and the timer is cancelled with an audit entry.
Deep Links and Role-Based Access Enforcement
- Given a notification contains a deep link, When a user with the required role opens it, Then they land on the target Seller Progress item within 2 seconds p95 and only permitted fields are visible.
- Given a user without the required role, When they open a deep link, Then an access-denied state is shown and no sensitive data appears in the notification preview or URL.
- Given link tokens expire after 24 hours, When an expired link is used, Then the user is prompted to authenticate and the link resolves only if current access is valid.
Delivery Failover and Retry Strategy
- Given an email send returns a hard bounce or 5xx, When the failure is detected, Then the system retries up to 3 times with exponential backoff and falls back to SMS if enabled; otherwise in-app only.
- Given an SMS returns "Undelivered" from the carrier, When detected, Then a fallback email is queued if enabled; otherwise an in-app banner is posted to the user’s notification center.
- Given all channel attempts fail, When final failure occurs, Then an alert with correlation ID and failure reason is logged and visible in the admin delivery dashboard within 5 minutes.
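A minimal sketch of the retry-then-fallback strategy above, assuming an email sender that reports success/failure as a boolean. Function names, the injectable `sleep` parameter, and the backoff base are hypothetical choices for testability:

```python
import time as _time

def send_with_retry(send_fn, payload, max_retries=3, base_delay=1.0, sleep=_time.sleep):
    """Attempt a send up to 1 + max_retries times with exponential backoff.

    send_fn returns True on success, False on a hard bounce / 5xx.
    """
    for attempt in range(max_retries + 1):
        if send_fn(payload):
            return True
        if attempt < max_retries:
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s between retries
    return False

def deliver(payload, email_fn, sms_fn=None, sleep=_time.sleep):
    """Email first; on final failure fall back to SMS if enabled, else in-app only."""
    if send_with_retry(email_fn, payload, sleep=sleep):
        return "email"
    if sms_fn and sms_fn(payload):
        return "sms"
    return "in_app"
```

With a permanently failing email channel, the sketch makes four email attempts (one initial plus three retries) before falling back.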
Rate Limiting and De-duplication
- Given rate limits are configured to max 3 notifications per user per 10 minutes per event type and max 10 total per hour, When incoming events exceed these limits, Then excess notifications are suppressed and combined into a single summary notification per user per window.
- Given duplicate events for the same item and type occur within 60 seconds, When detected, Then only one notification is delivered containing a deduplication count.
- Given suppression occurs, When the user opens the notification center, Then a badge shows the suppressed count and a roll-up entry lists the combined items.
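The per-type sliding window and the 60-second de-duplication rule could be sketched as follows (the hourly total cap is omitted for brevity; class and method names are hypothetical, and timestamps are caller-supplied epoch seconds so the logic is testable):

```python
from collections import deque

class NotificationLimiter:
    """Sliding-window limiter: max N per (user, event_type) per window,
    plus a de-duplication window per (user, item, event_type)."""

    def __init__(self, per_type_limit=3, window=600, dedup_window=60):
        self.per_type_limit, self.window, self.dedup_window = per_type_limit, window, dedup_window
        self.sent = {}       # (user, event_type) -> deque of send timestamps
        self.last_item = {}  # (user, item_id, event_type) -> last send timestamp

    def allow(self, user, item_id, event_type, now):
        dk = (user, item_id, event_type)
        if dk in self.last_item and now - self.last_item[dk] < self.dedup_window:
            return False  # duplicate: fold into the existing notification's count
        q = self.sent.setdefault((user, event_type), deque())
        while q and now - q[0] >= self.window:
            q.popleft()  # drop sends that fell outside the 10-minute window
        if len(q) >= self.per_type_limit:
            return False  # suppressed: rolls up into the summary notification
        q.append(now)
        self.last_item[dk] = now
        return True
```

Suppressed sends return `False` and would be accumulated into the roll-up entry described above.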
Vendor Quote Intake & Comparison
"As a listing agent, I want to collect and compare vendor quotes in one place so that I can recommend the best option and accelerate seller approvals."
Description

Centralized intake of vendor quotes via file upload, email forwarding, or link share, with OCR and structured parsing of line items, totals, availability dates, and terms. Presents side-by-side comparisons, flags missing data, and allows selection of preferred vendors. Selected quotes attach to the related task and pre-populate the approval flow; all documents are stored and retrievable from the Seller Progress view. Optional vendor-only view supports secure collaboration without exposing sensitive listing data.

Acceptance Criteria
File Upload: OCR and Structured Parsing
- Given an agent is on a task in Seller Progress When they upload a quote file of type PDF, JPG, or PNG up to 10 MB Then the system stores the original file and starts OCR within 60 seconds And extracts vendor name, line items (description, quantity, unit price, line total), subtotal, taxes/fees, grand total, availability start/end dates, payment terms, and quote validity date into structured fields And if any required field is missing, the field is flagged "Missing" and the quote is marked "Incomplete" with inline edit enabled And if the uploaded file matches an existing quote by checksum or (vendor name + grand total + file size), the user is warned of a potential duplicate and can choose Replace or Keep Both.
- Given a file exceeds 10 MB or is not an allowed type When the upload is attempted Then the system blocks the upload and displays an error without storing the file And a retry guidance message is shown.
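The duplicate-detection heuristic above (checksum match, or vendor name + grand total + file size) might be sketched like this; the function and field names are illustrative, not the product's actual schema:

```python
import hashlib

def checksum(data: bytes) -> str:
    """Content fingerprint used for exact-duplicate detection."""
    return hashlib.sha256(data).hexdigest()

def is_potential_duplicate(new_file, new_meta, existing):
    """Warn-on-upload check: a quote is a potential duplicate when its checksum
    matches a stored quote, or when vendor name + grand total + file size
    all coincide (the secondary heuristic in the criteria above)."""
    new_sum, new_size = checksum(new_file), len(new_file)
    for q in existing:
        if q["checksum"] == new_sum:
            return True
        if (q["vendor"] == new_meta["vendor"]
                and q["grand_total"] == new_meta["grand_total"]
                and q["file_size"] == new_size):
            return True
    return False
```

On a `True` result the UI would offer the Replace / Keep Both choice rather than silently storing the file.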
Email Forwarding Intake and Association
- Given a task has a unique intake email alias (e.g., quotes+TASKID@tourecho.com) When an agent forwards a vendor email with one or more quote attachments (PDF/JPG/PNG) to that alias Then the system associates the email with the correct task via the alias And creates a quote record per attachment, preserving the original email (.eml) as an artifact And runs parsing on each attachment within 2 minutes of receipt.
- Given an email has no attachments but contains a tabular or line-item quote in the body When it is forwarded to the alias Then the system attempts body parsing; if unsuccessful, the quote is created as "Unparsed" with a "Needs Manual Entry" flag.
- Given an email is sent to an invalid or revoked alias When it is received Then the sender gets an automated rejection response explaining the error and no quote is created.
Vendor Link Share Submission (Vendor-Only View)
- Given an agent generates a tokenized HTTPS submission link for a task When a vendor opens the link Then the page loads a vendor-only form showing only the task title and high-level scope; seller identity, agent notes, and other vendor data are not displayed And the vendor must provide company name, contact email, at least one line item or an uploaded quote file, grand total, availability start date, and payment terms before submitting And a CAPTCHA (or email code verification) must be completed before acceptance.
- Given the link is expired (default 14 days) or revoked When a vendor attempts to access it Then a "Link expired or revoked" message is shown without revealing task details And no data is accepted.
- Given a vendor completes the submission When they click Submit Then the system creates a quote record, stores the original upload, parses within 60 seconds, and emails confirmation to the vendor and agent.
Side-by-Side Comparison and Normalization
Given a task has two or more quotes When the agent opens Compare Quotes Then a table displays each quote in its own column with normalized fields: vendor name, line items grouped by category, subtotal, taxes/fees, grand total, availability start date, payment terms, and quote validity And the lowest grand total and earliest availability are visually highlighted And the agent can sort by grand total, availability, or vendor name and filter to show only Complete quotes And the table shows dollar and percentage deltas relative to the lowest grand total And quotes with missing required fields display "Missing" badges for each missing field And clicking a quote opens the original document in a viewer
Manual Edit, Validation, and Audit Trail
- Given a parsed quote exists When the agent edits extracted fields or adds/removes line items Then changes are saved immediately, versioned, and an audit entry records user, timestamp, field, and before/after values And currency and quantity fields validate locale formatting, prevent negative values, and recalculate subtotals and totals in real time And required fields (vendor name, grand total, availability start date, quote validity date) must be present for the quote to be marked Complete.
- When the agent clicks Mark Complete Then the action is blocked if any "Missing" flags remain and a list of unresolved fields is displayed.
Select Preferred Vendor and Pre-Populate Approval Flow
- Given at least one Complete quote exists for a task When the agent clicks Select Preferred on a quote Then that quote is marked Preferred and attached to the task; any previously Preferred quote is demoted to Candidate with a required reason note And a Seller Approval request is created with vendor name, scope summary, grand total (with taxes/fees), earliest availability, payment terms, and quote validity, and set to status "Pending Seller Approval" And the seller receives a notification in Seller Progress and via email; the agent receives a confirmation.
- When the agent attempts to select a quote marked Incomplete Then the action is blocked with an explanation of missing required fields.
Storage, Retrieval, and Permissions in Seller Progress
- Given quotes exist on a task When an agent opens Seller Progress > Task > Quotes Then all quote records are listed with source (Upload/Email/Vendor Link), timestamps, status (Complete/Incomplete/Preferred), and quick actions (View, Edit, Compare) And each quote’s original file is downloadable and parsed data is viewable.
- When a seller opens the same task in Seller Progress Then only the Preferred quote and any approved documents are visible by default; non-selected quotes are hidden from seller view And vendors cannot access Seller Progress; vendor-only links cannot retrieve Seller Progress content And search within the listing for vendor name or line item keywords returns matching quotes within 2 seconds.

Quiet Profiles

Create layered quiet-hour policies by office, team, agent, listing, and day type (weekday/weekend/holiday). Compliance Admins set org defaults; agents choose personal windows within bounds. Prevents rogue after-hours bookings and keeps schedules humane while honoring local norms and seasonal shifts.

Requirements

Hierarchical Quiet Policy Engine
"As a Compliance Admin, I want quiet-hour rules to inherit by org, office, team, agent, listing, and day type so that standards are enforced while allowing targeted exceptions."
Description

Implement a rules engine that calculates the effective quiet-hours window for any booking context using layered policies across organization, office, team, agent, listing, and day type (weekday/weekend/holiday). The engine must support inheritance with clear precedence, conflict resolution, and hard/soft constraints (non-overridable vs. agent-adjustable within bounds). Inputs include user/listing IDs, market/office, date/time, and time zone; outputs include isBookable flags, reason codes, next-available slot suggestions, and an audit key. The engine must be time zone and DST-aware, support multiple daily windows, seasonal applicability, and performant evaluation with caching. Provide deterministic APIs for synchronous checks in UI and scheduling services, and ensure full auditability of rule versions and decisions.

Acceptance Criteria
Precedence & Inheritance Across Policy Layers
- Given org hard quiet 20:00–08:00 (office tz), team soft bounds allow 21:00–07:00, agent selects 22:00–07:00 within bounds, and listing adds hard 12:00–13:00, When checking a weekday at 22:30 local, Then isBookable=false and reasonCodes contains QUIET_ORG_HARD and nextAvailable.start=08:00 local.
- Given the same policies, When checking a weekday at 12:15 local, Then isBookable=false and reasonCodes contains QUIET_LISTING_HARD and nextAvailable.start=13:00 local.
- Given agent attempts a soft window 19:00–06:00 while org hard is 20:00–08:00, When checking 19:30 local, Then isBookable=true and reasonCodes=[], because soft cannot widen beyond parent hard bounds.
- Given team soft 21:00–07:00 and agent soft 22:00–06:00 (both within bounds), When checking 21:30 local, Then isBookable=true and reasonCodes=[] because the more specific agent soft window applies.
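One way the layered precedence could work is sketched below, under simplifying assumptions: windows are non-wrapping `(start, end)` minute pairs since local midnight, hard windows from every layer always apply, and the most specific in-bounds soft window wins while out-of-bounds soft windows are ignored rather than widened. All names are hypothetical:

```python
# Layers ordered least -> most specific.
LAYERS = ["org", "office", "team", "agent", "listing"]

def _within(w, bounds):
    return bounds is None or (bounds[0] <= w[0] and w[1] <= bounds[1])

def effective_windows(policies):
    """policies: {layer: {"hard": [win...], "soft": [win...], "bounds": win}}.

    Hard windows accumulate across all layers; the soft window used is the
    most specific one that stays inside its parent layer's bounds.
    """
    hard = [w for layer in LAYERS for w in policies.get(layer, {}).get("hard", [])]
    soft, bounds = [], None
    for layer in LAYERS:
        p = policies.get(layer, {})
        candidate = p.get("soft", [])
        if candidate and all(_within(w, bounds) for w in candidate):
            soft = candidate  # more specific layer wins when it fits the bounds
        if p.get("bounds"):
            bounds = p["bounds"]  # this layer's bounds constrain its children
    return hard, soft

def is_quiet(minute, hard, soft):
    return any(a <= minute < b for a, b in hard + soft)
```

An agent soft window that starts before the permitted bound is simply not adopted, matching the "soft cannot widen" rule above.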
Day Type (Weekday/Weekend/Holiday) Resolution by Market Calendar
- Given office market calendar marks 2025-12-25 as holiday, org hard quiet for holidays is 18:00–10:00 and for weekdays is 20:00–08:00, When evaluating 2025-12-25T19:00 in office tz, Then isBookable=false and reasonCodes contains QUIET_DAYTYPE_HARD(holiday) and nextAvailable.start=10:00 2025-12-26 office tz.
- Given weekend hard quiet 21:00–09:00, When evaluating a Sunday at 08:30 office tz, Then isBookable=false and reasonCodes contains QUIET_DAYTYPE_HARD(weekend) and nextAvailable.start=09:00.
- Given user locale differs from listing/office market, When evaluating a market holiday at the listing’s office tz, Then day type is derived from office/listing market calendar (not user locale) and decision reflects holiday policy.
Time Zone & DST-Aware Evaluation
- Given listing tz=America/Los_Angeles and org hard quiet 20:00–08:00, When a New York user checks 2025-06-10T23:30 America/New_York, Then engine evaluates in listing/office tz and treats it as 2025-06-10T20:30 America/Los_Angeles, so isBookable=false and reasonCodes contains QUIET_ORG_HARD.
- Given DST spring-forward date 2025-03-09 America/Los_Angeles and hard quiet 01:00–03:00, When checking 02:30 local (non-existent wall time), Then engine treats it as within the quiet window and isBookable=false and nextAvailable.start=03:00 local.
- Given DST fall-back date 2025-11-02 America/Los_Angeles and hard quiet 00:00–02:00, When checking 01:30 local in the second 01:00 hour (PST), Then isBookable=false and auditKey includes offset disambiguation to ensure replayable uniqueness.
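Detecting the two DST edge cases above (non-existent spring-forward times and ambiguous fall-back times) is straightforward with Python's `zoneinfo` and PEP 495 `fold` semantics; this is an illustrative helper, not the engine itself:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

LA = ZoneInfo("America/Los_Angeles")

def is_ambiguous(dt_naive):
    """A wall time is ambiguous (fall-back repeat) if fold=0 and fold=1
    resolve to different UTC offsets."""
    a = dt_naive.replace(tzinfo=LA, fold=0).utcoffset()
    b = dt_naive.replace(tzinfo=LA, fold=1).utcoffset()
    return a != b

def is_nonexistent(dt_naive):
    """A wall time is non-existent (spring-forward gap) if round-tripping it
    through UTC lands on a different wall clock."""
    aware = dt_naive.replace(tzinfo=LA)
    roundtrip = aware.astimezone(ZoneInfo("UTC")).astimezone(LA)
    return roundtrip.replace(tzinfo=None) != dt_naive
```

For the fall-back case, recording the resolved UTC offset (or the `fold` value) alongside the wall time is what makes the auditKey's "offset disambiguation" replayable.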
Multiple Daily Windows and Seasonal Rules
- Given weekday soft window 12:00–13:00, hard window 20:00–08:00, and seasonal hard 17:00–19:00 active Jun 1–Aug 31, When evaluating 2025-07-15T17:30 office tz, Then isBookable=false and reasonCodes contains QUIET_SEASON_HARD and nextAvailable.start=19:00.
- Given the same policies, When evaluating 2025-07-15T13:30 office tz, Then isBookable=true and reasonCodes=[].
- Given a date outside the season (2025-10-01T17:30), When evaluating, Then isBookable=true if no other windows block, and 20:00–08:00 still blocks times within that range.
- Given two daily windows 06:00–08:00 and 20:00–22:00, When checking 07:30, Then isBookable=false and nextAvailable.start=08:00.
Deterministic API Output for Synchronous Checks
- Given identical inputs (IDs, tz, timestamp, desiredDuration, policy versions), When calling checkBookable twice, Then responses are byte-identical (isBookable, reasonCodes order, nextAvailable, auditKey).
- Given isBookable=false due to multiple sources, When returning reasonCodes, Then codes are ordered deterministically by specificity: listing, agent, team, office, org, dayType, season.
- Given desiredDuration=45m and candidate start 2025-06-10T19:30 with hard quiet starting 20:00, When suggesting next available, Then nextAvailable.start is the earliest start that can fit 45m outside quiet windows (e.g., 2025-06-11T08:00 if no 45m window remains before 20:00).
- Given the same inputs and rule versions, When recomputing, Then auditKey is identical; when any policy version changes, auditKey changes.
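A deterministic auditKey with the properties above (stable for identical inputs and rule versions, changed by any version bump) can be built by hashing a canonical serialization. This is a sketch; field names and the hash choice are assumptions:

```python
import hashlib
import json

def audit_key(inputs: dict, rule_versions: dict) -> str:
    """Deterministic audit key: canonical JSON (sorted keys, fixed separators,
    so no whitespace or ordering variance) hashed with SHA-256."""
    canonical = json.dumps(
        {"inputs": inputs, "ruleVersions": rule_versions},
        sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical request and per-layer rule versions:
req = {"listingId": "L-1", "tz": "America/Los_Angeles",
       "ts": "2025-06-10T19:30:00", "desiredDuration": 45}
v1 = {"org": 3, "office": 1, "agent": 7}
```

Replaying a decision with the recorded rule-version IDs reproduces the same key; publishing a new policy version changes it, which is exactly the cache-invalidation and replay signal the criteria call for.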
Performance and Cache Behavior Under Load
- Given warm cache, When executing 10,000 synchronous checks across typical contexts in 60s, Then p95 latency <= 50 ms, p99 <= 120 ms, and error rate = 0%.
- Given empty cache (cold start), When first evaluating a new policy context, Then p95 latency <= 150 ms; for subsequent identical checks, cacheHitRate >= 90% and p95 <= 40 ms.
- Given a policy version is published/updated, When evaluating affected contexts, Then cache entries for those contexts are invalidated within <= 1s and responses reflect the new ruleVersion immediately.
- Given cache TTL is reached, When evaluating, Then no stale decisions are returned and a fresh evaluation occurs within the stated latency budgets.
Audit Trail and Replayability
- Given any decision, When persisted to the audit log, Then record includes auditKey, input payload (IDs, tz, timestamp, desiredDuration), resolved dayType, source chain (org/office/team/agent/listing), ruleVersion IDs per layer, decision time, reasonCodes, nextAvailable.
- Given an audit record, When replaying the decision using the recorded ruleVersion IDs, Then the result exactly matches the original (isBookable, reasonCodes order, nextAvailable, auditKey).
- Given policies have changed since the original decision, When querying the audit record, Then the record remains immutable and replay references historical rule versions without mutation.
Org Defaults & Bounds Configuration
"As a Compliance Admin, I want to set org-wide default quiet hours and allowable bounds by day type so that agents can personalize within safe limits."
Description

Deliver an admin console for Compliance Admins to define organization-wide default quiet hours and permissible bounds per day type and per office/market. Support hard blocks (cannot be overridden) and adjustable ranges that constrain agent choices. Include region-aware time zone settings, market-level holiday sources, seasonal profiles, and effective-date/versioning controls. Provide validation, preview calendars, bulk apply to offices/teams, and role-based access. Persist policies with metadata (owner, scope, version, change notes) and expose read-only endpoints for downstream services to retrieve current effective defaults/bounds.

Acceptance Criteria
Org Defaults by Day Type and Market
- Given I am a Compliance Admin on the Org Defaults & Bounds page, When I select a specific market and a day type (weekday/weekend/holiday), Then I can set default quiet start/end times and permissible bounds for that selection.
- Given I select a specific office and a day type, When I save default quiet hours and bounds, Then the values persist and are retrievable for that office/day type.
- Given distinct configurations exist for different day types, When I switch between day types, Then the form displays and persists the correct values per day type without cross-over.
- Given I use Bulk Apply to apply the current market defaults/bounds to selected offices/teams, When I confirm the action, Then each selected scope saves the applied values with correct metadata (owner, scope, version, changeNotes).
Hard Blocks Non-Overridable
- Given a Compliance Admin marks a weekend quiet window as Hard Block for a market, When an Agent attempts to set a personal window that allows bookings within that blocked interval, Then the system rejects the change and displays an error identifying the conflicting hard block.
- Given a Hard Block exists for a scope and day type, When downstream services request policies via the read-only endpoint, Then the response flags the interval as hardBlock=true and overrideAllowed=false.
- Given a Hard Block is active, When an attempt is made to schedule outside allowed hours via internal APIs, Then the request is rejected with a policy_violation error referencing the hard-blocked interval.
Adjustable Ranges Constrain Agent Choices
- Given a permissible adjustable range for Quiet Start of 19:00–21:00 on weekdays in Office A, When an Agent selects a personal quiet start time earlier than 19:00 or later than 21:00, Then the system prevents saving and shows the allowed range.
- Given permissible ranges are set for Quiet Start and Quiet End, When an Agent selects times within both ranges with Quiet Start before Quiet End, Then saving succeeds and the selection is stored within the configured bounds.
- Given ranges are modified by a Compliance Admin, When an Agent next edits their personal window, Then the new ranges are enforced immediately without requiring agent re-authentication.
Time Zone and DST Correctness
- Given Office B is configured with time zone America/Los_Angeles, When a Compliance Admin sets quiet hours 20:00–08:00, Then the UI and stored policy reflect those local times and the API exposes tzId=America/Los_Angeles with correct UTC offsets.
- Given a DST transition occurs in the office time zone, When viewing the preview calendar spanning the transition, Then quiet windows render at the correct local clock times before and after the transition and the API returns the correct offset-adjusted UTC instants for each boundary.
- Given two offices in different time zones share the same default hours, When their policies are retrieved, Then each office’s response includes its own tzId and times normalized to its local zone.
Seasonal Profiles and Versioning Controls
- Given a seasonal profile "Summer" effective from 2025-06-01T00:00 to 2025-09-30T23:59 for Market M, When the asOf timestamp falls within that range, Then the read-only endpoint returns the "Summer" version as the effective policy for Market M.
- Given a new version with future effectiveFrom and changeNotes is created for Office O, When the effectiveFrom instant is reached, Then the new version becomes Active and the prior version is marked Superseded while remaining accessible in history with owner, scope, version, and changeNotes.
- Given an asOf parameter is provided to the endpoint, When requesting effective policies, Then the version returned matches the policy in force at that instant regardless of current time.
Role-Based Access Control
- Given my role is Compliance Admin, When I access the Org Defaults & Bounds console, Then I can create, update, and delete policies within my organization.
- Given my role lacks Compliance Admin privileges, When I attempt to modify policies via the console or write APIs, Then the action is blocked and the API returns HTTP 403 with an authorization error code.
- Given a read-only role, When I call the effective policies endpoint, Then I receive policy data but cannot change it.
Read-Only Endpoint for Effective Policies
- Given the endpoint GET /quiet-profiles/effective is called with organizationId, scope (office|market), scopeId, dayType, and optional asOf, When the parameters are valid, Then it returns 200 with current effective defaults and bounds including tzId, holidaySource, seasonalProfile, hard/adjustable flags, and metadata (owner, scope, version, effectiveFrom, effectiveTo, changeNotes).
- Given an invalid scopeId or dayType is provided, When the endpoint is called, Then it returns 400 with a clear error code and message.
- Given a holiday exists per the configured holidaySource for Market M on a date, When the endpoint is called for dayType=holiday and asOf that date, Then the response reflects the holiday quiet hours and bounds derived from that source.
Agent Personal Quiet Window Selector
"As an agent, I want to choose my quiet hours within my office’s bounds so that my schedule respects personal and local norms."
Description

Provide agents a guided UI (web and mobile) to select personal quiet-hour windows within office/team bounds for each day type. Include inline validation against bounds, conflict hints with existing commitments, and a live calendar preview of how booking availability will appear to requesters. Support multiple windows per day, copy/paste across days, and quick presets. Changes must version, timestamp, and propagate in near real time to the scheduling layer. Incorporate onboarding prompts and an optional weekly check-in to adjust for seasonal shifts within permitted ranges.

Acceptance Criteria
Personal Quiet Hours Within Admin Bounds (Per Day Type)
- Given office/team bounds for weekday, weekend, and holiday are defined, When an agent opens the Quiet Window Selector on web or mobile, Then the UI displays the permitted time ranges per day type and disables selection outside those ranges.
- Given an agent selects a start or end time outside permitted bounds, When they attempt to save, Then inline validation blocks the save and shows a clear message naming the violated bound and day type.
- Given an agent sets times within bounds for a day type, When they save, Then the settings persist and are stored against the correct day type in the agent’s local timezone as displayed in the UI.
- Given organization defaults exist, When an agent has no personal windows set, Then the preview and availability calculations use org defaults for all day types.
- Given dates marked as holidays by the org calendar, When holiday quiet windows are configured, Then those windows apply only to holiday dates and do not affect weekdays/weekends.
Multiple Windows and Quick Presets
- Given an agent is editing quiet windows for a day type, When they add windows, Then the system supports at least three non-overlapping quiet intervals per day and prevents overlaps with immediate inline feedback.
- Given an agent deletes or reorders quiet windows, When they confirm changes, Then the new set of intervals is saved exactly as shown on both web and mobile.
- Given quick presets are provided (e.g., Evenings, Nights, Office Standard), When a preset is selected, Then windows populate instantly and remain within office/team bounds; any preset that would violate bounds is disabled with an explanatory tooltip.
- Given a preset is applied, When the agent tweaks individual intervals, Then the customization overrides the preset for that day type without affecting other days.
Copy/Paste Across Days and Day Types
- Given an agent selects a source day/day type, When they copy and paste to one or more target days/day types, Then all source intervals are applied to targets that are within bounds and skipped on targets that would violate bounds, with a per-target warning shown before save.
- Given targets already have intervals, When pasting, Then the agent is prompted to Merge or Overwrite; the chosen mode is applied consistently across all selected targets.
- Given a paste operation would newly block time where showings are already scheduled, When the agent previews or attempts to save, Then conflict hints display the number of impacted requests per target day with links to view details, and saving requires explicit confirmation.
- Given copy/paste is performed on mobile, When executed, Then the same safeguards, prompts, and outcomes occur as on web.
Live Calendar Preview Reflects Requester Availability
- Given the agent adjusts any quiet interval, When start/end times are changed, added, or removed, Then the live calendar preview updates within 300ms to reflect bookable vs. quiet periods exactly as requesters would see them.
- Given the agent switches between weekday, weekend, and holiday views, When the selection changes, Then the preview context updates to the corresponding day type and the legend remains accurate.
- Given the agent’s timezone is displayed, When viewing the preview, Then the timezone label is clearly shown and consistent across web and mobile.
- Given no personal windows exist for a day type, When previewing that day type, Then the preview reflects org defaults.
Versioning, Timestamping, and Near-Real-Time Propagation
- Given an agent saves valid changes, When the save succeeds, Then a new version is created capturing version ID, agent ID, ISO-8601 timestamp with timezone, and a diff of added/removed intervals per day type.
- Given a save occurs, When propagation to the scheduling layer begins, Then changes are applied to the scheduling layer within 5 seconds at the 95th percentile and within 15 seconds at worst; a syncing indicator is shown until confirmation is received.
- Given propagation fails or exceeds 15 seconds, When the timeout is reached, Then the UI shows a retry banner and the previous effective configuration remains active without partial application.
- Given version records exist, When queried via internal API, Then the last 20 versions for the agent are retrievable with their diffs and timestamps.
Onboarding and Weekly Check-In Prompts
- Given an agent has no personal quiet windows, When they first access the selector, Then an onboarding prompt guides them through choosing windows within bounds and requires acknowledgement if they opt to keep org defaults.
- Given weekly check-in is enabled, When the start of the agent’s configured week occurs, Then a non-blocking prompt offers to adjust quiet windows within permitted ranges, with options to Apply changes, Remind me next week, or Dismiss for 30 days.
- Given the agent applies changes via the weekly check-in, When confirmed, Then the same validation, versioning, and propagation rules apply as a normal save.
- Given the agent is on mobile, When the weekly check-in appears, Then the experience is optimized for mobile with identical options and validations.
Booking Guardrails & Enforcement
"As a scheduling user, I want the system to block after-hours requests and propose the next available time so that I can book compliantly without guesswork."
Description

Integrate quiet-hour enforcement into all scheduling entry points (listing page, API, QR flow, calendar modals). Before confirming a request, call the policy engine to block after-hours bookings and provide user-friendly explanations plus the next available alternatives. When conflicts involve multiple parties’ quiet hours, compute the intersection of permissible times and propose viable slots; if none exist, offer waitlist or exception request paths. Return structured reason codes to APIs, respect hard blocks, and log all denials for audit. Ensure performance SLAs suitable for real-time booking and graceful fallback if policy services are degraded.

Acceptance Criteria
Listing Page After-Hours Blocking
Given a user selects a timeslot outside permissible hours based on the listing’s local time zone and applicable day type (weekday/weekend/holiday) When they submit a booking from the listing page Then the booking is denied before confirmation with a clear message naming the blocking policy and prohibited window And the time picker highlights the conflicted timeslot And at least 5 compliant alternative times within the next 7 days are displayed
Multi-Party Quiet-Hour Intersection Resolution
Given applicable quiet-hour policies exist for office, team, agent, and listing with differing windows When the requested timeslot violates one or more policies Then the system computes the intersection of permissible windows across all parties And if the intersection contains slots, at least 3 viable alternatives within the next 14 days are presented And if the intersection is empty across the next 30 days, the user is offered a waitlist and an exception request path with pre-filled conflict details And the explanation enumerates all blocking parties and policies
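Computing the intersection of all parties' permissible windows is the classic sorted-interval-intersection sweep. A sketch, assuming each party's bookable times arrive as sorted, non-overlapping half-open `(start, end)` pairs in minutes since local midnight (names are illustrative):

```python
def intersect_windows(parties):
    """Intersect each party's list of bookable (start, end) intervals,
    returning only the times permissible for everyone."""
    def intersect_two(a, b):
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            lo = max(a[i][0], b[j][0])
            hi = min(a[i][1], b[j][1])
            if lo < hi:
                out.append((lo, hi))
            # advance whichever interval ends first
            if a[i][1] < b[j][1]:
                i += 1
            else:
                j += 1
        return out

    result = parties[0]
    for p in parties[1:]:
        result = intersect_two(result, p)
    return result
```

An empty result across the search horizon is the signal to offer the waitlist or exception-request path instead of alternatives.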
Enforcement Across All Scheduling Entry Points
Given identical booking parameters and context for a listing When submitted via listing page, public API, QR quick-schedule flow, or calendar modal Then each entry point calls the policy engine with the same normalized payload and receives a consistent decision and reason_code And denial UIs in each entry point display the same human-readable explanation and alternatives And a nightly parity test of 100 randomized cases per entry point pair shows 100% decision agreement
Hard Blocks Take Precedence Over Personal Windows
Given a Compliance Admin has configured a hard quiet-hour block that overlaps an agent’s chosen personal window When a booking is attempted within the hard-blocked period Then the booking is denied regardless of the agent’s personal settings And the message indicates the denial is due to an organization hard block And the API returns reason_code HARD_BLOCK with the blocking policy_id
Structured Reason Codes and Alternatives in API Responses
Given an API client submits a booking that is denied by quiet-hour policies When the API responds Then the response contains a machine-parseable payload including decision=denied, reason_code in {QUIET_HOURS_BLOCK, INTERSECTION_EMPTY, HARD_BLOCK, POLICY_SERVICE_UNAVAILABLE}, policy_ids[], human_message, alternatives[] with ISO 8601 start/end timestamps, and decision_id And the HTTP status is 409 or 422 and is documented And the payload validates against the published JSON schema
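A minimal structural validator for the denial payload described above might look like the following; the field set and reason codes come from the criteria, while the function name and the validation depth are assumptions (a real service would validate against the published JSON schema):

```python
ALLOWED_REASONS = {"QUIET_HOURS_BLOCK", "INTERSECTION_EMPTY", "HARD_BLOCK",
                   "POLICY_SERVICE_UNAVAILABLE"}
REQUIRED_FIELDS = {"decision", "reason_code", "policy_ids", "human_message",
                   "alternatives", "decision_id"}

def validate_denial(payload: dict) -> bool:
    """Structural sanity check for a denial payload before publishing it."""
    return (REQUIRED_FIELDS <= payload.keys()
            and payload["decision"] == "denied"
            and payload["reason_code"] in ALLOWED_REASONS
            and all("start" in a and "end" in a for a in payload["alternatives"]))

# Hypothetical example of a conforming payload:
denial = {
    "decision": "denied",
    "reason_code": "HARD_BLOCK",
    "policy_ids": ["pol_org_42"],
    "human_message": "Bookings are blocked 20:00-08:00 by an organization hard block.",
    "alternatives": [{"start": "2025-06-11T08:00:00-07:00",
                      "end": "2025-06-11T08:45:00-07:00"}],
    "decision_id": "dec_123",
}
```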
Denial Audit Logging and Traceability
Given any booking denial due to quiet-hour enforcement When the decision is returned Then an audit record is persisted within 1 second containing decision_id, timestamp (UTC), requester (or anonymous), listing_id, entry_point, requested start/end, evaluated timezone, reason_code(s), policy_ids, decision_latency_ms, and correlation_id And the record can be retrieved via the audit API within 5 seconds of denial And audit records are retained for at least 180 days
Performance SLA and Graceful Degradation
Given normal operating conditions When evaluating a booking against policies Then policy-engine p95 latency is ≤ 200 ms and end-to-end decision p95 is ≤ 500 ms per entry point And under policy service timeout (>300 ms) or 5xx, a last-known-good policy snapshot (≤15 minutes old) is used And if no snapshot exists, the system fails closed by denying the booking with reason_code POLICY_SERVICE_UNAVAILABLE and offers waitlist/exception paths And all degraded decisions are logged with degradation_type and counted to ensure the weekly fallback rate is ≤ 0.1%
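The degraded-mode behavior above (use a fresh last-known-good snapshot, otherwise fail closed) could be sketched as a small wrapper; names and the snapshot shape are hypothetical, and timestamps are caller-supplied so the logic is testable:

```python
SNAPSHOT_MAX_AGE = 15 * 60  # seconds; snapshots older than this are unusable

def decide(policy_call, snapshot, now):
    """Call the policy engine; on failure fall back to a last-known-good
    snapshot if fresh enough, otherwise fail closed by denying."""
    try:
        return policy_call()
    except Exception:
        if snapshot and now - snapshot["taken_at"] <= SNAPSHOT_MAX_AGE:
            return {**snapshot["decision"], "degraded": True}
        return {"decision": "denied",
                "reason_code": "POLICY_SERVICE_UNAVAILABLE",
                "degraded": True}
```

Tagging degraded decisions (`degraded: True`) is what lets the weekly fallback-rate metric in the criteria be computed.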
Holiday and Seasonal Calendar Integration
"As a Compliance Admin, I want quiet-hour profiles to automatically adjust for holidays and seasons by market so that we respect local norms without manual updates."
Description

Automatically classify days as weekday/weekend/holiday per office/market using trusted public-holiday sources and admin-defined custom calendars. Support seasonal quiet profiles (e.g., summer, peak season) with effective date ranges that the engine references during evaluation. Include DST transition handling, preview calendars for admins, and alerts when upstream calendars change. Provide fallback behavior if holiday data is unavailable and allow per-office overrides.

Acceptance Criteria
Auto Day Classification by Office and Market
- Given Office A is mapped to market US and time zone America/New_York and a US public-holiday source is configured When the engine classifies 2025-07-04 for Office A Then dayType = "holiday" for Office A
- Given Office A has no custom override on 2025-07-05 (Saturday) When the engine classifies 2025-07-05 for Office A Then dayType = "weekend"
- Given 2025-07-07 (Monday) is not a holiday in the configured source When the engine classifies 2025-07-07 for Office A Then dayType = "weekday"
- Given Office B is mapped to market CA with a Canada holiday source When the engine classifies 2025-07-04 for Office B Then dayType ≠ "holiday" for Office B and may differ from Office A
- Given a datetime near local midnight for Office A (e.g., 2025-07-04T23:30-04:00) When the engine determines dayType Then the classification uses the office’s local calendar date (America/New_York)
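A minimal sketch of the classification rule above — the key point being that the office's local calendar date, not the UTC date, drives the result. `classify_day` and its parameters are illustrative names; the holiday set stands in for the configured public-holiday source plus per-office overrides.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def classify_day(instant_utc: datetime, office_tz: str, holidays: set) -> str:
    """Return "holiday", "weekend", or "weekday" for the office's LOCAL date.

    `holidays` is a set of datetime.date values assembled upstream from the
    public-holiday source and any custom overrides (hypothetical shape).
    """
    # Convert the instant to the office's zone first, then take the date:
    # a booking at 2025-07-04T23:30-04:00 stays on July 4 locally even
    # though it is already July 5 in UTC.
    local_date = instant_utc.astimezone(ZoneInfo(office_tz)).date()
    if local_date in holidays:
        return "holiday"
    return "weekend" if local_date.weekday() >= 5 else "weekday"
```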
Per-Office Custom Holiday Overrides
- Given Compliance Admin adds a custom holiday named "Founders Day" on 2025-08-15 for Office A When the engine classifies 2025-08-15 for Office A and Office B Then Office A dayType = "holiday" and Office B dayType remains based on its source
- Given a public holiday exists on 2025-12-26 but Admin creates a per-office override marking it as "weekday" for Office A When the engine classifies 2025-12-26 for Office A Then Office A dayType = "weekday" and the per-office override takes precedence over public-holiday data
- Given Admin edits a custom holiday’s date or removes it When the change is saved Then preview and evaluation reflect the change within 5 minutes and all changes are audit-logged with user, timestamp, and affected offices
Seasonal Quiet Profiles with Effective Date Ranges
- Given a seasonal quiet profile "Summer" for Office A is effective 2025-06-01 to 2025-08-31 with quiet hours 20:00–08:00 local When a booking request is evaluated at 2025-07-10T21:30 America/New_York Then the request is blocked as within seasonal quiet hours
- Given the base (non-seasonal) profile for Office A defines quiet hours 21:00–07:00 local When a booking request is evaluated at 2025-09-01T21:30 America/New_York Then the base profile applies and the request is blocked/allowed according to 21:00–07:00
- Given overlapping seasonal profiles for the same entity When evaluating a booking Then the profile with the most recent effective start date within range is applied deterministically and documented in evaluation logs
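The overlap tie-break above (most recent effective start date wins) might look like the sketch below. The dict field names are assumptions, and the final tie-break on profile id is an added assumption to make the selection fully deterministic when two profiles share a start date.

```python
from datetime import date

def active_profile(profiles: list, on_date: date, base_profile: dict) -> dict:
    """Select the quiet-hour profile in effect on `on_date`.

    Among seasonal profiles whose [start, end] range covers the date, pick
    the one with the most recent start (then highest id, for determinism).
    Fall back to the base profile when no seasonal profile is in range.
    """
    in_range = [p for p in profiles if p["start"] <= on_date <= p["end"]]
    if not in_range:
        return base_profile
    return max(in_range, key=lambda p: (p["start"], p["id"]))
```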
DST Transition Handling in Quiet Hour Evaluation
- Given Office A uses America/New_York with quiet hours 22:00–07:00 local and US DST starts on 2025-03-09 When evaluating bookings from 2025-03-08T22:00 to 2025-03-09T07:00 Then all times in that interval are treated as within quiet hours despite the skipped 02:00–03:00 clock hour
- Given US DST ends on 2025-11-02 When evaluating bookings during the repeated 01:00 hour (both EDT and EST instances) Then both occurrences are treated consistently per the 22:00–07:00 quiet window (i.e., both blocked)
- Given day classification is needed on DST-change dates When determining weekday/weekend/holiday Then classification uses the local calendar date regardless of offset change
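One way to make quiet-hour membership DST-safe, as these criteria require, is to compare on the converted local wall-clock time: both instances of a repeated 01:00 land inside the overnight window, and the skipped hour simply never occurs as a local time. A hedged sketch (the window constants and function name are illustrative):

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

QUIET_START, QUIET_END = time(22, 0), time(7, 0)  # 22:00–07:00 local, per the example

def in_quiet_hours(instant_utc: datetime, office_tz: str) -> bool:
    """Check membership by local wall-clock time.

    Because the test runs on the converted local time, DST transitions need
    no special casing: a repeated 01:30 (EDT and EST) both convert to 01:30
    local and fall inside the overnight window.
    """
    local = instant_utc.astimezone(ZoneInfo(office_tz)).time()
    if QUIET_START <= QUIET_END:
        return QUIET_START <= local < QUIET_END
    return local >= QUIET_START or local < QUIET_END  # window spans midnight
```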
Admin Calendar Preview and Immediate Feedback
- Given a Compliance Admin opens the calendar preview for Office A for year 2025 When the public-holiday source and per-office custom overrides are applied Then each date shows dayType (weekday/weekend/holiday) calculated with the office’s time zone
- Given the Admin toggles a custom holiday on/off for a date When saving the change Then the preview updates within 2 seconds and indicates the classification source (public vs custom override)
- Given the Admin navigates months or filters by dayType = holiday When interacting with the preview Then results load within 1 second for cached months and within 3 seconds for first-time loads
Alerts on Upstream Holiday Source Changes
- Given the upstream public-holiday provider adds, modifies, or removes holidays affecting an office’s market When the system ingests or detects a diff Then a notification is sent to Compliance Admins for affected offices within 24 hours including impacted dates, markets, and change type
- Given an alert is delivered When the Admin views it Then a link opens the calendar preview pre-filtered to the impacted dates and an audit record is created upon viewing
- Given the Admin acknowledges the alert When acknowledgment is recorded Then classifications for future dates are updated immediately and the alert is marked resolved
Fallback Behavior and Manual Overrides on Data Outage
- Given the upstream holiday API is unreachable or returns errors for ≥ 60 minutes When classifying dates within the next 30 days Then the system uses last-known-good cached classifications and flags the source as "stale" in admin diagnostics
- Given there is no cache available When classifying any date during the outage Then classification falls back to weekday/weekend by day-of-week (no holidays) and an outage alert is sent to Compliance Admins
- Given the holiday provider recovers When fresh data is fetched Then future date classifications are refreshed within 60 minutes without retroactively altering past bookings, and per-office manual overrides continue to take precedence
Exception Overrides and Approval Workflow
"As a listing agent, I want to request a temporary quiet-hour exception with approvals so that high-priority showings can proceed when needed."
Description

Enable time-bound exceptions to quiet hours for specific listings, agents, teams, or offices with configurable approval routing (e.g., team lead, Compliance Admin). Requesters specify scope, reason, and duration; approvers receive notifications and can approve, modify, or decline. Approved exceptions create temporary rule layers with explicit expiration and automatic rollback. All actions are captured for audit, and enforcement must reflect exceptions immediately. Include SLA reminders, escalation rules, and visibility of active exceptions in calendars and listing pages.

Acceptance Criteria
Submit Exception Request with Required Fields
Given a requester with permission opens the Exception Request form When the requester selects a scope (listing, agent, team, or office), provides a reason, and enters a start and end date/time Then the form validates that scope, reason, start, and end are present and that end is after start And the form prevents submission and displays inline, field-specific errors if validation fails And on valid submission the system creates a request with a unique ID and status "Pending Approval" and records the routing policy to be used
Configurable Approval Routing and Decision Actions
Given organization approval routing rules exist for each scope type (e.g., Team → Compliance Admin for team scope; Compliance Admin only for office scope) When a new exception request is submitted Then approvers are assigned per the applicable routing rule (sequential or parallel as configured) And each assigned approver receives a notification within 60 seconds And an approver can Approve as requested, Modify (reduce duration and/or narrow scope), or Decline with a required reason And on Modify the request details update and the requester is notified within 60 seconds And on Approve the request status becomes "Approved" with the effective window; on Decline the status becomes "Declined"
SLA Reminders and Escalation Timers
Given reminder SLA and escalation SLA are configured for pending approvals When a request remains pending beyond the reminder SLA for its current approver(s) Then the system sends reminder notifications to those approver(s) and logs the reminder event When a request remains pending beyond the escalation SLA Then the system escalates to the next-level approver per routing rules, notifies all impacted parties, and logs the escalation And SLA timers reset appropriately when the approver changes or the request is modified
Immediate Enforcement of Approved Exceptions
Given an exception request is Approved with a defined start and end time When approval occurs and the current time is within the approved window Then scheduling enforcement updates within 10 seconds to allow bookings otherwise blocked by quiet hours for the specified scope And bookings attempted within the exception window succeed; bookings attempted outside the window are blocked per quiet-hour policy When an approver modifies an Approved exception (shorter duration or narrower scope) Then enforcement reflects the change within 10 seconds
Active Exception Visibility in Calendars and Listings
Given a user with view access opens a calendar or listing page within the scope of an active exception When the page loads Then an Active Exception indicator is visible and shows scope, reason, start, and end time And the time slots enabled by the exception are visually distinguished in the booking UI And hovering/tapping the indicator reveals details including who approved and when
Automatic Expiration and Rollback
Given an Approved exception is active When the current time passes the exception end time Then the exception deactivates within 10 seconds and scheduling reverts to the underlying quiet-hour rules And the Active Exception indicator is removed from calendars and listing pages And existing bookings created under the exception remain intact; new bookings outside quiet-hour rules are blocked When an approver cancels an active exception before its end time Then the same rollback occurs within 10 seconds
Comprehensive Audit Logging and Retrieval
Given audit logging is enabled by default When any action occurs on an exception (create, modify, approve, decline, reminder sent, escalation, activation, expiration, cancellation) Then the system records an immutable audit entry including request ID, actor, role, timestamp (UTC), action type, prior and new values, scope, and rationale where applicable And Compliance Admins can filter audit logs by date range, actor, scope type, scope ID, status, and action type And the audit trail for a specific request is viewable in chronological order from the request detail view
Compliance Audit Log and Reporting
"As a broker-owner, I want reports on blocked after-hours attempts and overrides by office and team so that I can monitor compliance and improve policies."
Description

Capture immutable logs of policy configurations, evaluations, blocked booking attempts, and exceptions with timestamps, actors, scopes, and decision outputs. Provide dashboards and exports segmented by office/team/listing to reveal after-hours demand, policy impact, and override rates. Include filters by day type, season, and market, with redaction controls for PII. Offer CSV export and API access for BI tools, define retention policies, and align storage with regional data regulations.

Acceptance Criteria
Immutable Policy Configuration Change Logging
Given a user with Compliance Admin role creates, updates, or deletes a quiet-hour policy at any scope (org, office, team, agent, listing) When the change is saved Then an immutable audit log entry is created with: ISO8601 UTC timestamp, actor ID and role, request ID, market/region, scope type and identifier, policy ID and new version, change type (create/update/delete), previous values hash, new values hash, and a structured diff And the entry is sealed with a cryptographic hash and chain index linking to the prior entry And any attempt to alter or delete the log entry via UI or API returns 403 and the stored hash and chain index remain unchanged And the log entry becomes queryable in the audit viewer and API within 5 seconds of the change being saved
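The seal-and-chain behavior described above can be sketched with a SHA-256 hash chain. This is an illustrative model only — the field names, the GENESIS sentinel, and canonical-JSON serialization are assumptions, not the spec's storage format.

```python
import hashlib
import json

def seal_entry(entry: dict, prev_hash: str, chain_index: int) -> dict:
    """Seal an audit record by hashing its canonical JSON together with the
    previous entry's hash, linking it into a tamper-evident chain."""
    body = dict(entry, chain_index=chain_index, prev_hash=prev_hash)
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return dict(body, hash=digest)

def verify_chain(entries: list) -> bool:
    """Recompute every hash and check each entry links to its predecessor;
    any mutation of a sealed record breaks verification."""
    prev = "GENESIS"
    for i, e in enumerate(entries):
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev_hash"] != prev or e["chain_index"] != i:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

This is why the spec can promise that a 403-blocked edit attempt leaves "the stored hash and chain index unchanged": any in-place change would be detectable on the next chain verification.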
Decision and Blocked Attempt Logging
Given a booking request is submitted that intersects a quiet-hour window or requires evaluation against quiet-hour policies When the policy evaluation runs Then a decision log is written with: booking request ID, evaluated timestamp (UTC), listing ID, office/team/agent scope path, day type (weekday/weekend/holiday), season, market, consulted policy ID(s) and version(s), effective windows, evaluation outcome (allow/deny/override-required), and reason codes And if the outcome is deny, a blocked-attempt log is created with attempted start/end, requester role, channel (web/app/QR), and block reason And decision and blocked-attempt logs share a correlation ID with the booking request and appear in aggregates within 1 minute And 99% of decision logs are persisted within 200 ms of the decision being made
Override/Exception Approval Logging
Given a manager or authorized role attempts to grant an exception to permit a booking during quiet hours When the override is approved Then an audit log entry records: approver ID/role, requester ID/role, linked blocked-attempt ID, scope, override window (start/end), rationale text, timestamp, expiration, booking ID (if created), and policy version at approval time And subsequent booking creation references the override ID in its decision log And revocation or expiry of the override writes an audit entry updating status to revoked/expired with timestamp and actor (or system) And unapproved override attempts are denied and logged with reason insufficient-permission
Role-Based PII Redaction Controls
Given a user without PII access permission views audit data via UI, CSV export, or API When records containing PII fields (name, phone, email) are returned Then PII values are redacted using irreversible tokens and/or partial masking per policy, while non-PII fields remain intact And users with PII permission can toggle redaction off in-session; the toggle action is logged with user, time, scope, and stated reason And redaction is applied consistently across dashboards, detail views, CSV, and API for the same filter set And attempts to request unredacted API fields without permission return 403 with error code PII_FORBIDDEN And exports generated for non-PII roles contain no unredacted PII
Analytics Dashboards with Segmentation & Filters
Given a Compliance Admin opens the Compliance Analytics dashboard When they apply segmentations by office/team/listing and filters for day type, season, market, and date range Then charts and tables display after-hours demand, policy impact (denied vs allowed), and override rates computed from audit logs for the selected segments And results update within 2 seconds at the 95th percentile for 12 months of aggregated data up to 1,000,000 events And selecting any chart point drills down to decision records with correlation IDs and linked booking/override details And counts in the dashboard match CSV/API exports for the same filter set within ±0.1%
CSV Export & Reporting API Access
Given a user requests an export for a specified filter set When the export job is created Then an asynchronous job is queued with status (queued/running/completed/failed), and the user is notified on completion with a signed URL valid for 24 hours And the CSV includes stable column headers, UTC timestamps, a schema version, and preserves server-side sorting And exports up to 1,000,000 rows complete within 5 minutes at the 95th percentile And reporting API endpoints provide the same filters, cursor-based pagination, sorting, and redaction rules as the UI, returning within 1 second at the 95th percentile for pages up to 5,000 records And API access requires OAuth2 with scope audit.read; unauthorized or insufficient scope requests return 401/403
Data Retention & Regional Storage Compliance
Given the organization operates across multiple markets with different data residency requirements When audit logs are stored and retention policies are enforced Then logs for EU markets are stored exclusively in EU-designated storage and are not exported cross-region unless an explicit cross-region permission is granted And default retention is 24 months with market-level overrides; purge jobs run daily and remove expired records within 7 days And each purge operation creates an audit entry including count purged, time window, market, and actor=system And legal holds can be applied per scope to suspend purge; hold placement and removal are logged, and held records are excluded from deletion And attempts to retrieve purged records by ID return 404 with error code RECORD_PURGED

Smart Buffers

Automatically calculates realistic travel buffers between showings using distance, traffic patterns, and parking time. Blocks back‑to‑back slots that risk lateness and suggests the earliest feasible alternatives. Reduces stress, no‑shows, and apology texts while keeping agendas achievable.

Requirements

Real-time Travel Time Engine
"As a listing agent, I want accurate, real-time ETAs between showings so that buffers reflect true conditions and I arrive on time."
Description

Implements a backend service to compute door-to-door travel time between sequential showings using distance, live traffic, and historical congestion patterns; supports driving, walking, and mixed modes; includes provider abstraction to integrate multiple mapping APIs (Google, Apple, Mapbox) with rate-limit aware caching and fallback; exposes ETA, variance/confidence, and recommended buffer via internal API to the scheduler; updates ETAs in near real time as conditions change and triggers re-evaluation of upcoming slots; designed to scale to thousands of concurrent tours with p95 < 300 ms per request; logs telemetry for accuracy monitoring and continuous model tuning.

Acceptance Criteria
Urban rush-hour driving ETA and buffer computation
- Given two sequential showings 6–10 km apart in the same city during defined rush-hour windows, When the scheduler requests /eta with mode=auto, Then the engine returns fields: etaMinutes (>0), varianceMinutes (>=0), confidence (0–1), recommendedBufferMinutes (>=5), mode="driving", providerUsed, requestId.
- Given 200 predefined urban rush-hour routes with recorded ground truth arrivals, When ETAs are computed, Then driving MAE <= 4 minutes and p90 absolute error <= 8 minutes.
- Given any single request in steady state, When processed, Then total latency p95 <= 300 ms and external-maps latency p95 <= 200 ms for this scenario.
- Given an ETA where predicted lateness probability > 0.2 for a back-to-back slot, When the engine returns recommendedBufferMinutes, Then recommendedBufferMinutes is sufficient to reduce predicted lateness probability to <= 0.1 and is included in the response.
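One plausible way to size recommendedBufferMinutes so that predicted lateness probability drops to ≤ 0.1 is to pad the point ETA by the 90th-percentile quantile of an arrival-error model. The normal-error model below is an assumption for illustration — the spec fixes only the target probability, not the model.

```python
import math
from statistics import NormalDist

MIN_BUFFER_MIN = 5  # floor from the criteria above (recommendedBufferMinutes >= 5)

def recommended_buffer(variance_minutes: float, max_late_prob: float = 0.1) -> int:
    """Illustrative buffer sizing: assume arrival error ~ Normal(0, variance_minutes)
    and pad the point ETA by the (1 - max_late_prob) quantile, so the predicted
    probability of arriving after eta + buffer is at most max_late_prob."""
    z = NormalDist().inv_cdf(1 - max_late_prob)  # ~1.2816 for max_late_prob = 0.1
    return max(MIN_BUFFER_MIN, math.ceil(z * max(variance_minutes, 0.0)))
```

Under this model a leg with 10 minutes of arrival-time spread gets a 13-minute buffer, while a very predictable leg bottoms out at the 5-minute floor.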
Mixed-mode door-to-door (drive + park + walk) handling
- Given a tour step where the destination requires off-street parking and a 3–8 minute walk, When /eta is called with mode=auto, Then the engine returns a legged plan including drive, parkingSearchMinutes, and walk legs, and the door-to-door etaMinutes includes all legs.
- Given downtown test cases with known average parking search times, When computing mixed-mode ETAs, Then final MAE <= 5 minutes and walking leg p90 error <= 2 minutes.
- Given the legged response, When returned, Then fields include: legs[].type in {drive, park, walk}, legs[].etaMinutes, total etaMinutes, varianceMinutes, confidence, and recommendedBufferMinutes that accounts for multi-leg uncertainty.
- Given an impossible mixed-mode route (e.g., pedestrian-only zone with no legal parking within policy radius), When requested, Then the engine returns errorCode="ROUTE_UNAVAILABLE" within 300 ms and suggests next feasible arrival time via earliestFeasibleAt.
Provider abstraction with rate-limit-aware caching and automatic fallback
- Given primary provider returns HTTP 429 or times out (> 800 ms), When /eta is requested, Then the engine retries against a secondary provider within the same request and returns a successful ETA for >= 99.5% of such cases.
- Given identical origin/destination/departure/mode parameters within cache TTL, When multiple requests arrive, Then only the first results in an external API call and subsequent requests are served from cache (cacheHit=true) with hit rate >= 60% under synthetic mixed-traffic load.
- Given provider QPS limits configured (e.g., 100 QPS), When sustained traffic approaches the limit, Then the engine self-throttles, maintains external 429 rate < 0.5%, and overall error rate < 1%.
- Given a cached entry older than stalenessThreshold but within maxCacheAge, When providers are degraded, Then the engine may serve stale=true responses and includes stalenessSeconds metadata.
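The cache-then-fallback flow above might be sketched as follows. `EtaCache`, the key shape, and treating any provider exception as a retry trigger are illustrative assumptions; a real implementation would distinguish 429s from timeouts and add staleness metadata.

```python
import time

class EtaCache:
    """Minimal sketch: TTL cache keyed on (origin, destination, departure, mode),
    with ordered provider fallback on a miss."""

    def __init__(self, ttl_s: float = 60.0):
        self.ttl_s = ttl_s
        self._store = {}  # key -> {"eta": dict, "at": epoch seconds}

    def get_eta(self, key, providers, now=time.time):
        hit = self._store.get(key)
        if hit and now() - hit["at"] <= self.ttl_s:
            return dict(hit["eta"], cacheHit=True)   # served without an external call
        last_err = None
        for provider in providers:                   # primary first, then fallbacks
            try:
                eta = provider(key)
                self._store[key] = {"eta": eta, "at": now()}
                return dict(eta, cacheHit=False)
            except Exception as err:                 # 429 / timeout -> try next provider
                last_err = err
        raise RuntimeError("all providers failed") from last_err
```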
Near real-time ETA updates triggering scheduler re-evaluation
- Given an upcoming slot within the next 90 minutes, When live traffic causes ETA to change by >= 3 minutes or lateness probability crosses 10%, Then the engine emits an update event to the scheduler within 5 seconds of detection.
- Given an emitted update, When processed end-to-end, Then 95% of scheduler callbacks complete re-evaluation within 10 seconds and risky back-to-back slots are marked unavailable with blockReason="travel_risk".
- Given a recalculated plan, When alternatives exist, Then the engine provides earliestFeasibleAlternatives[0..2] with startTimes and predicted on-time confidence >= 0.9.
- Given successive updates for the same leg, When delivered, Then version increases monotonically and only the latest version is active in the scheduler.
Performance and scalability under load
- Given a production-like load of 1,500 RPS across 2,000 concurrent tours and mixed modes, When sustained for 30 minutes, Then API latency p95 <= 300 ms, p99 <= 600 ms, and error rate <= 0.5%.
- Given autoscaling policies, When load ramps from 200 to 1,500 RPS in 5 minutes, Then instance pool scales within 2 minutes without breaching p95 > 300 ms for more than 60 seconds.
- Given external provider quotas, When traffic spikes, Then the system sheds non-critical recomputations first, preserving success rate >= 99% for on-demand ETA requests.
- Given memory and CPU budgets per instance, When at target load, Then average CPU <= 70% and memory headroom >= 20%.
Telemetry and observability for accuracy and tuning
- Given any /eta response, When logged, Then telemetry includes: requestId, tourId, time, geohashOrigin/Destination (precision 6), mode, providerUsed, cacheHit, externalLatencyMs, totalLatencyMs, etaMinutes, varianceMinutes, confidence, recommendedBufferMinutes, version, and errorCode (if any), with PII (raw addresses) excluded.
- Given deployed dashboards, When observed, Then they expose p50/p95/p99 latency, success rate, provider error/429 rates, cache hit rate, and accuracy MAE/p90 by city and mode.
- Given SLA breaches (p95 > 300 ms for 5 minutes or provider 429 rate > 1%), When detected, Then alerts fire to the on-call channel within 2 minutes with runbooks linked.
- Given sampling configuration, When enabled, Then payload/body sampling rate is <= 10% and never includes full raw addresses.
ETA accuracy and confidence calibration using ground truth
- Given a rolling window of the last 5,000 completed legs with ground-truth arrival times, When evaluated, Then driving MAE <= 4 minutes, walking MAE <= 2 minutes, and mixed-mode MAE <= 5 minutes.
- Given confidence scores, When binned, Then for confidence >= 0.8 at least 75% of trips land within etaMinutes ± varianceMinutes; Brier score <= 0.20 over the window.
- Given rush-hour vs off-peak segments, When compared, Then varianceMinutes is higher during rush hour and calibration curves show monotonic reliability.
- Given underperforming city-mode segments, When detected, Then an automatic flag is recorded and model tuning jobs are queued within 24 hours.
Conflict Guard & Auto-Reschedule
"As a listing agent, I want the system to block risky back-to-back slots and propose the earliest feasible alternatives so that my schedule remains realistic and professional."
Description

Prevents booking of back-to-back time slots that exceed lateness risk thresholds based on computed ETAs and configured buffers; hard-blocks infeasible slots and presents the earliest feasible alternatives within the agent’s availability and listing access windows; continuously monitors active tours and, when delays arise, suggests one-tap bulk adjustments that preserve sequence constraints and required buffers; coordinates notifications to buyers’ agents, sellers, and team members, capturing acceptance where required; ensures no overlaps, adheres to lockbox/HOA access windows, and honors do-not-disturb quiet hours; writes changes to the core schedule with idempotent operations and rollback on failure.

Acceptance Criteria
Hard-Block Infeasible Back-to-Back Booking
Given an agent attempts to book Appointment B directly after Appointment A And computed ETA from A to B plus configured buffer exceeds B’s start time or lateness risk >= configured threshold When the booking action is submitted Then the system prevents creation of Appointment B And displays a clear error explaining the violated constraint(s) and the estimated lateness And no tentative hold is placed on the time slot
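The hard-block predicate above reduces to a simple check: the previous showing's end plus travel ETA plus the configured buffer must fit before the next start, and predicted lateness risk must stay under the threshold. A sketch with illustrative parameter names:

```python
from datetime import datetime, timedelta

def is_feasible(prev_end: datetime, next_start: datetime,
                eta_minutes: float, buffer_minutes: float,
                lateness_risk: float, risk_threshold: float = 0.2) -> bool:
    """Return True only if Appointment B may be booked after Appointment A.

    Blocks when arrival (A's end + ETA + buffer) would overrun B's start,
    or when predicted lateness risk meets/exceeds the configured threshold.
    """
    arrival = prev_end + timedelta(minutes=eta_minutes + buffer_minutes)
    return arrival <= next_start and lateness_risk < risk_threshold
```

In the spec's flow, a False result prevents the booking outright (no tentative hold) and routes the agent to the alternative-suggestion path.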
Suggest Earliest Feasible Alternatives
Given a booking attempt is blocked due to insufficient buffer or lateness risk And agent availability, listing access windows, and quiet hours are known When alternatives are requested Then the system presents the earliest feasible alternative that satisfies buffers, access windows, and quiet hours And additional feasible options (if any) are listed in ascending start time And selecting an alternative books it without creating overlaps or violating constraints
Real-Time Delay Auto Bulk Adjustment
Given an active tour with a defined sequence (A -> B -> C) and required buffers And live ETA indicates B will be late beyond the configured threshold When the system detects the delay Then it proposes a one-tap bulk adjustment that shifts B and subsequent appointments while preserving sequence and required buffers And the proposal respects listing access windows and quiet hours And applying the proposal updates all impacted appointments atomically and logs before/after times
Coordinated Notifications & Acceptance Capture
Given a reschedule affects buyers’ agents, sellers, and team members with defined notification and acceptance rules When the reschedule is applied Then notifications are sent to all affected parties via their configured channels within 60 seconds And recipients requiring acceptance receive actionable requests and their responses (accept/decline) are recorded with timestamp and identity And affected appointments remain Pending until all required acceptances are captured; upon acceptance they move to Confirmed And if any required recipient declines, the system notifies the agent and reopens alternative suggestion flow for that appointment
Constraint Validation: Overlaps, Access Windows, Quiet Hours
Given any booking or reschedule action is evaluated When constraints are checked Then there are zero time overlaps on the agent’s schedule for all impacted appointments And all times fall within each listing’s lockbox/HOA access windows And all times comply with do-not-disturb quiet hours And if constraints cannot be satisfied, the action is blocked and the specific violated constraints are displayed
Idempotent Writes and Rollback on Failure
Given a reschedule plan affecting N appointments When the system executes the update Then the operation is idempotent: retrying with the same inputs produces the same schedule state and no duplicate notifications And if any write or notification step fails, all changes are rolled back and the prior schedule state is restored And the failure is logged with a correlation ID and the agent is informed of the rollback
Buffer Policy & Agent Preferences
"As a broker-owner, I want configurable buffer policies and overrides so that teams can align to standards while accommodating local realities."
Description

Provides a configurable policy layer for buffer generation, including global minimums, time-of-day multipliers, property-type modifiers (e.g., condos vs. single-family), weekend/peak-hour rules, and late-risk thresholds; supports office-level defaults with agent-level overrides and team templates; enables 'soft suggestions' vs. 'hard blocks' behavior, manual overrides with reason codes, and exception windows; exposes a lightweight UI in TourEcho’s scheduling flow and a policy API used by the Travel Time Engine and Conflict Guard; persists audit-ready change history to support compliance and coaching.

Acceptance Criteria
Office Defaults, Team Templates, and Agent Overrides
- Given office default policy values and no agent override, When scheduling a new showing, Then the effective policy equals the office defaults for all parameters.
- Given a team template is assigned and an agent sets an override for propertyTypeModifiers.condo, When computing the effective policy, Then the condo modifier equals the agent value and non-overridden parameters equal the team template.
- Given conflicting values across levels, When calling GET /policy/v1/effective, Then each parameter includes value, source (agent|team|office), and precedence agent > team > office is applied.
- When a policy change is saved, Then Travel Time Engine requests reflect the new effective policy within 60 seconds.
- Automated tests validate the precedence matrix across all exposed parameters with 100% pass rate.
Time-of-Day, Weekend, and Property-Type Adjustments in Buffer Calculation
- Given base travel time 20 min at 17:30, time-of-day multiplier 1.5 for 17:00–19:00, weekend multiplier 1.2 on Sat/Sun, condo modifier +5 min, and parking +8 min, When the date is Saturday, Then buffer = ceilTo5(20*1.5*1.2) + 5 + 8 = ceilTo5(36) + 13 = 40 + 13 = 53 minutes.
- Rounding rule: ceilTo5 rounds any partial minute up to the next 5-minute increment.
- Given a missing modifier for a property type, Then default adjustment is 0 minutes.
- Unit tests cover ≥100 sample routes with mean absolute error ≤ 1 minute between expected and computed buffer.
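The worked formula above — multipliers applied to the base travel time, the product rounded up to the next 5-minute increment, then flat property-type and parking modifiers added — can be encoded directly. Function names here are illustrative, not the engine's API.

```python
import math

def ceil_to_5(minutes: float) -> int:
    """Round any partial result up to the next 5-minute increment."""
    return int(math.ceil(minutes / 5) * 5)

def buffer_minutes(base: float, tod_mult: float, weekend_mult: float,
                   property_adj: float, parking_adj: float) -> int:
    # Multiplicative factors scale the base travel time; flat modifiers are
    # added after rounding, matching the worked example's operation order.
    return ceil_to_5(base * tod_mult * weekend_mult) + int(property_adj + parking_adj)
```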
Late-Risk Thresholds Drive Soft Suggestions vs Hard Blocks
- Given hardBlockThreshold=20% and softSuggestRange=10–19.99%, When predicted lateness probability ≥ 20%, Then the slot is hard-blocked with explanatory label and the earliest feasible alternative is suggested.
- When predicted lateness probability is within 10–19.99%, Then the slot remains selectable with a “Soft Suggestion: add buffer” banner.
- When predicted lateness probability < 10%, Then no warning or block is shown.
- Decision evaluation plus suggestion lookup completes with P50 ≤ 200 ms and P95 ≤ 400 ms.
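The three-band behavior above maps to a small decision function. The threshold defaults mirror the stated values; the return labels are illustrative names, not the product's API.

```python
def slot_decision(late_prob: float, hard_threshold: float = 0.20,
                  soft_floor: float = 0.10) -> str:
    """Map predicted lateness probability to UI behavior:
    >= hard_threshold -> hard block; [soft_floor, hard_threshold) -> soft
    suggestion banner; below soft_floor -> no warning."""
    if late_prob >= hard_threshold:
        return "hard_block"
    if late_prob >= soft_floor:
        return "soft_suggest"
    return "none"
```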
Manual Override, Reason Codes, and Exception Windows
- Given a hard-blocked slot, When an agent with permission "Can override buffer policy" selects Override, Then a reason code is required and an optional note (≤ 280 chars) can be added.
- On save, Then the override applies to the selected slot only, and an audit record captures actorId, timestamp, reasonCode, note, and policy snapshot within 1 second.
- Given an active exception window for the agent, When window conditions are met, Then hard blocks downgrade to soft suggestions during the window.
- When no reason code is provided, Then the override action is rejected with a validation error.
Scheduling UI Indicators and Actions
- When selecting adjacent appointments, Then a buffer badge shows minutes, risk color (green<10%, amber 10–19.99%, red≥20%), and a lock icon if blocked.
- On hover/tap, Then a details panel lists factor breakdown: base, time-of-day, weekend, property-type, parking, rounding, with each value displayed.
- On "Earliest Feasible" action, Then a new start time suggestion is returned within 2 taps/clicks and ≤ 500 ms backend time.
- UI indicators meet WCAG AA contrast and all actions are keyboard accessible.
Policy API Contracts and Caching
- GET /policy/v1/effective?agentId={id} returns 200 with payload containing version, etag, issuedAt, ttlSeconds≤60, and per-parameter value and source.
- Travel Time Engine includes If-None-Match with the etag; server returns 304 when unchanged.
- POST /policy/v1/evaluate returns deterministic output {decision, reasons[], suggestedAlternatives[]} for a provided schedule payload.
- P95 latency ≤ 200 ms for GET and ≤ 300 ms for POST under 50 RPS in staging; error rate < 0.1%.
Audit-Ready Policy Change History
- Every policy mutation (create/update/delete/template assignment/override) produces an immutable record with before, after, actorId, role, ip, userAgent, timestamp (UTC ISO-8601), reasonCode, and scope (office/team/agent).
- Records are hash-chained; GET /policy/v1/audit/verify returns status "valid" when the chain is intact.
- History endpoint supports filters by actor, parameter, and date range and exports CSV with SHA-256 checksum.
- Retention ≥ 24 months; legal erasure requests mark records as redacted with retentionJustification and purgeTicketId.
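Hash chaining, as required above, means each record stores a digest of its predecessor, so any in-place tampering invalidates everything downstream. A minimal sketch under assumed record shapes (the real service's serialization and fields may differ):

```python
# Hypothetical sketch of hash-chained audit records: each record carries
# the SHA-256 digest of the previous record, so mutating any earlier
# record breaks verification.
import hashlib
import json

def _digest(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(chain: list[dict], entry: dict) -> None:
    prev_hash = _digest(chain[-1]) if chain else "0" * 64
    chain.append({**entry, "prevHash": prev_hash})

def verify(chain: list[dict]) -> str:
    prev_hash = "0" * 64
    for record in chain:
        if record["prevHash"] != prev_hash:
            return "invalid"
        prev_hash = _digest(record)
    return "valid"
```

This is the check `/policy/v1/audit/verify` would run; redaction for legal erasure has to preserve the chained hashes (e.g., by blanking fields while retaining the original digests), which is why the criteria call for retentionJustification rather than deletion.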
Calendar Sync with Travel Blocks
"As an agent, I want travel buffers written to my external calendar so that my day reflects true availability across all tools."
Description

Integrates two-way with Google and Microsoft 365 calendars to ingest existing events, detect conflicts, and write travel buffers as busy blocks; updates external calendars when Smart Buffers adjusts an itinerary, preserving attendees and notes; supports per-calendar selection, time zone handling, and working hours; degrades gracefully on sync errors with retry backoff and user-facing alerts; ensures principle of least privilege via OAuth scopes and allows per-user revocation; normalizes external updates into TourEcho’s schedule in near real time to keep a single source of truth.

Acceptance Criteria
Write Travel Buffers as Busy Blocks to External Calendars
- Given a connected Google or Microsoft 365 calendar is designated for writes, When Smart Buffers computes a 17-minute travel time between two confirmed showings, Then a Busy event titled "Travel: <from address> → <to address>" is created on the designated external calendar with start and end matching the computed window to the nearest minute.
- Given a travel block previously created by TourEcho exists on the external calendar, When Smart Buffers recomputes the buffer due to a schedule change, Then the same external event is updated in place (same event ID) without creating duplicates.
- Given a travel block created by TourEcho, When viewed in the external calendar, Then its visibility is Busy, its description contains a link back to the TourEcho itinerary, and it is tagged with an extended property indicating "tourecho:managed=true".
- Given a travel block created by TourEcho, When a user adds attendees to it in the external calendar, Then TourEcho does not remove user-added attendees on subsequent updates and does not add attendees itself.
Preserve Attendees and Notes on Showing Event Updates
- Given a showing event with external attendees and a description exists on the designated write calendar, When Smart Buffers adjusts the showing start time by up to 30 minutes, Then TourEcho updates only the start/end times and location, preserving existing attendees and the description content verbatim.
- Given a showing event updated by TourEcho, When inspected in the external calendar, Then the event organizer remains unchanged and any meeting link present remains intact.
- Given a showing event managed by TourEcho, When the time is adjusted in the external calendar by the user, Then TourEcho ingests the change and updates its internal schedule, triggering buffer recalculation within 60 seconds.
External Conflict Detection and Alternative Suggestions
- Given an external event overlaps a proposed showing or its travel buffer by at least 1 minute on any selected read calendar, When the user attempts to schedule the showing, Then TourEcho blocks confirmation and displays a conflict message naming the external event and time range.
- Given a conflict is detected, When the user requests alternatives, Then TourEcho suggests the earliest three feasible start times within the user’s working hours that maintain non-overlapping buffers, with the first option starting no earlier than now plus the required travel buffer.
- Given the user selects a suggested alternative, When confirmed, Then TourEcho writes the showing and associated travel blocks to the external calendar without conflicts.
Per-Calendar Selection, Least-Privilege OAuth, and Revocation
- Given a user connects Google or Microsoft 365 via OAuth, When scopes are requested, Then only read access is requested for selected read-only calendars, and read/write access only for the single calendar the user designates for writes.
- Given calendars are listed in settings, When the user toggles selection and saves, Then TourEcho reads from only the selected calendars and writes exclusively to the designated write calendar within 10 seconds of save.
- Given the user clicks Disconnect in settings, When revocation is confirmed, Then OAuth tokens are revoked, webhook subscriptions are removed, and syncing stops within 60 seconds; no further reads or writes occur until reconnected.
- Given revocation occurs while outbound writes are queued, When revocation completes, Then the outbound queue is purged and the user sees a dismissible alert indicating the count of unsynced items.
Time Zones, DST, and Working Hours Respect
- Given the user’s profile time zone is America/Los_Angeles and a listing is in America/Denver, When a showing is scheduled at 3:00 PM listing local time on a DST transition day, Then the external calendar events display 3:00–3:30 PM Mountain Time and the travel blocks align without a one-hour drift.
- Given external events originate from different time zones, When rendered in TourEcho, Then all times are normalized to the user’s profile time zone with correct UTC offsets.
- Given working hours are set to Mon–Fri 09:00–18:00 local, When suggesting alternatives, Then no suggestions fall outside working hours unless the user explicitly overrides for that action.
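The DST scenario above hinges on storing zone-aware datetimes rather than fixed offsets. A sketch using the Python standard library's IANA zone database (the specific date is the 2025 US spring-forward day, chosen to match the "DST transition day" criterion):

```python
# Sketch of timezone normalization for the DST scenario above, using
# stdlib zoneinfo; the zones and times match the criteria.
from datetime import datetime
from zoneinfo import ZoneInfo

# 3:00 PM listing-local time in Denver on a US DST transition day
showing = datetime(2025, 3, 9, 15, 0, tzinfo=ZoneInfo("America/Denver"))

# Normalize to the agent's profile time zone for display in TourEcho
agent_view = showing.astimezone(ZoneInfo("America/Los_Angeles"))
```

Because both values reference the same instant, no one-hour drift is possible: the agent sees 2:00 PM Pacific for a 3:00 PM Mountain showing, with each zone's post-transition UTC offset applied correctly.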
Graceful Degradation, Retry Backoff, and Alerts
- Given the external API returns HTTP 429 or 5xx, When syncing, Then TourEcho retries with exponential backoff and jitter starting at 1 minute and capping at 15 minutes, for up to 24 hours before marking sync as degraded.
- Given sync is degraded for more than 10 minutes, When the user opens TourEcho, Then an in-app banner appears within 5 seconds and a single email is sent with the last error summary and next retry time.
- Given outbound changes occur during degradation, When connectivity resumes, Then all queued changes are applied in creation order without duplication, and the "Last successful sync" timestamp updates to the most recent fully successful operation.
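The retry schedule above (exponential, jittered, 1-minute base, 15-minute cap) can be sketched as a delay function. The full-jitter variant shown here is an assumption; the criteria do not specify which jitter strategy applies.

```python
# Illustrative backoff schedule matching the stated criteria:
# exponential with jitter, starting at 1 minute, capped at 15 minutes.
import random

def backoff_delay_seconds(attempt: int, base: float = 60.0, cap: float = 900.0) -> float:
    """Delay before retry number `attempt` (0-based), with full jitter."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

After roughly four attempts the uncapped delay would exceed 15 minutes, so the cap dominates for the remainder of the 24-hour window before the sync is marked degraded.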
Near Real-Time Ingestion and Single Source of Truth
- Given calendar push notifications are enabled, When an external event in a selected calendar is created, updated, or deleted, Then TourEcho reflects the change within 60 seconds; if push is unavailable, polling applies the change within 5 minutes.
- Given an external event mapped to a TourEcho showing is deleted externally, When the deletion is ingested, Then the showing is marked removed in TourEcho, affected buffers are recalculated, and the user receives a notification.
- Given duplicate or out-of-order notifications arrive, When processing external changes, Then idempotency keys ensure each change is applied once and in order, preserving consistency between TourEcho and the external calendar.
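The last criterion combines two mechanisms: an idempotency key to drop duplicate deliveries, and a per-event sequence number to drop stale (out-of-order) updates. A hypothetical sketch; the key and sequence fields are assumptions about the notification payload:

```python
# Hypothetical sketch of idempotent, in-order change application:
# duplicates (same idempotency key) and stale updates (older sequence
# number) are both dropped.

applied_keys: set[str] = set()
latest_seq: dict[str, int] = {}   # event_id -> highest sequence applied

def apply_change(event_id: str, seq: int, idempotency_key: str) -> bool:
    """Apply a change exactly once, and only if it is not out of date."""
    if idempotency_key in applied_keys:
        return False                           # duplicate delivery
    applied_keys.add(idempotency_key)
    if seq <= latest_seq.get(event_id, -1):
        return False                           # stale (out-of-order) update
    latest_seq[event_id] = seq
    return True
```

In production the two lookup structures would live in a database with appropriate TTLs rather than process memory.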
Parking & Access Time Model
"As an agent, I want parking and access time included in buffers so that dense urban showings don’t make me late."
Description

Augments travel estimates with realistic parking and access durations based on neighborhood density, building type, and property-specific metadata (e.g., street parking, garage, concierge, elevator, lockbox location); learns from prior tours and agent feedback to refine estimates; allows per-listing configuration and temporary event-based overrides (e.g., game day traffic); surfaces access instructions on the itinerary and factors them into buffer calculations; includes guardrails for rural areas where parking time is negligible; exposes model components and confidence to the Explainability layer.

Acceptance Criteria
Urban High-Density: Street Parking, Concierge Check-In, Elevator Access
- Given a listing with metadata neighborhood_density=urban_high, building_type=highrise, parking=street, concierge=true, elevator=true, lockbox_location=concierge_desk, and a prior showing with travel_time_minutes=T, When Smart Buffers compute the buffer for the next showing, Then parking_access_minutes is between 8 and 15 inclusive and includes the subcomponents parking_search, concierge_checkin, elevator_wait, elevator_travel, and lockbox_retrieval.
- And total_buffer_minutes >= T + parking_access_minutes.
- And any back-to-back slot with gap < total_buffer_minutes is blocked.
- And the system suggests the earliest feasible alternative within +/- 2 minutes of the total_buffer_minutes offset for the affected slot.
Rural Guardrail: Negligible Parking/Access Time
- Given neighborhood_density=rural, parking in {off_street, driveway}, building_type=single_family, concierge=false, elevator=false, When computing parking_access_minutes, Then parking_access_minutes <= 2.
- And total_buffer_minutes = travel_time_minutes + parking_access_minutes.
- And no slot is blocked solely because of parking/access time, since the estimate never exceeds the 2-minute guardrail.
Per-Listing Parking & Access Configuration Overrides
- Given an agent configures listing-level settings override_active=true, parking_access_minutes=6, and defined access_steps, When buffers are computed for that listing during the override period, Then the model uses 6 for parking_access_minutes instead of the learned/default estimate.
- And Explainability shows the component listing_override=6 with source=manual.
- And an audit entry is created with user, timestamp, old_value, and new_value.
- And when override_active=false, the model reverts to the learned/default estimate on the next computation.
Event-Based Temporary Override (e.g., Game Day)
- Given an event override exists for zone=Stadium_A within window W_start–W_end with extra_parking_minutes=10, When a showing is scheduled for an affected listing during this window, Then parking_access_minutes is increased by 10.
- And Explainability shows event_override=10 with source=Stadium_A.
- And after W_end, the extra_parking_minutes is no longer applied without manual change.
Model Learning from Tour Outcomes and Agent Feedback
- Given at least 5 completed showings for a listing with recorded actual parking_access durations and agent feedback submitted, When the nightly model update runs, Then the per-listing expected parking_access_minutes is updated such that the mean absolute error (MAE) against the last 5 actuals decreases by at least 10% versus the prior estimate.
- And the updated estimate is used for buffer calculations on the next scheduling action for that listing.
- And Explainability shows learning_adjustment with delta_minutes and confidence in [0.0, 1.0].
- And if a listing-level override is active, the learning update is stored but not applied until the override is disabled.
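The MAE improvement check above is simple to state concretely. All numbers below are made-up illustrations of the acceptance test, not real model output:

```python
# Sketch of the MAE-improvement criterion; values are illustrative.
def mae(estimate: float, actuals: list[float]) -> float:
    """Mean absolute error of a point estimate against observed durations."""
    return sum(abs(estimate - a) for a in actuals) / len(actuals)

actuals = [9, 11, 10, 12, 8]        # last 5 recorded parking/access minutes
prior, updated = 14.0, 10.5         # hypothetical prior and updated estimates

# Criterion: updated MAE must be at least 10% lower than the prior MAE
improved = mae(updated, actuals) <= 0.9 * mae(prior, actuals)
```

Here the prior estimate's MAE is 4.0 minutes and the updated estimate's is 1.3 minutes, so the nightly update would pass the criterion.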
Itinerary Surfacing of Access Instructions
- Given a scheduled showing has computed access steps (e.g., park, retrieve lockbox, concierge check-in, elevator to floor), When the agent views the itinerary for that showing, Then the itinerary displays step-by-step access instructions in order with estimated minutes per step.
- And the sum of step estimates equals parking_access_minutes within +/- 1 minute (rounding allowed).
- And if metadata is missing, default steps are inserted and flagged as assumed with an info indicator.
Explainability: Parking/Access Component Breakdown and Confidence
- Given a parking & access estimate has been computed for a showing, When the user opens the Explainability panel or calls the /explainability endpoint for that showing, Then the system returns components with minutes and sources: parking_search, access_steps, building_type_adjustment, density_adjustment, guardrail_adjustment (if any), listing_override (if any), event_override (if any), learning_adjustment (if any).
- And a confidence score in [0.0, 1.0] is returned for the overall parking_access estimate and for each component.
- And the sum of component minutes equals parking_access_minutes within +/- 1 minute.
- And missing metadata fields are explicitly listed under assumptions with the defaults used.
Tour Route Optimization
"As an agent, I want optimized tour sequences that minimize travel while honoring constraints so that I can show more homes with less stress."
Description

Optimizes the sequence of multiple showings to minimize total travel and parking time while respecting fixed appointment windows, required buffers, and client/agent availability; offers 'Best Route' suggestions and what-if scenarios before confirming a tour; supports constraints such as must-see order, maximum tour duration, and time-boxed gaps (e.g., lunch); produces a turn-by-turn agenda with ETAs and buffer blocks; recalculates on the fly if a property cancels or the client runs late, preserving as many commitments as possible.

Acceptance Criteria
Best Route Suggestion Honors Windows, Buffers, Availability
- Given a tour with 4–10 properties, each with fixed appointment windows, agent/client availability, and required buffer rules, When the agent requests “Best Route”, Then the system returns a sequenced route that satisfies all fixed windows, availability constraints, and buffers with zero violations.
- And the computation completes in ≤ 3 seconds for up to 10 stops on a broadband connection.
- And the output shows total drive time, parking time, buffer time, and projected tour end time.
- And any property that makes the route infeasible is clearly flagged with reason codes (e.g., window conflict, max duration exceeded).
Buffer-Aware Slot Blocking and Alternative Proposals
- Given a candidate pair of back-to-back showings whose computed travel + parking exceeds the available gap, When the agent attempts to schedule them consecutively, Then the system blocks the placement and displays the required buffer gap in minutes.
- And it suggests the earliest 3 feasible alternative start times/slots that satisfy buffers and windows, if available; otherwise it explains the unavailability.
- And each suggestion includes updated ETA, buffer size, and lateness risk (0%, <5 minutes, >5 minutes).
- And suggestions generate in ≤ 2 seconds.
Must-See Order, Max Duration, and Time-Boxed Gaps Respected
- Given a tour with a user-specified must-see order subset, a maximum tour duration, and a protected time-boxed gap (e.g., lunch 12:00–12:30), When the agent optimizes the route, Then the final sequence preserves the relative order of the must-see stops.
- And the total scheduled time (travel + parking + showings + buffers) does not exceed the maximum duration.
- And the protected gap remains unbooked and buffers do not overlap it.
- And if constraints render the tour infeasible, the system returns no route and provides a ranked list of offending constraints and suggested relaxations.
Turn-by-Turn Agenda with ETAs and Buffer Blocks Generated
- Given an optimized route is accepted, When the agent views or exports the agenda, Then each stop includes address, planned arrival ETA, showing duration, buffer start/end, and next-leg travel time.
- And the agenda renders turn-by-turn directions with step count and leg ETAs, and deep-links to the default maps app on mobile.
- And exporting to calendar creates individual events for each stop and buffer with the correct timezone and reminders.
- And agenda generation and export complete in ≤ 1 second for up to 10 stops.
Disruption Recalc on Cancellation or Late Client Preserves Commitments
- Given a confirmed tour in progress with some completed stops, some locked commitments, and live delay input (e.g., client 15 minutes late) or a property cancellation, When the agent triggers Recalculate, Then the system re-optimizes the remaining itinerary starting from “now” without altering completed stops and without moving locked commitments or hard windows.
- And among feasible options it minimizes total travel time and the number of changes to previously confirmed times.
- And it surfaces up to 3 adjustment options (e.g., skip, swap, delay) with impact on ETA, buffers, and likelihood of lateness.
- And recalculation completes in ≤ 5 seconds and sends updated calendar invites for any changed times.
Pre-Confirmation What-If Comparison of Route Variants
- Given a draft tour with at least two what-if variants (e.g., different start time, include/exclude a property, alternate parking estimates), When the agent compares variants, Then the system displays side-by-side metrics: total tour duration, drive time, parking time, total buffer minutes, number of risk-of-late constraints, and earliest completion time.
- And it highlights the variant with the shortest feasible duration that meets all hard constraints.
- And selecting a variant promotes it to the pending plan without altering the baseline until confirmation.
- And the comparison renders in ≤ 3 seconds.
ETA Accuracy and Travel-Time Modeling Validation
- Given a standardized test set of 100 multi-stop routes across urban and suburban areas with provider reference travel times and observed parking times, When the system computes ETAs and buffers, Then median ETA error versus the reference is ≤ 10% and 90th-percentile error is ≤ 20% at the leg level.
- And property-level parking-time defaults are applied by area type and can be overridden per property.
- And buffer sizing ensures that planned arrivals are on time (≤ 0 minutes late) in at least 95% of simulated runs using current traffic feeds.

Polite Redirect

When a request hits quiet hours, instantly send a branded, courteous response with one‑tap alternative time options and a waitlist. Buyer agents get fast clarity without back‑and‑forth, and you keep control of boundaries while still capturing the booking at the next best time.

Requirements

Quiet Hours Detection & Routing
"As a listing agent, I want requests that arrive during my quiet hours to be auto-detected and handled politely so that I maintain boundaries without losing booking opportunities."
Description

Automatically detect incoming showing requests that fall within agent-defined quiet hours (per agent and per listing, timezone-aware) and route them to automated handling. Pulls rules from TourEcho’s notification settings, honors holidays and on-call exceptions, and classifies channel (SMS, email, portal) and requester type. Ensures requests are intercepted before human notification, tags the thread as Polite Redirect, and hands off to the auto-reply workflow while preserving the original conversation context. Provides safeguards for duplicate requests, rate limiting, and opt-out compliance.

Acceptance Criteria
Timezone-Aware Quiet Hours Detection (Per Agent & Per Listing)
- Given a listing has quiet hours configured with timezone TZ_L and the agent has quiet hours configured with timezone TZ_A, And the system resolves timezone precedence as listing timezone over agent timezone over account default, When a request timestamp falls within the effective quiet-hours window in the resolved timezone, Then the request is marked quiet_hours_intercept = true and routed to automated handling.
- And if the timestamp is outside the quiet-hours window, the request is not intercepted and follows the normal notification flow.
- And requests exactly at the quiet-hours start time are treated as inside the window; requests exactly at the end time are treated as outside.
- And detection correctly accounts for daylight saving time transitions in the resolved timezone.
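The boundary rule above (start inclusive, end exclusive) is easy to get wrong when a quiet-hours window crosses midnight. A minimal sketch of the membership test, with names that are illustrative rather than TourEcho's actual API:

```python
# Sketch of the quiet-hours boundary rule: start time inclusive, end
# time exclusive, with support for windows that cross midnight.
from datetime import time

def in_quiet_hours(t: time, start: time, end: time) -> bool:
    """True iff t falls inside [start, end); handles overnight windows."""
    if start <= end:
        return start <= t < end
    return t >= start or t < end    # e.g., 21:00 -> 08:00 the next day
```

The timestamp passed in would first be converted to the resolved timezone (listing over agent over account default), which is also what makes DST transitions come out right.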
Holiday and On-Call Exception Handling
- Given a holiday calendar is assigned with a rule to treat the day as quiet hours for the listing or agent, When a request arrives on such a holiday, Then the request is intercepted as if within quiet hours.
- And if an on-call exception window overlaps the request time, on-call overrides quiet hours and the request is not intercepted.
- And the on-call exception is evaluated with the same timezone resolution as quiet hours.
Channel and Requester Type Classification
- Given an inbound request arrives via the SMS gateway, email ingestion, or the portal API, When the request is parsed, Then the channel is classified as sms, email, or portal and stored as thread metadata.
- And requester_type is set to buyer_agent if the sender matches a verified agent profile or a known buyer-agent contact; otherwise consumer; if there is insufficient data, unknown.
- And classification occurs before routing and is available to downstream workflows.
Pre-Notification Interception, Tagging, and Context Preservation
- Given a request qualifies for quiet-hours interception, When routing occurs, Then all human notifications to the agent/broker are suppressed for this request.
- And the conversation thread is tagged Polite Redirect.
- And the auto-reply workflow is invoked with the original conversation context preserved.
- And for email, Message-ID/In-Reply-To threading headers are retained; for SMS, the same phone-number thread is used; for portal, the original request ID is referenced.
Duplicate Request Handling and Rate Limiting
- Given duplicate detection is enabled per requester per listing, When a second request with the same requester and listing arrives within 2 minutes of the first, or with the same external request ID, Then no additional automated reply is sent and the request is linked to the existing thread as a duplicate.
- And a rate limit of one automated reply per requester+listing per 10 minutes is enforced across channels.
- And events suppressed by dedupe or rate limiting are logged with the reason.
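The dedupe and rate-limit rules above can be sketched with two timestamp maps keyed by (requester, listing). This is a hypothetical in-memory illustration; a production system would back these with a shared store:

```python
# Hypothetical sketch of the rules above: a repeat request within
# 2 minutes is linked as a duplicate, and at most one automated reply
# per requester+listing is sent per 10 minutes.

DEDUPE_WINDOW_S = 120
RATE_LIMIT_WINDOW_S = 600

last_request: dict[tuple[str, str], float] = {}   # (requester, listing) -> ts
last_reply: dict[tuple[str, str], float] = {}

def should_auto_reply(requester: str, listing: str, now: float) -> tuple[bool, str]:
    """Decide whether to send an automated reply, with a log reason."""
    key = (requester, listing)
    prev = last_request.get(key)
    last_request[key] = now
    if prev is not None and now - prev < DEDUPE_WINDOW_S:
        return False, "duplicate"
    replied = last_reply.get(key)
    if replied is not None and now - replied < RATE_LIMIT_WINDOW_S:
        return False, "rate_limited"
    last_reply[key] = now
    return True, "sent"
```

Matching on an identical external request ID (the other dedupe trigger in the criteria) would be an additional lookup before this time-window check.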
Opt-Out Compliance Across Channels
- Given a requester has an active opt-out flag for SMS or email, or the inbound SMS message content is one of STOP, STOPALL, UNSUBSCRIBE, CANCEL, END, QUIT (case-insensitive, trimmed), When a quiet-hours interception would trigger an automated reply, Then no automated message is sent on the opted-out channel.
- And the contact's opt-out status is persisted and the thread remains tagged Polite Redirect without outbound messaging.
- And an opt-out confirmation is sent only if required by channel regulations and not previously sent within 24 hours.
Configuration Fallbacks and Defaults
- Given no per-listing quiet hours are configured and agent-level quiet hours are present, When a request arrives, Then the agent-level quiet hours are used.
- And if neither listing nor agent quiet hours are configured, the request is not intercepted.
- And if no timezone is configured for the listing or agent, the account default timezone is used; if that is missing, UTC is used.
Availability Engine & Calendar Sync
"As a listing agent, I want suggested times to reflect my real availability and listing rules so that redirected requests convert without creating conflicts."
Description

Generate the next-best showing times by merging listing availability, agent calendar(s) (Google/Microsoft via OAuth), team on-call schedules, travel buffers, and lockbox constraints. Respect showing rules (durations, prep buffers, no-go windows), listing time windows, and existing holds. Continuously reconcile holds when a time is claimed or expires, push tentative holds to calendars, and release them on timeout or decline. Expose an API for other TourEcho modules to query availability and a fallback if no viable times exist.

Acceptance Criteria
Next-Best Slots Across Calendars and Rules
- Given listing availability windows, showing duration, prep buffer, no-go windows, the team on-call schedule, agent calendars (Google/Microsoft), travel buffers, lockbox access windows, and existing holds, when a showing request is evaluated, then the engine returns up to 5 ranked next-best slots that satisfy all constraints.
- Then no returned slot overlaps any agent busy block or existing hold, violates listing windows or no-go windows, or omits required prep or travel buffers.
- Then each slot includes start, end, agentId, listingId, and a rank score.
Calendar OAuth Sync and Tentative Holds Lifecycle
- Given OAuth-connected Google/Microsoft calendars, when tentative slots are generated, then a matching tentative hold event is created per slot with title "[TourEcho Hold] <Listing Address>", status busy, and extended properties { listingId, holdId, expiresAt }.
- When a slot is confirmed, then the corresponding hold event is updated to "[TourEcho Confirmed]" and remains busy; all other overlapping tentative holds for the same agent/listing are deleted within 5 seconds.
- When the offer is declined or the hold expires, then the hold event is deleted within 5 seconds and the time is freed on the calendar.
- If the calendar API returns an authentication error, then no hold events are created or updated, and an error is logged with a correlationId.
Hold Reconciliation and Concurrency
- When two or more clients attempt to confirm the same holdId concurrently, then exactly one confirmation succeeds and the others receive a 409 slot_unavailable response.
- Then hold creation, confirmation, expiration, and deletion operations are idempotent; repeated identical requests result in the same final state without duplicate events.
- Then confirming a slot for a listing/agent releases all overlapping tentative holds for that listing/agent within 5 seconds and updates availability accordingly.
- Then an audit log entry (create/update/delete, actor, timestamp, correlationId) is recorded for each hold lifecycle event.
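The single-winner confirmation rule above is typically implemented as an atomic state transition, e.g. `UPDATE holds SET status='confirmed' WHERE id=? AND status='tentative'` succeeding for exactly one caller. A sketch using a lock as a stand-in for that atomicity; the status values and return codes mirror the criteria, but the structure is illustrative:

```python
# Illustrative sketch of single-winner hold confirmation: the lock
# emulates an atomic compare-and-swap on the hold's status row.
import threading

holds = {"hold-1": "tentative"}
_lock = threading.Lock()

def confirm(hold_id: str) -> int:
    """Return an HTTP-style status: 200 for the winner, 409 for losers,
    410 for expired holds (matching the API contract below)."""
    with _lock:
        status = holds.get(hold_id)
        if status == "tentative":
            holds[hold_id] = "confirmed"
            return 200
        if status == "expired":
            return 410
        return 409   # already confirmed or otherwise unavailable
```

Because a repeated identical confirm lands on an already-confirmed hold, callers can safely retry; idempotency here means retries converge on the same final state even though only the first call returns 200.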
Availability API Contract and Fallback
- Given GET /api/v1/availability?listingId={id}&start={ISO}&end={ISO}, then respond 200 with { slots: [ { start, end, agentId, listingId, rank, lockboxId? } ], noAvailable: false } when feasible slots exist.
- If no feasible times exist, then respond 200 with { slots: [], noAvailable: true, fallback: { waitlistToken, nextWindowStart } }.
- Given POST /api/v1/holds/confirm { holdId }, then on success respond 200 and mark the slot confirmed; on failure due to contention respond 409 slot_unavailable; for expired holds respond 410 gone.
- All endpoints require valid bearer auth and validate inputs; invalid parameters respond 400 with details.
Travel Buffer Enforcement
- When computing candidate slots, then exclude any slot that yields less than the configured travel buffer between adjacent confirmed or tentative showings for the same agent (both before and after).
- The travel buffer is computed using the distance between property addresses; if travel time cannot be resolved, apply a default 15-minute buffer and record the exclusion reason "travel_unknown" where applicable.
- On confirmation or release of a hold, recompute affected adjacent candidate slots and update availability within 10 seconds.
Lockbox and Listing Window Constraints
- Only return slots fully contained within both the listing’s time windows and the lockbox access windows; exclude blackout dates and no-go windows with reason "blackout" or "no_go".
- If the lockbox requires a code-refresh lead time, ensure at least that lead time is included in the prep buffer before the slot start.
- For listings with multiple lockboxes, include the lockboxId on each slot and use only lockboxes that satisfy access rules.
Quiet Hours Compliance and On-Call Alignment
- When a request occurs during defined quiet hours, then do not return any slots that start within quiet hours.
- Then return the top 3 slots that begin at or after quiet hours end and align with the current on-call agent schedule for the listing’s team.
- If fewer than 3 such slots exist, return all that qualify; if none qualify, set noAvailable=true in the API response and include a waitlistToken.
Branded Auto-Reply Composer
"As a broker-owner, I want courteous, on-brand auto-replies during quiet hours so that buyer agents get clarity without manual effort from my team."
Description

Create a customizable, brand-safe response template that politely explains quiet hours and offers one‑tap alternative time options. Supports per-brokerage themes, agent headshot/logo, property details, dynamic variables (first name, address, MLS ID), channel‑specific formatting (SMS/email), and localization. Includes compliance footer and opt‑out keywords. Provides preview/testing, A/B variant support, and message throttling. Integrates with TourEcho messaging so replies are threaded with the original request.

Acceptance Criteria
Quiet Hours Auto-Reply Trigger and Threading
- Given quiet hours are configured for the listing or agent and an inbound showing request is received during those quiet hours via SMS or Email, When the system receives the request, Then a branded auto-reply is sent within 10 seconds.
- And the message politely states quiet hours and the next response window using the property’s timezone.
- And the message includes at least 2 and at most 5 one-tap alternative time options derived from the next available showing windows.
- And if no slots are available in the next 7 days, a waitlist link is included instead of time options.
- And the reply is posted to the same TourEcho conversation/thread as the original request and is visible in the agent’s inbox.
- And the send event is logged with request_id, conversation_id, recipient, channel, and sent_at timestamp.
- And no auto-reply is sent if the request arrives outside quiet hours.
Per‑Brokerage Branding and Agent Identity Rendering
- Given a brokerage theme (colors, typography) and agent identity (headshot/logo) are configured, When rendering the auto-reply, Then the Email uses the brokerage theme for header, footer, and button styles and includes the agent headshot/logo.
- And the SMS omits images but includes the brokerage short name and agent display name.
- And image assets are served over HTTPS, include alt text, and meet constraints (≤ 200 KB; square 1:1 headshot; logo aspect ratio 1:1–4:1).
- And if any brand asset is missing or invalid, the default TourEcho theme is applied and the send proceeds without blocking.
Dynamic Variable Resolution and Safe Fallbacks
- Given the template contains variables such as {{first_name}}, {{property_address}}, {{mls_id}}, and {{agent_first_name}}, When generating a preview or sending an auto-reply, Then all variables are resolved from the request, listing, and agent records.
- And no unresolved placeholders (e.g., {{...}}) remain in the final message.
- And if a value is missing, a configured fallback is used (first_name→"there", mls_id→"N/A").
- And values are safely escaped for HTML in Email and sanitized for SMS.
- And property details (address, MLS ID) match the listing linked to the request_id.
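Variable resolution with the fallbacks named above can be sketched with a simple substitution pass. The fallback table mirrors the criteria; the regex and function shape are assumptions, and real templating would add HTML escaping:

```python
# Minimal sketch of template-variable resolution with safe fallbacks,
# per the criteria above; the regex and fallback table are illustrative.
import re

FALLBACKS = {"first_name": "there", "mls_id": "N/A"}

def render(template: str, values: dict[str, str]) -> str:
    """Resolve {{key}} placeholders, applying configured fallbacks."""
    def repl(match: re.Match) -> str:
        key = match.group(1)
        value = values.get(key)
        return value if value else FALLBACKS.get(key, "")
    out = re.sub(r"\{\{(\w+)\}\}", repl, template)
    # Criterion: no unresolved placeholders may survive to the sent message
    assert "{{" not in out
    return out
```

Keys with neither a value nor a configured fallback resolve to an empty string here; a stricter implementation might instead fail the send and log the missing variable.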
Channel‑Specific Formatting (SMS vs Email) with Compliance Footer and Opt‑Out
- Given the channel is SMS, When composing the SMS auto-reply, Then the total SMS body is ≤ 320 characters and all URLs are shortened to ≤ 23 characters.
- And each time option is a distinct short link labeled with the slot (e.g., "10:30a", "2:00p") that deep-links to the booking flow prefilled with property and agent.
- And the SMS ends with compliance text including "Reply STOP to opt out."
- And if the inbound message contains an opt-out keyword (e.g., STOP, UNSUBSCRIBE), no auto-reply is sent and the contact is added to the suppression list.
- Given the channel is Email, When composing the Email auto-reply, Then the Email includes a branded subject, preheader, time-option buttons, property card, and a compliance footer with the brokerage legal name and postal address.
- And clicking any option deep-links to the booking flow with request context and includes UTM and variant_id parameters.
Localization and Timezone‑Aware Messaging
- Given the agent/brokerage locale preference and the property’s timezone are configured, When composing an auto-reply, Then static copy is selected from the locale pack matching the recipient’s preferred language or the agent default.
- And dates/times are formatted per locale (e.g., 24h vs 12h) and in the property’s timezone.
- And if a translation key is missing, English (en-US) is used and the event is logged as i18n_missing.
- And right-to-left languages render correctly in Email and the SMS text order is preserved.
Preview, Test Send, and A/B Variant Selection
- Given the agent opens the Composer preview, When switching between SMS and Email previews, Then an accurate render is shown with variables resolved using sample data and channel-specific formatting applied.
- And the agent can send a test to a verified test number/email without contacting the buyer agent; test messages are labeled TEST and excluded from production analytics.
- Given two active variants A and B with a configured split (e.g., 70/30), When auto-replies are sent in production, Then traffic is allocated within ±2% of the configured split over any rolling 100 sends.
- And each message carries a variant_id for analytics and is stored with the send event.
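One common way to meet the split criterion while keeping each contact in a single variant is deterministic hash bucketing. This is an illustrative technique choice, not necessarily how TourEcho allocates traffic; the 70/30 split mirrors the example in the criteria:

```python
# Illustrative sketch of sticky A/B assignment by hash bucketing:
# the same recipient always lands in the same variant, and buckets
# are close to uniform across a large population.
import hashlib

def assign_variant(recipient_id: str, split_a: int = 70) -> str:
    """Assign variant A to `split_a`% of recipients, B to the rest."""
    bucket = int(hashlib.md5(recipient_id.encode()).hexdigest(), 16) % 100
    return "A" if bucket < split_a else "B"
```

Hash bucketing converges on the configured split over the population; holding a strict ±2% over every rolling 100-send window, as the criterion requires, may additionally need counter-based rebalancing.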
Message Throttling and Duplicate Suppression
Given multiple inbound requests from the same recipient for the same property within 12 hours When evaluating auto-reply eligibility Then at most one auto‑reply is sent per recipient per property within a 12‑hour window And a per‑agent rate limit of 60 auto‑replies per minute is enforced; overflow is queued and sent within 2 minutes or suppressed with a throttle_suppressed log entry And suppressed messages do not create additional conversation threads or analytics entries
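The eligibility rules above (12-hour per-recipient/per-property dedup plus a per-agent per-minute ceiling) can be sketched as a small gate. Names and the in-memory state are illustrative assumptions; a real deployment would back this with a shared store.

```python
import time

class AutoReplyGate:
    """Illustrative eligibility gate for the throttling rules above."""

    DEDUP_WINDOW_S = 12 * 3600    # one auto-reply per recipient+property per 12h
    RATE_LIMIT_PER_MIN = 60       # per-agent sends per minute

    def __init__(self):
        self.last_sent = {}       # (recipient, property_id) -> last send time
        self.agent_sends = {}     # agent_id -> recent send timestamps

    def allow(self, agent_id, recipient, property_id, now=None) -> str:
        now = time.time() if now is None else now
        key = (recipient, property_id)
        if now - self.last_sent.get(key, float("-inf")) < self.DEDUP_WINDOW_S:
            return "duplicate_suppressed"   # no new thread or analytics entry
        recent = [t for t in self.agent_sends.get(agent_id, []) if now - t < 60]
        if len(recent) >= self.RATE_LIMIT_PER_MIN:
            self.agent_sends[agent_id] = recent
            return "throttle_queued"        # send within 2 min or log throttle_suppressed
        recent.append(now)
        self.agent_sends[agent_id] = recent
        self.last_sent[key] = now
        return "sent"
```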
One‑Tap Time Picker Links
"As a buyer agent, I want one-tap options to pick a new showing time so that I can quickly reschedule without back-and-forth."
Description

Embed secure, signed links in the auto-reply that present 3–5 best-fit time options and a custom picker. Designed mobile-first with instant confirm/decline flows, optional contact verification, and ADA-compliant UI. Selecting a time confirms the showing, writes back to the listing schedule, posts calendar invites (ICS), and updates the conversation thread. Holds expire after a configurable window, with fallback to waitlist or alternate channels. Supports SMS deep links and email-safe URLs.

Acceptance Criteria
Mobile Time Options and Custom Picker Display
Given a quiet-hours auto-reply with a signed link is sent to a buyer agent on a mobile device When the link is opened Then the page displays between 3 and 5 best-fit available time options derived from the listing’s availability and agent preferences And the page provides a clearly labeled “More times” custom time picker covering at least the next 14 days (configurable) And each option shows date, start time, duration, and the listing’s timezone And the view loads in under 2 seconds on a 4G connection And all tap targets are at least 44x44 px And if fewer than 3 options exist, Then display all available options plus the custom picker
Instant Confirm Writes Back and Sends ICS
Given the recipient taps one of the presented time options When the slot is still available Then the showing is confirmed immediately (unless contact verification is enabled) And the booking is written to the listing schedule atomically with no double-booking And an ICS invite is sent to the buyer agent and listing agent including property address, start and end times, timezone, and a confirmation ID And the conversation thread updates with a confirmation message within 2 seconds of booking And the UI displays a confirmation screen with Add-to-Calendar and Showing Instructions links
Decline and Suggest Alternative Flow
Given the recipient taps “None of these times” When they open the custom picker and select an alternative within available windows Then the time is confirmed instantly if free and the same write-back/ICS/thread update rules apply And if the selected time is unavailable, Then present a propose-time flow that adds the user to the waitlist and acknowledges receipt within 2 seconds And if the recipient chooses to decline without proposing, Then offer Join Waitlist and Alternate Channels (call/text) actions and update the thread accordingly
Optional Contact Verification Gate
Given contact verification is enabled for the listing When a recipient selects a time Then a 6-digit code is sent over the same channel (SMS or Email) and must be entered to proceed And codes expire after 10 minutes, with a maximum of 5 attempts and 5 code sends per hour per recipient And upon successful verification, the booking finalizes and triggers write-back, ICS, and thread update And upon failure or timeout, the hold (if any) is released and the recipient is offered to reattempt, pick another time, or join the waitlist
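The verification gate above (6-digit code, 10-minute expiry, 5 attempts) can be sketched as follows. Method and field names are illustrative; real state would live in a shared store with the per-recipient send limits also enforced.

```python
import secrets
import time

class VerificationGate:
    """Sketch of the 6-digit code gate: 10-minute expiry, 5 attempts."""

    TTL_S = 600
    MAX_ATTEMPTS = 5

    def __init__(self):
        self.codes = {}   # recipient -> [code, issued_at, attempts]

    def issue(self, recipient, now=None) -> str:
        now = time.time() if now is None else now
        code = f"{secrets.randbelow(10**6):06d}"   # zero-padded 6 digits
        self.codes[recipient] = [code, now, 0]
        return code

    def verify(self, recipient, attempt, now=None) -> str:
        now = time.time() if now is None else now
        entry = self.codes.get(recipient)
        if entry is None:
            return "no_code"
        code, issued_at, attempts = entry
        if now - issued_at > self.TTL_S:
            del self.codes[recipient]
            return "expired"        # release the hold; offer reattempt/waitlist
        if attempts >= self.MAX_ATTEMPTS:
            return "locked"
        entry[2] += 1
        if attempt == code:
            del self.codes[recipient]
            return "verified"       # finalize booking: write-back, ICS, thread
        return "wrong_code"
```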
Hold Expiration and Fallback Behavior
Given a slot is placed on hold due to pending verification or incomplete flow When the configurable hold window elapses (default 10 minutes) Then the hold is released back to inventory and the slot becomes available to others And the recipient is notified that the hold expired with options to reselect times or join the waitlist And the listing schedule reflects the release with no residual locks And an expiration event is posted to the conversation thread with a timestamp
ADA-Compliant Time Picker Accessibility
Given a user relies on assistive technologies When they navigate the time picker Then all interactive elements meet WCAG 2.1 AA (contrast ≥ 4.5:1, visible focus, ARIA roles and labels) And full keyboard/switch navigation supports opening the picker, selecting a time, and confirming without touch And screen readers announce option details (date, time, duration, state) and state changes (held, confirmed, error) And all interactive elements maintain a minimum 44x44 px touch target size
Secure Links, Multi-Channel Compatibility, and Invalid Handling
Given a time-picker link is generated Then it includes a signed, single-use token scoped to the listing and requestor and expiring within 24 hours (configurable) When opened from SMS Then it deep-links to the mobile web/app experience without requiring copy-paste When opened from Email Then the URL is email-safe (RFC-compliant) and functions across major clients even with link wrapping/tracking When a link is reused, tampered with, or expired Then show an Expired/Invalid screen with one-tap Request New Link and Join Waitlist options, and do not reveal PII And token validation failures return 401/403 and are logged without indicating booking existence
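A signed, single-use, scoped token of the kind described above is commonly built with an HMAC over the claims. The sketch below is a minimal illustration of that pattern, not TourEcho's actual token format; the secret, claim names, and in-memory reuse set are all assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"     # illustrative; a real deployment uses a managed key
_used_tokens = set()        # single-use tracking (a database in practice)

def sign_link_token(listing_id, requestor_id, ttl_s=86400, now=None):
    """Issue a token scoped to listing+requestor, expiring after ttl_s."""
    now = int(time.time() if now is None else now)
    payload = json.dumps({"listing": listing_id, "requestor": requestor_id,
                          "exp": now + ttl_s}, sort_keys=True).encode()
    body = base64.urlsafe_b64encode(payload).decode().rstrip("=")
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{body}.{sig}"   # URL-safe, so email-safe without escaping

def validate_link_token(token, now=None) -> str:
    now = time.time() if now is None else now
    body, _, sig = token.partition(".")
    try:
        payload = base64.urlsafe_b64decode(body + "=" * (-len(body) % 4))
    except Exception:
        return "invalid"
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return "invalid"    # 401/403; log without revealing booking existence
    if now > json.loads(payload)["exp"]:
        return "expired"    # offer Request New Link / Join Waitlist
    if token in _used_tokens:
        return "reused"
    _used_tokens.add(token)
    return "ok"
```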
Waitlist Enrollment & Auto-Offer
"As a listing agent, I want to capture demand via a waitlist so that I can fill openings automatically and reduce vacancy in the calendar."
Description

When no times are accepted or available, allow buyer agents to join a listing-specific waitlist with preferences (days, time ranges). Prioritize by request time and agent preference, watch for cancellations or newly opened slots, and auto-notify the next agent with a one‑tap claim link. Enforce fair offer windows, throttle notifications, and capture declines to move to the next candidate. All actions are logged on the listing timeline and analytics.

Acceptance Criteria
Waitlist Enrollment When No Slots Available
- Given a buyer agent requests a showing and no times are available for the listing, When the agent selects Join Waitlist and submits valid preferences (at least one day and one time range), Then the system records the enrollment with agent ID, listing ID, preferences, and timestamp, returns a branded confirmation, and logs an Enrollment event on the listing timeline.
- Given the same agent is already enrolled for the listing, When they attempt to join again, Then the system prevents duplicate entries and instead provides an Update Preferences flow that overwrites the prior preferences without creating a second queue entry, and logs an Update event.
- Given a waitlisted agent, When they choose to withdraw, Then the system removes them from the queue within 5 seconds, sends a confirmation, and logs a Withdrawal event.
Priority Queue and Fair Offer Window
- Given multiple agents are enrolled on a listing waitlist, When ordering the queue, Then the system sorts by enrollment timestamp ascending and filters eligibility by each agent’s stated days/time ranges.
- Given an auto-offer is issued to an agent, When the offer window starts, Then that agent holds exclusive priority for that specific slot and the offer window defaults to 15 minutes (configurable per listing), visibly displaying remaining time to the agent.
- Given an offer window expires without claim, Then the system advances to the next eligible agent within 30 seconds, logs an Expired event for the prior offer, and issues the next offer.
- Given throttling rules, Then the system enforces at most 1 active offer per agent per listing at a time and no more than 3 offers to the same agent across that listing within any rolling 24-hour period.
Auto-Offer on Cancellation or New Slot
- Given a booked slot is canceled or a new slot is created for a listing, When the slot becomes available, Then the system matches the slot against the waitlist using day/time-range preferences, selects the highest-priority eligible agent, and sends a branded auto-offer with a one-tap claim link within 60 seconds.
- Given no eligible agents match the newly available slot, When the system evaluates the waitlist, Then no offer is sent and the slot remains available in the standard booking flow, and a No Eligible Candidates event is logged.
- Given an agent receives an auto-offer, When they select Decline, Then the system logs Declined with optional reason and immediately advances to the next eligible agent.
One‑Tap Claim and Concurrency Handling
- Given an agent receives an auto-offer, When they tap the claim link within the offer window, Then the system atomically reserves the slot for that agent, returns a confirmation with details, updates calendars within 5 seconds, and logs a Claimed event.
- Given two or more agents attempt to claim the same slot, When the first successful claim is recorded, Then subsequent attempts return Slot Taken, keep those agents on the waitlist (unless they opt out), and log Contested Claim events.
- Given a claim link, When accessed after the offer window expires, Then the system displays Offer Expired and provides Stay on Waitlist and Update Preferences options.
- Given transient network retries or double-taps, When duplicate claim requests arrive, Then the system handles them idempotently (no duplicate bookings) and logs a single Claimed event.
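The atomic, idempotent claim semantics above (first claim wins; retries of the same claim succeed without duplicating; contested claims get Slot Taken) can be sketched with a check-and-set under a lock. This is a minimal in-process illustration; a real system would use a database constraint or compare-and-swap, and the names are assumptions.

```python
import threading

class SlotClaims:
    """Sketch of atomic, idempotent slot claiming (first claim wins)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._claims = {}   # slot_id -> (agent_id, claim_request_id)

    def claim(self, slot_id, agent_id, claim_request_id) -> str:
        with self._lock:                      # atomic check-and-set
            existing = self._claims.get(slot_id)
            if existing is None:
                self._claims[slot_id] = (agent_id, claim_request_id)
                return "claimed"              # log exactly one Claimed event
            if existing == (agent_id, claim_request_id):
                return "claimed"              # idempotent retry / double-tap
            return "slot_taken"               # contested claim; stay waitlisted
```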
Declines and Preference Management
- Given an agent receives an offer, When they choose Decline, Then the system captures an optional reason, retains the agent on the waitlist unless they explicitly opt out, and suppresses re-offers for that same slot occurrence.
- Given an agent updates their availability preferences, When the update is saved, Then subsequent matching uses the new preferences and the agent’s original queue position is preserved (no loss of priority), and the change is logged.
- Given re-offer logic, When a slot’s time does not intersect an agent’s stated time ranges, Then the system must not send an offer to that agent.
Branded Auto-Offer Notification Content & Delivery
- Given an auto-offer is sent, When the message is delivered, Then it must include listing address, slot date/time, explicit expiration timestamp and remaining minutes, one-tap Claim link, Decline/Stay Waitlisted link, opt-out text, and support contact, using the listing’s branding.
- Given channel preferences, When sending the offer, Then the system delivers via the agent’s preferred channel (e.g., SMS, email, in-app) with fallback to the next channel on failure, and records delivery/failed receipts.
- Given notification throttling, When issuing offers, Then the system limits to 1 offer message per agent per 15 minutes per listing and a maximum of 60 offer messages per listing per hour overall, queuing excess and logging Throttled events.
Timeline, Audit, and Analytics Logging
- Given any waitlist-related action (enrollment, update, offer sent, delivered, claimed, declined, expired, withdrawn, throttled, no-eligible), When it occurs, Then the system appends an immutable, timestamped entry to the listing timeline with actor, agent ID, slot ID, outcome, and metadata.
- Given analytics dashboards, When events are processed, Then the system updates per-listing metrics including current waitlist size, offer send-to-claim conversion, median time to fill from cancellation, counts of declines/expirations/throttled offers, and average notification latency, with data freshness ≤5 minutes.
- Given data export and audit, When exporting the timeline and metrics, Then the system provides CSV/JSON exports with stable schemas and redacts PII according to platform privacy rules and recorded opt-outs.
Agent Override & Escalation Controls
"As a listing agent, I want the ability to override or escalate specific requests so that high-priority situations are handled immediately."
Description

Provide controls to bypass quiet hours per request, per listing, or temporarily for emergencies. Include escalation routing to an on-call teammate, snooze/resume automation, and manual send of a custom message instead of the template. Support keyword/flag detection (urgent, cash, tight timeline) to prompt override suggestions. All overrides are permissioned, audited, and reversible, with clear UI in TourEcho’s inbox and mobile app.

Acceptance Criteria
Per-Request Quiet Hours Override
Given a showing request for Listing L arrives during quiet hours and the current user has "Override Quiet Hours" permission When the user selects "Override for this request" in the request view (web or mobile) and confirms with an optional reason Then the polite redirect automation is suppressed for this request And a scheduling time-picker is presented immediately And the request displays an "Overridden" badge in inbox and mobile And an audit log record is created with action=override, scope=request, listingId=L, userId, timestampUTC, reason
Temporary Listing-Wide Quiet Hours Bypass
Given the user opens Listing L settings and has permission to manage quiet hours When the user enables "Bypass quiet hours" with a duration of 15–240 minutes and saves Then all requests for Listing L during that window bypass polite redirect automation And a persistent banner shows "Quiet hours bypass active" with countdown and a "Resume now" control in the listing thread on web and mobile And an audit record is created with action=bypass_start and later action=bypass_end when the window expires or the user resumes And upon expiry or resume, behavior returns to normal without affecting in-flight scheduled requests
Emergency Escalation Routing with On-Call Fallback
Given an on-call rotation is configured for Team T and a request is marked Emergency (manual toggle or keyword flag) during quiet hours When the user taps "Escalate to on-call" Then the current on-call receives push and SMS within 5 seconds containing request ID, listing, and Accept/Decline buttons And if not accepted within 3 minutes, escalation auto-routes to the next on-call; after exhausting the list, it routes to the fallback admin And the request shows live status: Delivered, Acknowledged, Accepted, or Auto-routed with timestamps And audit records capture each hop with recipientId and timestamps
Snooze/Resume Polite Redirect Automation
Given quiet hours automation is active for the user or team When the user taps "Snooze redirects" and selects a duration of 5, 15, 30, or 60 minutes Then new incoming requests during the snooze receive no automated polite redirect And a "Snoozed" badge with remaining time is shown in the inbox header and mobile app And at snooze end, automation resumes automatically; the user can also tap "Resume now" to end early And an audit record is created for snooze_start and snooze_end with duration and initiator
Manual Custom Message Send
Given a request arrives during quiet hours When the user selects "Send custom message" and submits text between 10 and 500 characters Then the template redirect is not sent And the custom message is sent via the buyer agent’s preferred channel with listing branding and agent signature And the thread shows the message with tag=Custom and delivery status (Sent/Delivered/Failed) And the action is logged to audit with messageId; server enforces channel rate limits and returns errors on violation
Keyword-Based Override Suggestion
Given keyword detection is enabled with a configurable list and confidence threshold When an incoming request during quiet hours matches with confidence ≥ 0.7 (e.g., "cash offer", "need to see today", "48 hours") Then the request view displays a suggestion banner "High intent detected — consider override" with the matched phrase And tapping "Apply override" pre-selects Override for this request and pre-fills reason=High intent keyword And suggestions and user actions are logged with fields: requestId, matchedKeywords, confidence, userAction (accepted/ignored)
Permissioned, Audited, and Reversible Overrides
Given role-based permissions are configured for Override, Escalate, Snooze, and Custom Message When an unauthorized user attempts any of these actions Then UI controls are disabled with tooltip explaining required role, and API returns 403 with error code And when an authorized user performs any action, an audit record is stored with fields: action, scope, initiator, timestampUTC, reason, previousState, newState And a "Revert" option is available for 15 minutes for request-level overrides; selecting it restores pre-override automation for future steps and updates badges; previously sent messages remain in thread And audit displays reversals with linkage to the original action
Activity Analytics & Audit Logging
"As a broker-owner, I want visibility into how Polite Redirect performs so that I can optimize templates and policies to improve conversion and compliance."
Description

Track and surface key metrics: quiet-hour intercept rate, auto-reply send rate, option click-through, conversion to confirmed showing, average time-to-reschedule, and waitlist fill rate. Provide per-listing and portfolio rollups, export to CSV, and webhook events. Maintain immutable audit logs of messages, link clicks, holds, confirmations, and overrides for compliance and dispute resolution, with configurable retention policies.

Acceptance Criteria
Metrics Computation: Quiet-Hour Intercept & Auto-Reply Send Rates
Given a listing and a selected date range And N inbound showing requests occurred during quiet hours And I requests were intercepted by Polite Redirect And A auto-replies were successfully sent When the user views the listing analytics for that date range Then quiet-hour intercept rate displays as round((I/N)*100, 2)% And auto-reply send rate displays as round((A/I)*100, 2)% And raw counts N, I, and A are displayed alongside the rates And when N = 0, both rates display 0% with a no-data state instead of errors And calculations exclude events flagged as test/sandbox
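The rate formulas above, including the N = 0 no-data case, can be sketched directly; the function names and returned dict shape are illustrative assumptions (test/sandbox events are assumed filtered out upstream).

```python
def pct(numerator: int, denominator: int) -> float:
    """round((num/den)*100, 2); 0.0 (no-data state) when the denominator is 0."""
    return 0.0 if denominator == 0 else round(numerator / denominator * 100, 2)

def quiet_hour_metrics(n_requests, n_intercepted, n_auto_replies):
    """Intercept and send rates with raw counts shown alongside the rates."""
    return {
        "intercept_rate": pct(n_intercepted, n_requests),
        "auto_reply_send_rate": pct(n_auto_replies, n_intercepted),
        "N": n_requests, "I": n_intercepted, "A": n_auto_replies,
    }
```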
Funnel Metrics: Option Click-Through, Conversion, and Time-to-Reschedule
Given M messages containing alternative time options were sent in the selected date range And U unique recipients clicked at least one option link from those messages within a 24-hour dedup window And C confirmed showings resulted from those messages When the user opens the funnel analytics for that date range Then option click-through rate displays as round((U/M)*100, 2)% with U and M shown And conversion to confirmed showing displays as round((C/M)*100, 2)% with C and M shown And average time-to-reschedule displays as the mean hours between auto-reply sent and recipient confirmation of a new time, rounded to 2 decimals And cancellations and expired holds are excluded from C and the time-to-reschedule calculation And multiple clicks by the same recipient on the same message within 24 hours count once toward U
Waitlist Fill Rate Calculation
Given H total waitlist holds were created in the selected date range And F holds were promoted to confirmed showings When the user views waitlist metrics for that date range Then waitlist fill rate displays as round((F/H)*100, 2)% with F and H shown And when H = 0, the rate displays 0% with a no-data state instead of errors And holds cancelled before any buyer notification are excluded from H And events flagged as test/sandbox are excluded
Portfolio Rollups & Date/Timezone Filters
Given an agent or broker selects portfolio view with a date range and timezone TZ When analytics are displayed Then portfolio metrics equal the aggregation of listing-level numerators and denominators normalized to TZ, with rounding applied only after aggregation And changing the date range or TZ updates all metrics and charts consistently across portfolio and listing detail views And clicking a portfolio metric drills down to the corresponding listing-level view with the same filters applied
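The rollup rule above, aggregating numerators and denominators first and rounding only after aggregation, matters because averaging per-listing percentages would weight a 2-request listing equally with a 10-request one. A minimal sketch (function name illustrative):

```python
def portfolio_rate(listing_counts) -> float:
    """Aggregate (numerator, denominator) pairs across listings, then round once."""
    pairs = list(listing_counts)
    num = sum(n for n, _ in pairs)
    den = sum(d for _, d in pairs)
    return 0.0 if den == 0 else round(num / den * 100, 2)
```

For listings with counts (1, 2) and (9, 10), summing first gives 10/12 = 83.33%, whereas the naive mean of the per-listing rates (50% and 90%) would report 70%.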
CSV Export for Metrics and Audit Logs
Given a user requests a CSV export for the selected scope (listing or portfolio) and date range When exporting metrics Then a UTF-8 CSV with headers [listing_id, listing_address, date, metric_name, numerator, denominator, value, timezone, generated_at] is generated within 60 seconds for up to 50,000 rows And the exported counts and calculated values match the on-screen metrics for the same filters When exporting audit logs Then a UTF-8 CSV with headers [event_id, timestamp_iso, listing_id, actor_type, actor_id, event_type, message_id, link_id, hold_id, showing_id, override_flag, ip, user_agent] is generated for the date range And timestamps in both exports reflect the selected timezone
Webhook Events Delivery for Key Actions
Given a webhook endpoint is configured and enabled When any of the following occurs: auto_reply_sent, option_link_clicked, hold_created, hold_promoted_to_confirmed, override_applied Then a POST request is delivered within 60 seconds with a JSON payload containing event_id, event_type, occurred_at (ISO), listing_id, actor_type, relevant entity IDs, and an idempotency_key And the request includes an HMAC-SHA256 signature header using the configured secret And delivery is at-least-once with exponential backoff retries for up to 24 hours until a 2xx is received And duplicate deliveries for the same event carry the same idempotency_key And a delivery log shows status, last attempt, next retry, and response code
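The HMAC-SHA256 signature contract above can be sketched as follows. The header name and payload serialization are illustrative assumptions, not a fixed TourEcho wire format; receivers must compare signatures in constant time.

```python
import hashlib
import hmac
import json

def build_webhook_request(event: dict, secret: bytes):
    """Serialize the event and compute its HMAC-SHA256 signature header."""
    body = json.dumps(event, sort_keys=True, separators=(",", ":")).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    headers = {"Content-Type": "application/json",
               "X-Signature-SHA256": signature}   # header name is illustrative
    return body, headers

def verify_webhook(body: bytes, headers: dict, secret: bytes) -> bool:
    """Receiver-side check; constant-time compare avoids timing leaks."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(headers.get("X-Signature-SHA256", ""), expected)
```

Because delivery is at-least-once, receivers should also deduplicate on `idempotency_key` after the signature check passes.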
Immutable Audit Log with Retention Policies
Given audit logging is enabled When a message is sent, an option link is clicked, a hold is created or promoted/demoted, a showing is confirmed, or an override is applied Then an audit entry is appended with timestamp (ms), actor, event_type, listing_id, related entity IDs, channel, and metadata And the audit store is append-only: create operations are allowed; update and delete operations are rejected via UI and API And attempted modifications are logged as separate security events And an organization-level retention period (e.g., 90/180/365 days) can be configured And entries older than the retention period are purged automatically with a tombstone record of the purge action And authorized users can filter logs by listing, event_type, actor, and date range and results match CSV exports and webhook events for the same filters

Override Escalation

Allow time‑boxed exceptions with a single tap. Capture the reason, route approval to the right manager or team lead, and log a clean audit trail. Urgent showings happen when they must, without eroding policy or creating shadow scheduling.

Requirements

One-Tap Override Request Entry
"As a listing agent, I want to request a policy override with one tap and a short reason so that urgent buyers can be accommodated without back-and-forth."
Description

Provide a single-tap action from showing cards and calendar views to initiate an override request. Pre-populate listing, buyer, and time data; require a concise reason; allow selecting the policy being overridden such as blackout hours, minimum lead time, or occupancy cap. Let the requester propose a start and end time for the exception with sensible defaults and a maximum time-to-live. Support optional attachments like messages or photos for context. Display current policy, potential conflicts, and a risk indicator before submission. Validate inputs, handle time zones, and surface success or failure states inline without leaving the scheduling flow.

Acceptance Criteria
One-Tap Entry from Showing Card and Calendar
Given a user is viewing a showing card or a calendar event in TourEcho When the user taps the "Request Override" action once Then the override request sheet opens in-context without full-page navigation And the action is available and visible on both showing cards and calendar views And the action is disabled or hidden only when the listing has no overridable policies or the user lacks permission
Auto-Population of Core Context Fields
Given the override request sheet is opened from a specific showing Then the listing details (ID, address/MLS), buyer info (name/agent), and scheduled start/end time are pre-populated And the pre-populated values exactly match the source showing record And the proposed override start/end fields are editable, while source showing metadata remains read-only
Required Reason and Policy Selection Validation
Given the override request sheet is open When the user attempts to submit without a reason or without selecting a policy to override Then submission is blocked and inline validation messages are shown for each missing field And the reason text must be between 10 and 250 characters; otherwise an inline error is shown And the policy selector offers only: Blackout hours, Minimum lead time, Occupancy cap, Other (if enabled) And submission remains disabled until all validation errors are resolved
Override Window Defaults, Bounds, and Time Zones
Given the override request sheet opens for a selected showing Then the proposed start defaults to the showing start and the proposed end defaults to the showing end And the proposed end must be after the proposed start and within the organization-configured maximum TTL window; otherwise an inline error is displayed and submission is disabled Given the listing and requester may be in different time zones When times are displayed in the UI Then each time shows an explicit time zone label and conversions are correct to the minute And the submitted payload stores times normalized to UTC with the listing time zone included as metadata
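The storage rule above, times normalized to UTC with the listing time zone kept as metadata, can be sketched with the standard library's `zoneinfo`. The function name and the naive-ISO input shape are assumptions for illustration.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def normalize_override_window(start_local: str, end_local: str, listing_tz: str):
    """Interpret naive local ISO times in the listing's zone, store as UTC."""
    tz = ZoneInfo(listing_tz)
    start = datetime.fromisoformat(start_local).replace(tzinfo=tz)
    end = datetime.fromisoformat(end_local).replace(tzinfo=tz)
    if end <= start:
        raise ValueError("proposed end must be after proposed start")
    return {
        "start_utc": start.astimezone(ZoneInfo("UTC")).isoformat(),
        "end_utc": end.astimezone(ZoneInfo("UTC")).isoformat(),
        "listing_tz": listing_tz,   # kept as metadata for display conversions
    }
```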
Pre-Submit Policy Context, Conflicts, and Risk Indicator
Given a policy to override is selected and a proposed window is set Then the current policy text (e.g., blackout window, required lead time, occupancy limit) is displayed in plain language And any potential conflicts for the proposed window are listed with counts (e.g., overlapping showings, cap exceeded, lead time violation) And a risk indicator (Low/Medium/High) is displayed And the risk indicator is computed as: High if any hard conflict (overlap or cap exceeded); Medium if only soft rule violation (e.g., lead time) and no overlaps; Low if no conflicts
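The risk rule stated above is a small decision function, with hard conflicts (overlap or cap exceeded) dominating soft ones. Sketch, with illustrative parameter names:

```python
def risk_indicator(overlap_count: int, cap_exceeded: bool,
                   lead_time_violation: bool) -> str:
    """High if any hard conflict; Medium if only a soft violation; else Low."""
    if overlap_count > 0 or cap_exceeded:
        return "High"        # hard conflict: overlapping showing or cap exceeded
    if lead_time_violation:
        return "Medium"      # soft rule violation only, no overlaps
    return "Low"
```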
Optional Attachments: Message and Photos
Given the override request sheet is open When the user adds an optional message and/or photo attachments Then the message allows up to 500 characters with a live character counter And up to 3 image files (JPEG/PNG) can be attached, each up to 10 MB, with previews and the ability to remove before submission And invalid file types or sizes are blocked with inline errors And attachments are included in the submitted request payload
Inline Submission Feedback Without Leaving Scheduling Flow
Given all validations pass When the user submits the override request Then an inline loading state is shown and the current view remains on the scheduling context And on success, a success confirmation is shown inline and the related showing surfaces a "Pending approval" status tag with the new request ID And on failure, a specific inline error message is shown with a retry option, and the form data is preserved
Time-Box Constraint Engine
"As an operations manager, I want overrides to auto-expire and restore default rules so that exceptions don’t silently become new policy."
Description

Implement a server-side engine that enforces explicit start and end times for each override and automatically reverts to baseline policy at expiration. Prevent chained or overlapping exceptions from extending beyond configured maximum duration. Resolve conflicts when multiple pending overrides target the same listing or time window and clearly communicate the active state. Handle daylight savings and cross-time-zone listings correctly. Cancel or invalidate stale requests and cleanly roll back any temporary schedule changes if an override expires before use. Expose state transitions to the UI and notifications for full transparency.

Acceptance Criteria
Time-Boxed Override Activation Window
Given a listing with a baseline policy and an approved override with explicit start S and end E in the listing’s IANA time zone When current time is within [S, E) Then the engine applies the override rules, sets override.state = active within 1 second, and exposes policy_effective = override Given current time is < S or >= E When evaluating policy Then policy_effective = baseline and override.state is scheduled (if < S) or expired (if >= E) Given an override request where E ≤ S or S/E is missing When submitted Then the engine rejects the request with 400 OVERRIDE_INVALID_WINDOW and no override is created
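The half-open `[S, E)` activation window and its three states can be sketched as a pure evaluation function; the name and error mapping comment are illustrative.

```python
from datetime import datetime, timezone

def override_state(start: datetime, end: datetime, now: datetime) -> str:
    """Evaluate the [S, E) window: scheduled before S, active in [S, E), expired after."""
    if end <= start:
        raise ValueError("OVERRIDE_INVALID_WINDOW")   # maps to HTTP 400 above
    if now < start:
        return "scheduled"   # policy_effective = baseline
    if now < end:
        return "active"      # policy_effective = override
    return "expired"         # reverted to baseline
```

Note the boundaries: at exactly `S` the override is active, and at exactly `E` it has already expired, matching the `[S, E)` interval in the criteria.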
Automatic Reversion to Baseline at Expiration
Given an active override with end time E When current time reaches or exceeds E Then the engine reverts policy_effective to baseline and sets override.state = expired within 2 seconds without manual intervention And a transition record is appended with from=active, to=expired, occurred_at (UTC), actor=system
Guardrail Against Chained/Overlapping Overrides
Given org.maxOverrideDuration = M hours for a listing When a new override O2 is submitted that overlaps or is contiguous with existing approved overrides O1..On such that the union window length exceeds M Then the engine rejects O2 with 409 OVERRIDE_MAX_DURATION and leaves O1..On unchanged Given multiple requests attempt to split one long window into several shorter, adjacent windows to bypass M When evaluated by the engine Then the union-of-windows rule still applies and any request that would exceed M is rejected with 409 OVERRIDE_MAX_DURATION
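The union-of-windows guardrail above, which also catches contiguous windows submitted to bypass the cap, reduces to a merged-interval length check. A minimal sketch using hour offsets (function names are illustrative):

```python
def union_hours(windows) -> float:
    """Total length of the union of (start, end) windows, merging overlaps."""
    total, merged_end = 0.0, float("-inf")
    for start, end in sorted(windows):
        start = max(start, merged_end)   # clip off any already-counted overlap
        if end > start:
            total += end - start
            merged_end = end
    return total

def exceeds_max_duration(existing, candidate, max_hours) -> bool:
    """True -> reject with 409 OVERRIDE_MAX_DURATION; contiguous windows count."""
    return union_hours(list(existing) + [candidate]) > max_hours
```

Splitting one 9-hour window into three adjacent 3-hour requests still yields a 9-hour union, so the third request is rejected when the cap is 8 hours.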
Conflict Resolution for Concurrent Overrides on Same Slot
Given two or more approved overrides for the same listing with overlapping time windows When the overlap window begins Then the engine ensures at most one active override per instant by selecting the highest-priority override (approval_role priority: manager > team_lead > agent; tie-breaker earliest approval_time) And non-selected overrides are marked state = conflicted for the overlapping segment and cannot modify bookings during that period And the engine exposes effective_active_override_id for any instant query on that listing
Correct Handling of Time Zones and Daylight Savings
Given a listing with tz = IANA zone (e.g., America/New_York) and an override defined with S and E in the listing’s local time When persisted Then the engine stores S and E as ZonedDateTime and evaluates activation by UTC instants Given S or E falls on a non-existent local time due to DST forward shift When submitted Then the engine rejects with 422 INVALID_LOCAL_TIME and includes next_valid_instant in the response Given S and E span a DST change (forward or backward) When computing active duration and eligibility boundaries Then the engine honors elapsed UTC time and boundaries align to the correct UTC instants
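Detecting the non-existent local times named above (422 INVALID_LOCAL_TIME) takes a little care in Python, since `zoneinfo` never raises for a skipped wall-clock time. One common technique, sketched here under that assumption, is a round trip through UTC: a real local time survives unchanged, while a time skipped by a DST forward jump shifts.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def is_nonexistent_local_time(naive: datetime, tz_name: str) -> bool:
    """True for wall-clock times skipped by a DST forward jump in tz_name."""
    tz = ZoneInfo(tz_name)
    aware = naive.replace(tzinfo=tz)
    # Round-trip through UTC: skipped times come back shifted forward.
    round_trip = aware.astimezone(ZoneInfo("UTC")).astimezone(tz)
    return round_trip.replace(tzinfo=None) != naive
```

For America/New_York in 2024, clocks jumped from 02:00 to 03:00 on March 10, so 02:30 that day is non-existent while 03:30 is valid.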
Stale Requests and Unused Override Rollback
Given a pending override request not approved within pending_ttl (default 24h) When pending_ttl elapses Then the engine auto-cancels the request with state = stale_canceled and records reason_code = TTL_EXPIRED Given an approved override that expires before any booking or availability change was created under it When E is reached Then the engine rolls back any temporary blocks or availability adjustments within 2 seconds, sets state = expired_unconsumed, and records reverted_items_count in the audit log
State Transition Events for UI and Notifications
Given any override changes state (created, submitted, approved, active, conflicted, expired, canceled, stale_canceled, expired_unconsumed) When the transition occurs Then the engine emits an event within 500 ms containing override_id, listing_id, from_state, to_state, occurred_at (UTC), actor, reason_code, correlation_id, and sequence And the state transitions API returns an ordered, append-only list with monotonic sequence numbers and no gaps for that override
Smart Approver Routing
"As a team lead, I want override requests routed to the right approver with SLAs so that urgent cases are handled fast and by the accountable person."
Description

Route each override request to the correct approver based on listing ownership, team hierarchy, coverage schedule, and reason category. Support primary and fallback approvers, out-of-office detection, and round-robin pools when appropriate. Apply configurable service-level targets for acknowledgement and decision, with escalation paths by severity and price tier. Allow rule-based multi-step approvals for high-risk categories and after-hours windows. Provide a rules editor and safe defaults so organizations can tune routing without code changes.

Acceptance Criteria
Primary Approver Selection by Ownership, Hierarchy, Schedule, and Reason
Given routing rules are configured with a defined precedence order for attributes (e.g., reason category, coverage schedule, team hierarchy, listing ownership) And a listing belongs to a team with a designated owner and coverage schedule And an override request is submitted with a specific reason category When the request is created Then the system selects the primary approver whose rule best matches the request attributes according to the configured precedence And exactly one assignee (user or pool) is set as the current assignee And the decision record stores ruleId, ruleVersion, matchedAttributes, assigneeId, and assignmentTimestamp And the assignee is notified within the configured notification window
Fallback Routing and Out-of-Office Handling
Given the selected primary approver is marked Out of Office or outside their coverage schedule at assignment time When the system assigns the request Then the request is routed to the configured fallback approver according to the rule And an audit entry records fallback_reason (OOO or outside_coverage), priorAssigneeId, newAssigneeId, and timestamp And only the new assignee is notified Given the primary approver does not acknowledge within the configured acknowledgment SLA When the acknowledgment SLA elapses without acknowledgment Then the request is reassigned to the fallback (or next fallback in chain) And the previous assignment is closed with status reassigned_due_to_ack_timeout And the reassignment event is logged and the new assignee is notified
Round-Robin Pool Assignment and Fairness
Given a routing rule targets an approver pool with N members and round-robin is enabled And availability is known for each member (available/unavailable) When M new requests are routed to the pool Then assignments cycle through available members in order, skipping unavailable members at assignment time And over any contiguous window of size equal to the number of available members, each available member receives at most one assignment And after M >= N requests with stable availability, the difference between the most- and least-assigned available members is <= 1 And if a request for the same listing recurs within 24 hours, it is assigned to the prior member if still available; otherwise to the next in sequence And all assignment decisions are logged with poolId, memberId, sequenceIndex, and timestamp
SLA Timers and Escalation by Severity and Price Tier
Given an SLA table defines acknowledgmentTarget and decisionTarget for combinations of severity and priceTier And an escalation path is configured per severity and priceTier When a request is created with a specific severity and priceTier Then acknowledgment and decision timers are started according to the configured targets And reminder notifications are sent at configured warning thresholds before each timer expires And if acknowledgment is not received before acknowledgmentTarget, the request escalates to Level 1 per the configured path And if a decision is not recorded before decisionTarget, the request escalates per the next level in the path And escalations select recipients who are in coverage and not Out of Office, falling back as needed And all timer start, reminder, acknowledgment, decision, and escalation events are timestamped and stored
Multi-Step Approvals for High-Risk and After-Hours
Given a rule defines a multi-step approval sequence for high-risk reason categories or after-hours windows When a matching request is submitted Then Step 1 is assigned to Approver A as defined by the rule And upon Approve at Step 1 within SLA, Step 2 is assigned to Approver B determined by hierarchy or role mapping And if any step is Rejected, the request is marked Rejected and remaining steps are not executed And if any step times out, escalation occurs per the step-specific SLA path And the UI/API expose currentStep, totalSteps, pendingApprover, and remainingSLA for the request And the workflow prevents skipping, re-ordering, or modifying steps without publishing a new rules version
Rules Editor, Validation, and Safe Defaults
Given an organization admin opens the Rules Editor When they create or modify routing rules Then the editor enforces permissions (admins can edit; non-admins read-only) And validates for conflicts, cycles, empty approver pools, unreachable steps, and missing tie-breakers And provides a simulation tool that, given sample inputs, predicts assignee(s), SLA targets, escalation path, and approval steps And requires version name and change note to publish And publishes changes as a new version affecting only new requests (in-flight requests continue under the prior version) And supports one-click rollback to a prior version without data loss And provisions safe default rules for new organizations (single-step to listing owner with manager fallback and baseline SLAs) And records an audit entry for each publish or rollback with actor, version, diff summary, and timestamp
Approver Console and Actions
"As an approver, I want a clear summary and one-tap decisions so that I can act quickly while documenting why."
Description

Offer an approver console available in-app, by email, and through chat integrations to review context, including requested window, buyer notes, impacted policies, and schedule conflicts. Enable one-tap approve or reject, modify time-to-live or window, and request more information from the requester without leaving the thread. Capture a decision rationale, required or optional per policy. Show real-time impact and conflicts before committing a decision. Support approving single showings or a bounded window and ensure all actions are captured for audit.

Acceptance Criteria
Multi‑Channel Approver Console Access & Context Completeness
Given an authorized approver receives an escalation via in‑app, email, or chat When they open the Approver Console from any channel Then the console displays: requested time window (start/end with timezone), buyer notes, impacted policy rules (IDs/names), current schedule conflicts (count and list with timestamps), requester identity and role, listing address/ID, override reason, and current TTL And Then the context shown is identical across channels for the same request And Then unauthorized users or expired links are denied access and the event is logged
One‑Tap Approve/Reject Across App, Email, and Chat
Given the Approver Console is visible When the approver selects Approve or Reject Then the decision is submitted with a single interaction without navigating away from the thread or console And Then the requester is notified in the originating channel with the decision and any rationale And Then the request status transitions to Approved or Rejected and is reflected in the listing timeline and requester view
Modify Time‑to‑Live (TTL) or Requested Window Before Decision
Given a pending escalation When the approver edits TTL or adjusts the requested time window in the console Then the changes validate (end after start, within allowable bounds, timezone preserved) and are previewed without page reload When the approver approves Then the updated TTL/window values are applied to the approval and visible to the requester and in audit
Request More Information Without Leaving the Thread
Given the Approver Console is open When the approver selects Request More Info and enters a prompt Then the inquiry posts in the same message thread (app/email/chat) and the escalation status updates to Needs Info When the requester replies in‑thread Then the reply is attached to the escalation, appears in the console, and the approver can act without opening a new view
Real‑Time Impact & Conflict Preview Prior to Commit
Given a pending decision When the approver views the console Then a real‑time preview shows schedule conflicts, affected showings, and policy violations with severity before committing When the approver modifies the window or TTL Then the impact preview updates immediately without page reload
Approve Single Showing or Bounded Window
Given a pending escalation When the approver chooses "This showing only" Then approval applies only to the specified showing instance When the approver chooses "Approve bounded window" and defines start/end, scope (listing), and TTL Then the approval applies to that bounded window for the defined scope, and subsequent requests within the window are marked approved without additional approver action And Then the window approval appears in the listing timeline and can be revoked
Decision Rationale Capture and Immutable Audit Trail
Given a decision is being made When the applicable policy requires rationale Then Approve/Reject is disabled until a non‑empty rationale is entered When rationale is optional Then the rationale field accepts input but does not block decisions Then upon decision, an audit record is created including: request ID, listing ID, actor ID and role, channel, timestamp, action (approve/reject/request‑info), rationale text (if any), before/after values for TTL/window, impact preview snapshot, impacted policy list, and conflict list; the record is immutable and filterable in audit views
Notifications and Escalation SLAs
"As a requesting agent, I want timely updates and automatic escalation so that urgent showings aren’t blocked by slow approvals."
Description

Deliver real-time notifications to requesters and approvers across push, email, SMS, and chat with clear states: submitted, acknowledged, approved, rejected, and expired. Trigger reminders before SLA deadlines and escalate to alternates when thresholds are missed. Respect organizational quiet hours while allowing urgent paths that override notification suppression with explicit user consent. Provide a compact digest view for daily summaries and failover channels if primary delivery fails. All notifications include deep links to act or view status.

Acceptance Criteria
Real-time Multi-Channel State Notifications
Given a requester submits an override escalation request When the request is saved Then send a notification to the requester and assigned approver via all enabled channels (push, email, SMS, chat) within 15 seconds And include the current state label ("submitted","acknowledged","approved","rejected","expired") and a deep link to act or view status And update notifications on each state change within 15 seconds of the change
Pre-Deadline Reminder Triggers
Given an approval SLA window is configured for acknowledgement (e.g., 30 minutes) And no acknowledgement has been recorded When the time remaining reaches the reminder threshold (e.g., 5 minutes before SLA) Then send a reminder notification to the approver via all enabled channels And include remaining time, a deep link, and the request identifier And cancel all pending reminders immediately upon acknowledgement or decision And do not send duplicate reminders for the same threshold
Automatic Escalation on Missed SLA
Given a primary approver is assigned and an alternate route exists And the acknowledgement SLA expires without acknowledgement When the SLA expires Then set the original approval path state to "expired" And notify alternate approver(s) per routing rules via enabled channels within 15 seconds And include original request context and a deep link And create only one escalation per expired request And notify the requester that the request has been escalated
Quiet Hours with Explicit Urgent Override
Given organization quiet hours are configured in the recipient's timezone When a non-urgent notification is generated during quiet hours Then suppress real-time delivery and queue for the next allowed window and the daily digest And record the suppression event in the audit log When the requester marks the request as urgent Then present an explicit consent prompt describing the quiet-hours bypass And upon consent, deliver notifications via enabled channels within 60 seconds And record consent timestamp, actor, and reason in the audit log
Daily Compact Digest Delivery
Given the user has pending notifications not acted on And the user has a daily digest time configured (default 08:00 local time) When the digest time occurs Then send a single compact digest summarizing counts by state with deep links to each item And exclude items already acted on or sent via urgent path in the prior 24 hours And deliver the digest via email and chat by default, respecting channel preferences
Channel Failover with Delivery Guarantees
Given a notification is queued for the user's highest-preference channel When that channel returns a definitive error or fails after 3 retries over 5 minutes Then mark the attempt as failed with provider error code and timestamp And attempt delivery via the next preferred channel within 30 seconds And continue until at least one channel confirms delivery or all channels fail And emit a monitoring event and surface an in-app alert to the requester if all channels fail
Deep Links Enable One-Tap Action and Status
Given an approver receives a notification containing a deep link When the deep link is opened within 24 hours of send Then open the approval view with request context and allow Acknowledge, Approve, or Reject per policy And require MFA if mandated by organization policy And upon action, update the request state and notify the requester within 15 seconds And if opened after 24 hours, display an expiration message and route to the current status page
Audit Trail and Compliance Export
"As a broker-owner, I want a complete audit trail and export so that we can prove compliance and analyze patterns."
Description

Maintain an immutable, searchable log for every override lifecycle event, capturing requester identity, approver identity, timestamps, reasons, impacted policies, time-to-live values, decisions, and any modifications. Link records to listings, showings, and conversations. Provide filters, retention controls, and redaction options per organization policy. Support CSV export and secure API access for broker compliance systems. Ensure entries are tamper-evident with unique identifiers and record provenance for legal defensibility.

Acceptance Criteria
Log Creation on Override Request
Given an authenticated agent submits an override for a specific listing and scheduled showing with a reason, impacted policies, and a time-to-live value When the override request is created Then an immutable audit record is appended capturing: override_id, org_id, requester_id, requester_role, listing_id, showing_id, conversation_id (if available), reason, impacted_policies[], ttl_value, created_at (UTC ISO8601), client_ip, user_agent, and record_provenance.source (ui/web/mobile/api) And the record is assigned a globally unique identifier (UUID/ULID) And attempts to update any captured fields do not mutate the record but create a new "modification" event linked by parent_id to the original, preserving the original values
Approval Decision Logged with Provenance
Given a manager or team lead reviews an override request When they approve, reject, or adjust the TTL Then a decision event is logged capturing: override_id, approver_id, approver_role, decision (approved/rejected), decision_reason (optional), previous_status, new_status, ttl_value_after, decided_at (UTC ISO8601), and record_provenance (actor type, channel, request_id) And any changes to TTL or reason are recorded as discrete "modification" events with before_value, after_value, modified_by, and modified_at (UTC) And the audit trail for the override displays a complete chronological lifecycle: request → decision → modifications, ordered by timestamp
Search and Filter Audit Log by Attributes
Given a compliance administrator opens the audit log and sets filters for date range, requester_id, approver_id, decision, impacted_policy, listing_id, showing_id, conversation_id, ttl_value range, and free-text in reason When they run the search Then only matching records are returned, sorted by created_at descending by default, and can be toggled to ascending And the system paginates results with accurate total_count and next_cursor values And the query completes within 1.0s P95 for datasets up to 1,000,000 records
CSV Export with Redaction Controls Applied
Given a compliance administrator applies any set of filters and selects Export CSV with redaction policy toggled per organization When the export is generated Then the CSV contains only the filtered records and includes columns: override_id, org_id, listing_id, showing_id, conversation_id, requester_id, approver_id, decision, reason (subject to redaction), impacted_policies, ttl_value, created_at, decided_at, modified_at, provenance.source, integrity_hash, prev_hash And fields designated as PII by the organization are redacted or masked according to policy_version, with a redaction_metadata column indicating policy_version and redaction_fields And the file is UTF-8 encoded, comma-delimited, with a header row; timestamps are ISO8601 UTC; row count equals the UI result count And the download is authorized for the requesting user and expires within 15 minutes
Tamper-Evidence and Unique Identifier Integrity
Given audit trail records exist for an override lifecycle When an integrity verification is executed Then each record exposes unique_id, integrity_hash (content hash), and prev_hash to form an append-only chain per override And recomputing the hashes validates the chain end-to-end with status "valid" for untampered data And any mutation attempt results in a hash mismatch that sets tamper_evident_flag=true and emits a verification failure in the integrity check endpoint
Secure API Access for Compliance Systems
Given a broker compliance system possesses a provisioned service account with the Audit:Read scope When it requests audit trail data via the Audit API over TLS 1.2+ using OAuth2 client credentials Then the API returns only the organization's audit records, honoring the same filters as the UI (date range, identities, decisions, impacted policies, listing/showing/conversation IDs, TTL, reason keyword) And results are paginated via cursor-based pagination (limit, next_cursor) with consistent ordering by created_at And the API supports JSON and CSV formats, enforces rate limits, and returns 401/403 for invalid credentials or insufficient scope
Retention and Redaction Policy Enforcement
Given an organization configures an audit retention period and PII redaction rules When the scheduled retention job runs Then records older than the retention period are purged and replaced by tombstone entries containing override_id, purge_timestamp (UTC), and purge_reason="retention_expiry" And purge actions themselves are logged as audit events And records under legal hold are excluded from purge And PII redaction requests generate a redaction event that removes/masks configured fields while retaining non-PII metadata and includes redaction_provenance (who, when, policy_version)
Role-Based Controls and Guardrails
"As a compliance lead, I want configurable guardrails on overrides so that we prevent abuse while keeping the path open for legitimate urgency."
Description

Implement fine-grained permissions determining who can request, approve, modify, or escalate overrides by role, team, and listing. Enforce configurable daily and weekly request limits, hard blocks for sensitive policies, and two-person approval for high-risk categories. Require minimum reason length and structured reason categories to improve reporting. Optionally step up authentication for high-impact requests with multi-factor prompts. Display inline guidance and just-in-time education when users hit guardrails to encourage compliant behavior.

Acceptance Criteria
Role- and Listing-Scoped Override Request Permissions
Given a user has the request_override permission for their role on a specific team/listing When they attempt to submit an override for that listing Then the submission succeeds and an audit log is created with userId, role, teamId, listingId, and timestamp. Given a user lacks the request_override permission for the target team/listing When they attempt to submit an override Then the system blocks the action, returns error code PERM_DENIED, and displays a policy explanation message. Given a user with permissions on Team A only When they attempt to submit an override for a listing on Team B Then the system blocks the action with PERM_DENIED and no record is created beyond an access-denied audit entry. Given role/team permissions are updated by an admin When the same user attempts their next override action Then the updated permissions take effect immediately without requiring logout.
Two-Person Approval for High-Risk Overrides
Given an override category flagged as HighRisk with two-person approval enabled When the requester submits the override Then the request status is PendingApproval and not active. Given a first approver with approve_override permission (not the requester) When they approve the request Then the status becomes PendingSecondApproval and the override remains inactive. Given a second approver with approve_override permission who is distinct from both requester and first approver When they approve within the configured approval window Then the status becomes Approved and the override activates immediately within the defined time box. Given an approver attempts to approve their own request When they submit approval Then the system blocks with error SELF_APPROVAL_BLOCKED and logs the attempt. Given there are insufficient eligible approvers available (by role/team/listing) When the first approval is recorded Then the request remains PendingSecondApproval and a notification is sent to the requester and approver group; the override is not applied.
Configurable Daily and Weekly Override Request Limits
Given a user has dailyLimit=N and weeklyLimit=M configured for request submissions When they attempt to submit request number N+1 within the same day (team timezone) Then the system blocks the submission, returns LIMIT_DAILY_EXCEEDED, and shows remaining weekly capacity if any. Given the same user has submitted K requests this week where K=M When they attempt to submit another request within the same week (team timezone) Then the system blocks with LIMIT_WEEKLY_EXCEEDED and displays just-in-time guidance. Given the period boundary is reached (midnight for daily, week start per configuration) When the user attempts the next submission after the boundary Then counters reset appropriately and the submission is evaluated against the new period. Rule: Limit counts include all submitted requests in the period regardless of approval outcome; withdrawn or canceled requests still count unless a configuration flag explicitly excludes them.
Hard Blocks for Sensitive Policy Categories
Given a policy category is configured as Sensitive with hardBlock=true When any user attempts to create or approve an override in that category Then the UI disables submission/approval controls and the API rejects with POLICY_HARD_BLOCK; no override is created or activated. Given an admin changes a category's hardBlock setting When a user next attempts an action in that category Then the new setting takes effect within 60 seconds system-wide. Then the user is shown an inline message explaining the hard block with a link to the governing policy; an audit log entry records the denied attempt with reason category and actor context.
Step-Up MFA for High-Impact Requests
Given a category is configured with stepUpMFA=true for submit and/or approve actions When a user triggers the action and does not have a valid recent MFA session within the configured rememberFor window Then the system prompts for MFA and requires successful completion before proceeding. Given the user fails MFA or the prompt times out within 120 seconds When the action attempt is evaluated Then the request is not created/approved, the user sees MFA_FAILED, and an audit event records the failure (without sensitive secrets). Given the user completes MFA successfully When the action is retried automatically Then the original action completes (subject to other guardrails) and the MFA method and timestamp are appended to the audit record.
Structured Reason and Minimum Detail Requirements
Given reasonCategory is required and populated from a configurable taxonomy When a user submits a request Then they must select a valid category value or the submission is blocked with REASON_CATEGORY_REQUIRED. Given minimumReasonLength=K (characters) is configured When the user enters a reason whose trimmed length < K Then the submission is blocked with REASON_TOO_SHORT and a live character counter indicates remaining characters needed. Then the system stores reasonCategory, reasonText, and character count for reporting and includes them in the audit trail; whitespace-only input is rejected.
Inline Guidance and Just-in-Time Education on Guardrails
Given a user action is blocked by a guardrail (PERM_DENIED, LIMIT_DAILY_EXCEEDED, LIMIT_WEEKLY_EXCEEDED, POLICY_HARD_BLOCK, MFA_REQUIRED/FAILED) When the block occurs Then a contextual guidance panel appears inline with a role- and category-specific explanation, next steps, and links to policy or help. Then the guidance panel does not navigate the user away, and the blocked action cannot proceed until the underlying condition is resolved. Given guidance content is updated by an admin in the CMS When a user next triggers the same guardrail Then the updated content is displayed within 5 minutes; a UX_EVENT guidance_shown is emitted with guardrail type and content version.

Team Rebalance

If a requested time violates an agent’s quiet hours or buffers, auto‑route the showing to on‑duty teammates or showing assistants who meet coverage rules. Preserves the booking, respects wellness policies, and keeps sellers happy with uninterrupted availability.

Requirements

Quiet Hours & Buffer Enforcement
"As a listing agent, I want my quiet hours and buffers automatically enforced so that I’m not booked during protected times while still preserving the buyer’s requested slot."
Description

Detect and enforce per-agent quiet hours and pre/post-showing buffers at scheduling time. The system must be time zone aware, support office-wide wellness policies, holiday exceptions, and per-listing overrides. When a requested slot violates these constraints, prevent direct assignment to the protected agent and trigger Team Rebalance without rejecting the buyer’s requested time.

Acceptance Criteria
Quiet Hours Violation Triggers Rebalance Without Rejection
Given Agent A has quiet hours 20:00–08:00 in America/New_York And a buyer requests a showing at 20:30 for Listing L displayed in America/Chicago When the system evaluates eligibility Then it converts the requested time to Agent A’s timezone and detects it is within quiet hours And it prevents direct assignment to Agent A And it triggers Team Rebalance with the original requested time preserved And no rejection message is shown to the buyer And the confirmation presented to the buyer reflects the requested time slot
Pre/Post-Showing Buffer Enforcement Blocks Conflicts
Given Agent A has a confirmed showing for 10:00–10:30 and buffers of 15 minutes before and after When a new request for 09:30–10:00 is received Then the pre-show buffer (09:45–10:00) is violated, so direct assignment to Agent A is blocked and Team Rebalance is triggered while preserving the requested time And when a new request for 10:40–11:10 is received Then the post-show buffer (10:30–10:45) is violated, so direct assignment to Agent A is blocked and Team Rebalance is triggered while preserving the requested time
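The buffer check above reduces to interval overlap against the showing window expanded by its pre/post buffers. A minimal sketch with half-open intervals (the function name and default buffer sizes are illustrative):

```python
from datetime import datetime, timedelta

def violates_buffer(existing_start: datetime, existing_end: datetime,
                    req_start: datetime, req_end: datetime,
                    pre: timedelta = timedelta(minutes=15),
                    post: timedelta = timedelta(minutes=15)) -> bool:
    """True when the requested window overlaps the existing showing
    expanded by its buffers -> block direct assignment, trigger rebalance."""
    protected_start = existing_start - pre
    protected_end = existing_end + post
    # Standard half-open interval overlap test.
    return req_start < protected_end and req_end > protected_start
```

For the 10:00-10:30 showing in the criterion, both the 09:30-10:00 and 10:40-11:10 requests fall inside the protected 09:45-10:45 window.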
Time Zone-Aware Policy Evaluation
Given Listing L is in America/Denver, Agent A is in America/New_York, and Office O is in America/Los_Angeles And Agent A’s quiet hours are 21:00–07:00 in America/New_York When a buyer selects a 16:30 slot as shown in the listing’s timezone Then the system evaluates Agent A’s quiet hours using America/New_York and determines if the slot is within protected hours And office-level wellness policies are evaluated using the office timezone And the time displayed to the buyer remains 16:30 in America/Denver And daylight saving time offsets are correctly applied for all timezones involved
Office Wellness Policy Applied to Agents Without Custom Settings
Given Office O defines default quiet hours 21:00–07:00 and 15-minute pre/post buffers And Agent B has no custom quiet hours or buffer settings When a buyer requests a 06:45 showing for any listing represented by Agent B Then the system treats Agent B as protected by the office policy, blocks direct assignment, and triggers Team Rebalance while preserving the requested time And when Agent C defines custom quiet hours 22:00–06:00, the office defaults no longer apply to Agent C
Holiday Exception Rules Modify Enforcement
Given Office O marks 2025-11-27 as a holiday with an exception rule to suspend quiet hours for the day while keeping buffers enforced And Agent A’s normal quiet hours would otherwise block a 07:30 request on that date When a 07:30 showing is requested on 2025-11-27 Then the system allows direct assignment with respect to quiet hours but still enforces pre/post buffers And when the holiday exception rule is set to extend quiet hours to all-day for 2025-12-25 Then any showing request on that date is treated as violating quiet hours for protected agents; direct assignment is blocked and Team Rebalance is triggered while preserving the requested time
Per-Listing Overrides Take Precedence for That Listing
Given Listing L defines an override of quiet hours to 23:00–06:00 and buffers to 5 minutes, regardless of agent defaults And Agent A’s default quiet hours are 20:00–08:00 with 15-minute buffers When a buyer requests a 22:30 showing for Listing L Then the system permits direct assignment with respect to quiet hours (since 22:30 is outside 23:00–06:00) but enforces the 5-minute buffers defined by the listing And when a 07:30 request is made for Listing L Then the system blocks direct assignment per the listing’s 23:00–06:00 quiet hours and triggers Team Rebalance while preserving the requested time
Coverage Rules Engine
"As a broker‑owner, I want to define who can cover which listings and when so that reassignments follow compliance and service standards."
Description

A configurable rules engine that determines eligible coverage candidates based on role (co‑listing agent, showing assistant), licensure requirements, geography/service area, property type and price band, on‑duty schedules, capacity limits, travel radius, and conflict‑of‑interest filters. Exposes a deterministic scoring/ranking API for Team Rebalance to select the best candidate consistently.

Acceptance Criteria
Role and Licensure Eligibility Filtering
Given a showing request with required roles and jurisdiction When the engine evaluates candidates Then only candidates whose role is in the allowed set for coverage are considered And the candidate holds an active license valid for the property’s jurisdiction on the requested date And candidates with expired, suspended, or missing licenses are excluded with disqualification code LICENSE_INELIGIBLE And the response includes eligibility flags per candidate for role and licensure
Geography, Service Area, and Travel Radius Compliance
Given the property location and requested time and each candidate’s service areas and max travel radius When evaluating eligibility Then include candidates whose service area includes the property or distance to property is less than or equal to the candidate’s max travel radius And exclude candidates exceeding radius with code OUT_OF_RADIUS And the response includes the computed distance in meters for each considered candidate
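The computed distance in meters could be a great-circle (haversine) value; whether the engine uses haversine or road distance is a `distanceMetric` configuration detail, so this is illustrative only.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def distance_meters(lat1: float, lon1: float,
                    lat2: float, lon2: float) -> float:
    """Haversine great-circle distance between two lat/lon points,
    as one candidate implementation of the OUT_OF_RADIUS check."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))
```

A candidate is then excluded when `distance_meters(...) > candidate.max_travel_radius` and the property is outside all of their service areas.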
On‑Duty Schedule and Capacity Limit Enforcement
Given the requested time window and each candidate’s on‑duty schedule and capacity limits for overlapping intervals When evaluating eligibility Then include only candidates whose on‑duty window fully covers the requested time window And exclude candidates at or above capacity for the requested time window with code OVER_CAPACITY And capacity calculation counts all non‑cancelled assigned showings that overlap the requested window And the response includes remaining capacity for each included candidate
Property Type and Price Band Eligibility
Given the listing’s property type and list price and each candidate’s allowed property types and price band When evaluating eligibility Then include only candidates that allow the listing’s property type and where min_price_band <= list_price <= max_price_band And exclude ineligible candidates with code TYPE_OR_PRICE_INELIGIBLE
Conflict‑of‑Interest Exclusion
Given configured conflict‑of‑interest rules and candidate affiliations When evaluating eligibility Then exclude candidates matching any conflict rule with code CONFLICT_OF_INTEREST And include a per‑candidate list of triggered conflict rule IDs in the response metadata
Deterministic Scoring and Ranking API
Given an identical request payload and unchanged system state When the ranking API is invoked multiple times Then the returned candidates, scores, and order are identical across calls And ties are broken deterministically in this order: role_priority (higher first), current_load (lower first), travel_distance (shorter first), next_availability (earlier first), candidateId (lexicographically smaller first) And the API returns HTTP 200 with JSON containing requestId, generatedAt, and candidates[] items with candidateId, score (0–100), rank (1..N), reasons[] And P95 latency for requests with <= 200 candidates is <= 300 ms
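The deterministic tie-break chain above reduces to one composite sort key. A minimal sketch, assuming score is the primary criterion and that a plain tuple key captures the stated order; `Candidate` and its field names are illustrative, not the real API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    candidate_id: str
    score: int            # 0-100, higher is better
    role_priority: int    # higher first
    current_load: int     # lower first
    travel_distance_m: int  # shorter first
    next_availability: int  # epoch seconds, earlier first

def rank(candidates):
    """Sort by score descending, then break ties in the documented order:
    role_priority, current_load, travel_distance, next_availability,
    candidateId (lexicographic)."""
    return sorted(
        candidates,
        key=lambda c: (
            -c.score,
            -c.role_priority,
            c.current_load,
            c.travel_distance_m,
            c.next_availability,
            c.candidate_id,
        ),
    )
```

Because the key is a pure function of candidate state, identical inputs always yield an identical order, which is what the repeated-invocation criterion checks.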
Explainability and Disqualification Audit
Given a ranking request When results are returned Then each included candidate provides reasons[] entries with factor names and weight percentages that sum to 100% And excluded candidates are listed under disqualified[] with candidateId and disqualificationCodes[] while omitted from candidates[] And the response includes metadata echoing evaluation parameters used (e.g., distanceMetric, capacityWindow, ruleVersion) for auditability
Auto‑Routing & Assignment Logic
"As a listing agent, I want conflicting requests automatically routed to the best on‑duty teammate so that the buyer keeps their time and the seller experiences uninterrupted availability."
Description

Automatically reroute conflicting showing requests to the highest‑ranked eligible teammate while preserving the requested time. Support strategies like round‑robin, load balancing, and primary/backup assignment. Place a temporary hold on the slot during confirmation, finalize assignment on accept (or auto‑accept per policy), and update the showing’s assigned agent and instructions across TourEcho.

Acceptance Criteria
Route to Highest-Ranked Eligible Teammate
- Given a requested time that violates the primary agent’s quiet hours or buffers and at least one teammate meets coverage rules, When the request is submitted, Then the system assigns the showing to the highest-ranked eligible teammate and preserves the requested time.
- Given multiple eligible teammates with different ranks, When routing is executed, Then the teammate with the highest rank among eligible candidates is selected.
- Given the selected teammate is found to have a conflict during final validation, When routing is finalized, Then that teammate is excluded and the next highest-ranked eligible teammate is selected without changing the requested time.
- Given routing executes under normal load, When the request is submitted, Then routing completes within 2 seconds at p95.
- Given the assignment is made, When the user views the showing details, Then the assigned agent field reflects the selected teammate and the scheduled time remains unchanged.
Round-Robin Assignment Sequencing
- Given assignment strategy is set to Round Robin and a roster of N eligible teammates, When N consecutive compatible requests are received, Then each teammate receives exactly one assignment in order without repetition.
- Given some teammates are ineligible due to quiet hours or buffers, When routing occurs, Then the sequence skips ineligible teammates and continues with the next eligible while preserving the round-robin order state.
- Given two requests arrive within 100 ms, When round-robin position is updated, Then assignments are deterministic and no single teammate receives both requests.
- Given the round-robin strategy is enabled, When the service restarts, Then the current pointer state is persisted and continues from the last assigned teammate.
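The skip-and-continue pointer logic described above might be sketched as follows (a minimal sketch: the roster order is fixed, `is_eligible` stands in for the quiet-hours/buffer checks, and persisting the returned pointer across restarts is left to the caller):

```python
def round_robin_assign(roster, pointer, is_eligible):
    """Pick the next eligible teammate after `pointer`, wrapping around.

    roster: ordered list of teammate IDs (fixed order).
    pointer: index of the last assigned teammate (-1 if none yet).
    is_eligible: predicate applying quiet-hours/buffer checks.
    Returns (assignee, new_pointer), or (None, pointer) if nobody is
    eligible. The new pointer must be persisted so a restart resumes
    from the last assigned teammate.
    """
    n = len(roster)
    for step in range(1, n + 1):
        idx = (pointer + step) % n
        if is_eligible(roster[idx]):
            return roster[idx], idx
    return None, pointer
```

Skipped teammates keep their place in the rotation: the pointer only advances to teammates who actually received an assignment.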
Load-Balanced Assignment by Active Capacity
- Given assignment strategy is set to Load Balancing and multiple teammates are eligible, When routing a request, Then the teammate with the lowest count of future accepted showings in the next 7 days is selected.
- Given a tie on load between two or more eligible teammates, When routing occurs, Then the highest-ranked teammate among the tied candidates is selected.
- Given the initially selected teammate declines or times out, When re-routing occurs, Then the next lowest-load eligible teammate is selected while preserving the requested time.
- Given an assignment is finalized, When loads are recalculated, Then the selected teammate’s load increases by one and others remain unchanged.
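The lowest-load-then-rank selection collapses to a single composite min key. A sketch, assuming `rank_of` maps teammate IDs to a rank number where lower means higher rank (an assumption of this example, not a stated convention):

```python
def pick_by_load(eligible, load_next_7_days, rank_of):
    """Lowest 7-day accepted-showing count wins; ties go to the best rank.

    eligible: teammate IDs that already passed coverage-rule checks.
    load_next_7_days / rank_of: teammate ID -> int.
    """
    return min(eligible, key=lambda t: (load_next_7_days[t], rank_of[t]))
```

On a decline or timeout, the declined teammate is simply removed from `eligible` and the same selection reruns, preserving the requested time.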
Primary/Backup with Auto-Accept Finalization
- Given a primary agent is unavailable due to quiet hours or buffers and a configured backup exists, When a request is submitted, Then a temporary hold is placed on the requested slot for the backup and the backup is notified.
- Given the backup’s policy is Auto-Accept, When the request is routed, Then the assignment is finalized immediately and the hold converts to a confirmed booking.
- Given the backup declines or does not respond within the default hold TTL of 10 minutes, When the TTL expires, Then the hold is released and the system routes to the next eligible teammate or marks the request as Action Required if none exist.
- Given a manual accept by the backup within the TTL, When accepted, Then the assignment finalizes and an audit log entry records the actor and timestamp.
Temporary Hold and Concurrency Controls
- Given a Hold TTL of 10 minutes and two requests target the same listing and time, When the first request enters confirmation, Then a hold is created and the second request is placed in a wait state or offered alternate times.
- Given the held request is declined or times out, When the hold is released, Then the next waiting request is processed for the same time without creating a double booking.
- Given multiple concurrent requests arrive for the same slot, When holds are applied, Then at most one confirmed or held booking exists for that slot at any time.
- Given a hold exists, When viewing the listing calendar, Then the slot displays as Held to internal users for the duration of the TTL.
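The at-most-one-hold-per-slot invariant can be sketched with a TTL-stamped map. This is an in-memory stand-in only; a real deployment would need an atomic compare-and-set in a shared store (database row lock, Redis `SET NX`, or similar) to hold across processes:

```python
import time

class SlotHolds:
    """At most one active hold per (listing, slot); later requests must
    wait or take alternate times. Single-process sketch of the rule."""

    def __init__(self, ttl_seconds=600):   # 10-minute default TTL
        self.ttl = ttl_seconds
        self._holds = {}   # (listing_id, slot) -> (holder, expires_at)

    def try_hold(self, listing_id, slot, holder, now=None):
        now = time.time() if now is None else now
        key = (listing_id, slot)
        current = self._holds.get(key)
        if current and current[1] > now and current[0] != holder:
            return False          # another request already holds the slot
        self._holds[key] = (holder, now + self.ttl)
        return True

    def release(self, listing_id, slot):
        """Called on decline/timeout so the next waiting request proceeds."""
        self._holds.pop((listing_id, slot), None)
```

Expired holds are treated as free, so a timed-out hold never blocks the next waiting request.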
System-Wide Updates and Notifications on Finalization
- Given an assignment is finalized (accept or auto-accept), When viewing the showing in TourEcho web, mobile, and calendar integrations, Then the assigned agent, instructions, and contact info are updated consistently across all surfaces within 30 seconds at p95.
- Given finalization occurs, When notifications are sent, Then the assigned agent, seller, and requesting agent receive messages containing the confirmed assignee, time, and updated access instructions.
- Given finalization occurs, When inspecting the audit log, Then an entry records the routing strategy used, eligibility reasons, ranking/load metrics, hold timestamps, and the finalizing actor or policy.
Consent & Notification Workflow
"As a teammate, I want timely notifications and a simple accept/decline flow so that I can take or pass a reassigned showing without confusion or delays."
Description

Provide a clear accept/decline workflow with SLA timers and multichannel alerts (in‑app, SMS, email) to the proposed cover agent and the original agent. If declined or timed out, automatically advance to the next candidate per rules. Notify the buyer’s agent only when the time or access instructions change; otherwise, preserve a seamless experience for sellers and buyers.

Acceptance Criteria
SLA-timed accept/decline for proposed cover agent
Given a showing request violates the original agent’s quiet hours or buffers and a cover agent is proposed When the proposal is created Then an SLA response timer starts at 10 minutes by default (configurable 2–30 minutes at the team level) And the cover agent sees clear Accept and Decline actions with visible time remaining And accepting within the SLA reassigns the showing to the cover agent within 5 seconds and notifies the original agent on all channels And declining within the SLA closes the proposal and triggers auto-advance rules
Auto-advance to next candidate on decline/timeout
Given a cover agent proposal is Declined or the SLA timer expires without a response When the event occurs Then the system advances to the next eligible candidate per the team’s ranking rules within 5 seconds And the SLA timer restarts for the new candidate And the process repeats until a candidate Accepts or the list is exhausted And if exhausted, the booking remains preserved as Needs Assignment, the original agent and team owner are alerted, and no buyer’s agent notification is sent unless time or access changes
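The decline/timeout auto-advance loop above might look like this (a sketch: `propose` is an assumed callback that delivers the proposal, waits out the SLA timer, and returns "accept", "decline", or "timeout"; a real system would drive this from timer events rather than a blocking loop):

```python
def run_consent_chain(candidates, propose, sla_seconds=600):
    """Offer the showing to each ranked candidate in turn.

    candidates: eligible cover agents in ranking order.
    propose(candidate, sla_seconds): blocks until the candidate accepts,
        declines, or the SLA expires, returning the outcome string.
    Returns the accepting candidate, or None when the list is exhausted
    (the booking then stays preserved as Needs Assignment).
    """
    for candidate in candidates:
        outcome = propose(candidate, sla_seconds)
        if outcome == "accept":
            return candidate
        # "decline" or "timeout": auto-advance; the SLA timer restarts
        # implicitly with the next propose() call.
    return None
```

A `None` result is the signal to alert the original agent and team owner while leaving the requested time untouched.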
Multichannel alerts to cover and original agents
Given a cover agent proposal is created or changes state (Accepted, Declined, Timed Out, Advanced) When notifications are dispatched Then the proposed cover agent and the original agent each receive in-app, SMS, and email alerts within 30 seconds of the state change And each alert contains property address, requested start time with timezone, SLA time remaining (if applicable), and a deep link to the decision screen And alert delivery outcomes are logged per channel with success or failure status And failed SMS/email attempts are retried up to 2 times with exponential backoff
Buyer’s agent change-only notification policy
Given a reassignment workflow is in progress for a booked showing When only the assigned agent changes with no change to start time or access instructions Then the buyer’s agent receives no notification And when the start time changes by any amount or access instructions change Then the buyer’s agent receives a single consolidated notification within 30 seconds containing the updated start time and/or access instructions And multiple changes within a 2-minute window are deduplicated into one notification
Secure, authenticated accept/decline actions
Given a cover agent receives an alert with Accept/Decline When the agent opens the deep link Then the agent must be authenticated or present a valid signed one-time token that expires in 15 minutes And expired or invalid tokens result in a blocked action with a prompt to authenticate And successful Accept/Decline actions are acknowledged in-app within 2 seconds and reflected in assignment state within 5 seconds
Audit trail and SLA metric reporting
Given a showing undergoes the consent workflow When events occur (proposal created, alerts sent per channel, accept, decline, timeout, auto-advance, final assignment, buyer notification) Then each event is logged with UTC timestamp, actor, channel, outcome, and related showing/proposal IDs And an audit view/API returns the ordered timeline with computed durations (e.g., response time vs. SLA) within 1 second for queries under 100 events And logs are retained for at least 365 days
Timezone and quiet-hours compliance in consent flow
Given a team has defined listing timezone, quiet hours, buffers, and coverage rules When generating proposals and notifications Then all timestamps render in the listing’s timezone including timezone abbreviation in outbound messages And only on-duty eligible candidates per coverage rules are proposed And original agent notifications during off-duty periods follow the team’s off-duty channel preferences (e.g., FYI-only) And no action-required alerts are sent to off-duty agents who are not candidates
Calendar & Availability Integration
"As a showing assistant, I want rebalanced showings to appear on my calendar with buffers and travel time so that I can plan my day accurately."
Description

Two‑way sync with Google, Outlook, and iCloud to read free/busy and write confirmed rebalanced showings to the assigned teammate’s calendar. Honor privacy settings, insert buffers and optional travel time, detect double‑booking, and propagate updates in near‑real‑time via webhooks to keep TourEcho and external calendars consistent.

Acceptance Criteria
Cross-Provider Free/Busy Read for Rebalance Window
Given a teammate with connected Google, Outlook, or iCloud calendars and configured quiet hours and buffers When TourEcho requests free/busy for a candidate showing window Then TourEcho retrieves free/busy from the provider and normalizes it to the teammate’s time zone And TourEcho excludes periods marked Busy, Tentative, Out of Office, and the teammate’s configured quiet hours and buffers from available slots And the availability query completes within 2 seconds p95 per provider response And if any provider call fails, TourEcho surfaces a retriable error and logs provider/type/latency without exposing sensitive event details
Privacy-Aware Event Write to Assigned Teammate Calendar
Given a rebalanced showing is assigned to a teammate with a connected calendar and a privacy preference (private, masked, full) When TourEcho writes the event to the external calendar Then if privacy=private, the event visibility is set to Private (or equivalent), title is "Busy", and no location/description is stored And if privacy=masked, the title is "Showing – <Street + City>", location is the city only, and no client names or contact data are stored And if privacy=full, the title is "Showing – <Full Property Address>", and description contains the showing reference ID only And in all cases, the provider’s visibility/sensitivity field matches the intended privacy level and the operation returns the external event ID
Automatic Buffers and Optional Travel Time in Event Duration
Given team-level settings for pre-buffer and post-buffer minutes and an optional travel time feature toggle And a rebalanced showing is confirmed for a teammate When TourEcho creates calendar entries Then it creates a primary event for the exact showing window And it creates adjacent Busy events for the pre-buffer and post-buffer with titles "Buffer" (or "Busy" if privacy=private) And if travel time is enabled, it creates Busy events immediately before/after buffers labeled "Travel" computed from current route estimate And total busy coverage equals travel (optional) + pre-buffer + showing + post-buffer + travel (optional) And all created events inherit the teammate’s privacy preference
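The busy-coverage arithmetic (optional travel + pre-buffer + showing + post-buffer + optional travel) can be illustrated as follows; the function and block labels are illustrative and do not model the privacy-dependent titles described above:

```python
from datetime import datetime, timedelta

def busy_blocks(start, end, pre_buf_min, post_buf_min, travel_min=0):
    """Expand a showing window into the ordered busy blocks written to
    the teammate's calendar: [travel?] buffer, showing, buffer, [travel?]."""
    blocks = []
    t = timedelta(minutes=travel_min)
    pre = timedelta(minutes=pre_buf_min)
    post = timedelta(minutes=post_buf_min)
    if travel_min:
        blocks.append(("Travel", start - pre - t, start - pre))
    blocks.append(("Buffer", start - pre, start))
    blocks.append(("Showing", start, end))
    blocks.append(("Buffer", end, end + post))
    if travel_min:
        blocks.append(("Travel", end + post, end + post + t))
    return blocks
```

The blocks are contiguous by construction, so total busy coverage always equals travel + pre-buffer + showing + post-buffer + travel, matching the criterion.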
Double-Booking Detection and Prevention on Event Write
Given TourEcho has just read free/busy and selected a slot When another external event is added that overlaps the slot before TourEcho writes Then TourEcho uses provider concurrency controls (ETag/If-Match or change key) to detect the conflict And TourEcho does not create a double-booked event And TourEcho re-attempts assignment to the next eligible on-duty teammate per coverage rules and records the reroute in the audit log And the requesting agent is notified that the showing was reassigned due to a conflict
Near-Real-Time Webhook Sync from External Calendars to TourEcho
Given TourEcho has subscribed webhooks for Google, Outlook, and iCloud-integrated accounts When a user moves, updates, or deletes a TourEcho-managed event in the external calendar Then TourEcho receives and processes the webhook and updates the internal showing state within 120 seconds p95 And the corresponding buffers/travel blocks are adjusted or removed to mirror the external change And TourEcho preserves the internal-to-external event mapping and records the source-of-truth as "External" for that change And if the webhook delivery fails, TourEcho retries per backoff policy and reconciles on next full sync
Outbound Update Propagation from TourEcho to External Calendars
Given a confirmed showing managed by TourEcho with an associated external event ID When the showing is rescheduled, reassigned, or canceled in TourEcho Then the external calendar event (and any buffer/travel blocks) are updated or deleted to match within 60 seconds p95 And the external event keeps a stable event ID (or provider-specific change key) across updates And the event’s time zone and visibility settings remain consistent with teammate preferences And failures are retried with exponential backoff and surfaced in an operations dashboard
Time Zone and Daylight Saving Accuracy Across Providers
Given a teammate, listing, and requester may be in different time zones And a showing occurs on a Daylight Saving transition week When TourEcho creates or updates the event on Google, Outlook, or iCloud Then the external event start/end reflect the intended local time at the listing location And the teammate’s calendar displays the correct converted local time with no ±1 hour offset And iCal/ICS data includes the proper TZID and recurrence/exception rules as needed And automated tests verify representative zones (e.g., America/New_York, America/Denver, Europe/Berlin) and DST boundaries
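The pin-to-listing-wall-time rule can be sketched with Python's `zoneinfo` (the helper name is illustrative; a real implementation would also carry the proper TZID into the ICS payload, which this sketch omits):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def listing_local_to_viewer(naive_local, listing_tz, viewer_tz):
    """Anchor the showing at the listing's local wall-clock time,
    then convert for display on the viewer's calendar."""
    start = naive_local.replace(tzinfo=ZoneInfo(listing_tz))
    return start.astimezone(ZoneInfo(viewer_tz))
```

The interesting case is the window where US zones have already sprung forward but European zones have not (March 9–29, 2025): a 10:00 New York showing displays at 15:00 in Berlin that week instead of the usual 16:00, and a correct implementation must reproduce exactly that, not apply a fixed offset.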
Fallback & Escalation Matrix
"As a coordinator, I want a defined escalation path when coverage isn’t available so that requests are resolved quickly without manual scrambling."
Description

When no eligible teammate is available or acceptance SLAs expire, execute a configurable escalation path: escalate to on‑call broker, propose alternative times to the buyer’s agent, request one‑time policy exceptions, or prompt reschedule. Preserve seller experience by avoiding gaps in availability and ensure requests do not stall.

Acceptance Criteria
Immediate Escalation When No Eligible Teammate
Given the requested time conflicts with all team members’ quiet hours/buffers and an on-call broker is configured When the buyer’s agent submits a showing request Then the system assigns the request to the on-call broker as step 1 within 5 seconds And sends the on-call broker an actionable notification (push/email/SMS) with Accept/Decline links And marks the booking as Pending-Covered and shows “Awaiting broker confirmation” to the buyer’s agent And records an audit entry with timestamp, actor, and step name
SLA Expiry Triggers Escalation
Given the acceptance SLA for the current assignee is 15 minutes and the request is Pending-Covered When the SLA expires without acceptance Then the system executes the next configured escalation step within 60 seconds And notifies the next assignee/party per matrix settings And appends an audit log entry for the SLA breach and step transition And the buyer’s agent continues to see the request as Pending-Covered
Propose Alternative Times to Buyer’s Agent
Given the escalation step is “Propose alternatives” and no eligible coverage exists for the requested time When the step executes Then the system sends the buyer’s agent the next 3 available slots within 48 hours that satisfy buffers and quiet hours And provides 1‑click selection that confirms the chosen slot instantly And if no selection is made within 10 minutes, then escalate to the next step And log the outreach and any selections with timestamps
One-Time Policy Exception Request
Given the requested time violates the listing’s quiet hours and the matrix step is “Request one‑time policy exception” When the step executes Then the system sends an approval request to the listing agent and on‑call broker with a 10‑minute SLA And if approved, the showing is confirmed immediately and the exception is recorded with scope, approver, and expiry And if declined or the SLA expires, then escalate to the next step And the buyer’s agent is notified of the outcome
Final Reschedule Prompt After Escalations Exhausted
Given all configured escalation steps have executed without a successful acceptance or exception When the final step is reached Then the system sends the buyer’s agent a reschedule prompt with 5 alternative time options and a free‑text proposal field And marks the original request as Expired if no action is taken within 24 hours And notifies the seller of the status without suggesting lack of coverage And logs closure with reason “Escalation exhausted”
Configurable Matrix Order, Enablement, and Safe Changes
Given an admin configures an escalation matrix with an ordered list of steps and per‑step SLAs When a new showing request is created Then the system snapshots the current configuration for that request’s lifecycle And steps execute strictly in configured order, skipping any disabled steps And configuration changes apply only to requests created after the change And all step transitions and notifications are timestamped and visible on a single timeline
Preserve Seller Experience Throughout Escalation
Given a showing request is in escalation When any step is executing or waiting for an SLA Then the seller portal displays the request as Covered/In Progress (not Uncovered) And external availability does not show gaps for the listing And the buyer’s agent receives at most one notification per step, avoiding duplicates And the requested time holds a tentative slot to prevent double‑booking until confirmed or expired
Admin Configuration & Policy Management
"As an operations admin, I want an easy way to configure coverage policies and schedules so that Team Rebalance aligns with our team structure and wellness guidelines."
Description

Admin UI and APIs to manage quiet hours, buffers, on‑duty schedules, team rosters, coverage rules, auto‑accept policies, and escalation settings. Support role‑based permissions, templates by team or office, per‑listing overrides, and audit of configuration changes to ensure governance and ease of rollout.

Acceptance Criteria
RBAC Enforcement for Admin Policy Changes
- Given a user role, When attempting to create/update/delete any policy object (quiet hours, buffers, schedules, coverage rules, auto-accept, escalation), Then only users with Admin or Broker Owner permissions succeed; all others receive 403 Forbidden with error code RBAC_001.
- Given a user with granular permission Manage Team Templates but not Manage Office Templates, When editing a team template, Then the change succeeds; When editing an office template, Then the request is blocked with 403 RBAC_002.
- Given an unauthenticated API request, When calling any admin mutation endpoint, Then the response is 401 Unauthorized.
Quiet Hours and Buffers Policy Templates
- Given an Admin defines a template at team or office scope with quietHours (days, start/end, timezone) and buffers (pre/post minutes), When saving, Then values are validated: minutes 0–180, start != end unless 24h, timezone valid, no overlapping ranges.
- Given a new listing created under a scope with an active template, When the listing is saved, Then the template’s values are applied as the listing defaults.
- Given a template update with applyForward=true, When saved, Then listings inheriting this template and without overrides adopt the new values within 60 seconds.
- Given a template update with applyForward=false, Then only newly created listings adopt the new values.
- Given a template is versioned, When rollback to version N is requested, Then version N becomes current and version number increments.
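The stated validation rules might be sketched like this (a simplified model: quiet-hour ranges are minutes-from-midnight, 24h coverage is assumed to be modeled explicitly rather than as start == end, midnight-wrapping ranges would need splitting into two same-day spans before the overlap check, and the error codes are illustrative):

```python
from zoneinfo import ZoneInfo

def validate_template(quiet_ranges, pre_buf, post_buf, tz):
    """Return a list of error codes; an empty list means valid.

    quiet_ranges: (start, end) pairs in minutes-from-midnight.
    pre_buf / post_buf: buffer minutes, must be 0-180.
    tz: IANA timezone name.
    """
    errors = []
    if not (0 <= pre_buf <= 180 and 0 <= post_buf <= 180):
        errors.append("BUFFER_RANGE")
    try:
        ZoneInfo(tz)
    except Exception:
        errors.append("TZ_INVALID")
    for start, end in quiet_ranges:
        if start == end:
            errors.append("EMPTY_RANGE")  # start != end unless explicit 24h
    # Overlap check for same-day ranges only (wrapping ranges split first).
    spans = sorted(r for r in quiet_ranges if r[0] < r[1])
    for (s1, e1), (s2, e2) in zip(spans, spans[1:]):
        if s2 < e1:
            errors.append("RANGE_OVERLAP")
    return errors
```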
On-Duty Schedules and Coverage Rules Configuration
- Given an Admin creates on-duty shifts (member, days, start/end, timezone, exceptions), When saving, Then overlaps for the same member are rejected with 400 SCHEDULE_OLAP; DST transitions are normalized to the defined timezone.
- Given coverage rules define eligible role groups and assignment strategy (round-robin, least-recent), When saving, Then invalid combinations (e.g., empty eligible set) are rejected with 400 RULES_001.
- Given a timeslot where the requester’s quiet hours would be violated, When simulating assignment, Then the result routes to an eligible on-duty teammate per rule and includes rationale.
- Given a schedule/rule set, When running the policy simulator for a specified timestamp, Then the system returns a deterministic assignee preview and rule evaluation trace.
Per-Listing Overrides Precedence and Reversion
- Given a listing inheriting from a team template, When an override is set for quiet hours or buffers, Then the listing’s effective policy uses the override and not the template.
- Given overridden fields exist, When the template updates, Then overridden fields on the listing remain unchanged.
- Given a listing with overrides, When an Admin selects Revert to Template for a field, Then the override is removed and the effective value reflects the current template.
- Given a listing with mixed overrides and inherited values, When viewing the Effective Policy UI, Then each field is labeled with its source: Office Template, Team Template, Listing Override.
Auto-Accept and Escalation Policies
- Given an Admin defines auto-accept rules (time windows, buffer compliance, max/day, lead source), When saving, Then conflicting rules (e.g., max/day < 0) are rejected with 400 POLICY_001.
- Given a booking request that satisfies auto-accept rules and coverage, When submitted, Then the booking status is Accepted and no human action is required.
- Given a booking request that does not satisfy auto-accept rules, When submitted, Then the status is Pending and the escalation timers start per policy.
- Given escalation tiers (5m teammate, 15m manager), When each timer expires without acceptance, Then notifications are sent to the next tier and assignment escalates; the audit includes tier transitions.
- Given a test request in the simulator, When evaluated, Then the outcome indicates Auto-accept or Escalate with the specific failing/passing rule IDs.
Admin Configuration APIs: CRUD, Validation, Idempotency
- Given versioned REST endpoints with OpenAPI, When a client submits a well-formed POST/PUT/PATCH, Then schema validation passes and the resource is created/updated; invalid payloads return 400 with machine-readable codes.
- Given OAuth2 access tokens with scopes, When a token lacks the required scope, Then the API returns 403 and no mutation occurs.
- Given an Idempotency-Key header on a mutation, When the same key is reused within 24h with an identical payload, Then the API returns the original response without creating duplicates; a mismatched payload returns 409 IDEMPOTENCY_MISMATCH.
- Given a resource with ETag X, When a client sends If-Match Y != X on update, Then the API returns 412 Precondition Failed and no change is applied.
- Given a successful mutation, When reading effective policy, Then the change is visible via GET endpoints within 60 seconds.
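The Idempotency-Key semantics above (replay within the window returns the original response; a mismatched payload yields 409 IDEMPOTENCY_MISMATCH) can be sketched as follows; the dict is an in-memory stand-in for what would be a database table in practice:

```python
import hashlib
import json
import time

class IdempotencyStore:
    """Replay-safe mutation handling keyed by the Idempotency-Key header."""

    def __init__(self, window_seconds=24 * 3600):
        self.window = window_seconds
        self._entries = {}  # key -> (payload_hash, response, stored_at)

    @staticmethod
    def _hash(payload):
        # Canonical JSON so semantically identical payloads hash equally.
        return hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()

    def execute(self, key, payload, handler, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(key)
        if entry and now - entry[2] < self.window:
            if entry[0] != self._hash(payload):
                return 409, {"error": "IDEMPOTENCY_MISMATCH"}
            return entry[1]            # replay: original (status, body)
        response = handler(payload)    # perform the mutation once
        self._entries[key] = (self._hash(payload), response, now)
        return response
```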
Audit Logging and Change History
- Given any create/update/delete of policies/templates/schedules/rules, When executed, Then an immutable audit event is recorded with actorId, role, tenant/team/listing IDs, timestamp (UTC), entity type/id, before/after values, and correlationId.
- Given audit events exist, When an Admin filters by entity id, actor, date range, or action, Then matching events are returned sorted by timestamp; exports are available as CSV and JSON.
- Given an entity’s history view, When opened, Then a field-level diff is displayed for each change with the reason (if provided) and links to related entities.
- Given a previous version is selected, When Restore is executed, Then a new version is created reflecting the previous state; the audit log records the restore action with a reference to the source version.
- Given a retention policy of 24 months, When the boundary is reached, Then events are archived but remain queryable by superadmins; hard deletion is disallowed for non-superadmins.

Calendar Lock

Write quiet-hour holds and travel blocks to Google/Outlook/iCloud and read external conflicts back into TourEcho. Detect overlaps early, auto-suggest conflict‑free slots, and stop double‑booking across tools with a single, trusted source of truth.
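Early overlap detection and conflict-free slot suggestion both reduce to half-open interval checks. A sketch, assuming a 30-minute slot granularity (an assumption of this example, not a stated product rule):

```python
from datetime import datetime, timedelta

def overlaps(a_start, a_end, b_start, b_end):
    """Half-open interval overlap: back-to-back events do not conflict."""
    return a_start < b_end and b_start < a_end

def suggest_slots(busy, window_start, window_end, duration):
    """Walk the window and collect conflict-free candidate start times.

    busy: list of (start, end) datetimes merged from all connected
    calendars, quiet-hour holds, and travel blocks.
    """
    step = timedelta(minutes=30)   # assumed slot granularity
    slots = []
    t = window_start
    while t + duration <= window_end:
        if not any(overlaps(t, t + duration, s, e) for s, e in busy):
            slots.append(t)
        t += step
    return slots
```

Because `busy` merges external events with TourEcho's own holds, the same check serves both double-booking prevention and "auto-suggest conflict-free slots."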

Requirements

Calendar OAuth Connect
"As a listing agent, I want to securely connect my Google/Outlook/iCloud calendars to TourEcho so that the system can keep my availability in sync without exposing my private event details."
Description

Implement secure OAuth-based connections to Google Calendar, Microsoft 365/Outlook, and iCloud (CalDAV) to link agent accounts and establish least-privilege scopes for reading free/busy and creating/updating TourEcho hold events. Support multi-calendar selection per account, granular consent, token encryption and rotation, refresh handling, unlink/relink flows, and detection of disconnected or expired tokens. Provide UI to choose which calendars are read vs. written, and store user preferences per profile. Ensure SOC2-ready logging, auditability, and compliance with provider policies and rate limits.

Acceptance Criteria
Google/Microsoft OAuth Connection with Least-Privilege Scopes
- Given an authenticated TourEcho user, When they initiate a connection to Google or Microsoft 365, Then the consent request includes only scopes required to read free/busy and create/update/delete TourEcho hold events, and excludes any unrelated scopes (e.g., mail, contacts, files).
- Given the user successfully grants consent, When the provider redirects back to TourEcho, Then tokens are stored encrypted at rest using KMS-managed keys, the connection status displays Connected within 5 seconds, and the provider account email and last-verified timestamp are shown.
- Given the user denies consent or an OAuth error occurs, When the redirect returns, Then no token is persisted, the status remains Not Connected, and the user sees a retriable error message with a Re-try Connect action.
- Given a successful connection, When the system inspects granted scopes, Then it validates they match the least-privilege set and, if broader, prompts the user to re-consent with reduced scopes before enabling sync.
- Given a successful connection, When the user signs out of TourEcho, Then the provider connection persists on their profile and is available upon next sign-in.
iCloud CalDAV Connection with Secure Token Handling
- Given an authenticated user, When they choose iCloud and supply an app-specific password, Then the system connects via CalDAV, discovers available calendars, and lists them within 10 seconds.
- Given valid credentials, When the connection is saved, Then only an encrypted credential/token is stored; the raw password is never logged, retrievable, or displayed.
- Given invalid credentials, When the connection attempt is made, Then the user sees an Invalid iCloud app-specific password error and no credential is stored.
- Given a previously connected iCloud account, When the user revokes the app-specific password at Apple, Then the next sync marks the connection Disconnected within 5 minutes and prompts the user to relink.
- Given the iCloud connection is active, Then only CalDAV endpoints are used; no contacts, mail, or files access occurs.
Multi-Calendar Selection and Read/Write Preferences
- Given a connected calendar account, When the user opens Calendar Preferences, Then all calendars for that account are listed with independent toggles for Read (free/busy) and Write (holds).
- Given the user updates selections, When they click Save, Then preferences persist to their profile and are applied to scheduling within 2 minutes.
- Given no Write calendars are selected, Then TourEcho does not attempt to create hold events and displays a non-blocking reminder to choose a Write calendar.
- Given a calendar is deselected for Read or Write, Then subsequent syncs exclude it and no new writes occur, while existing mappings remain intact.
- Given multiple accounts are connected, Then preferences are managed and persisted per account separately.
Free/Busy Sync and External Conflict Detection
- Given at least one Read calendar is selected, When availability is requested for a 14-day window, Then TourEcho fetches free/busy across all selected calendars and returns conflict-free time suggestions within 2 seconds under normal load.
- Given a new external event is added to a Read calendar, When the next sync runs, Then conflicting times are excluded from suggestions within 5 minutes.
- Given provider rate limits or transient errors (e.g., HTTP 429/5xx), When fetching free/busy, Then the system applies exponential backoff, retries up to 3 times, and falls back to cached data no older than 10 minutes with a non-fatal warning.
- Given no calendars are connected, When availability is requested, Then the system indicates calendar access is required and does not generate suggestions.
- Given a previously connected account is revoked at the provider, When free/busy fetch fails with an invalid/expired token, Then the connection is marked Disconnected within 5 minutes and the user is prompted to relink.
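The retry-then-cache behavior above (up to 3 retries with exponential backoff, then stale data no older than 10 minutes with a non-fatal warning) can be sketched as a small wrapper. The function names, the `fetch`/`cache_get` callables, and the injected `sleep` are illustrative assumptions for the sketch, not TourEcho's real interfaces.

```python
import time

class TransientProviderError(Exception):
    """Stand-in for HTTP 429/5xx responses from a calendar provider."""

def fetch_free_busy_with_retry(fetch, cache_get, max_retries=3,
                               base_delay_s=1.0, max_cache_age_s=600,
                               sleep=time.sleep):
    """Retry transient errors with exponential backoff; on exhaustion, fall
    back to cached free/busy data no older than max_cache_age_s.
    Returns (data, stale) where stale=True means a non-fatal warning applies."""
    delay = base_delay_s
    last_err = None
    for attempt in range(max_retries + 1):
        try:
            return fetch(), False          # fresh data, no warning
        except TransientProviderError as err:
            last_err = err
            if attempt < max_retries:
                sleep(delay)
                delay *= 2                 # 1s, 2s, 4s, ...
    cached = cache_get(max_age_s=max_cache_age_s)
    if cached is not None:
        return cached, True                # stale data, non-fatal warning
    raise last_err
```

Injecting `sleep` keeps the wrapper testable without real delays.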
Hold Event Create/Update/Delete Propagation
- Given at least one Write calendar is selected, When a user creates a TourEcho quiet-hour hold or travel block, Then an external event is created on each selected Write calendar with title prefixed "TourEcho Hold", status Busy, and the external event ID is stored in TourEcho.
- Given a hold is updated in TourEcho (time/title/notes), Then the corresponding external event(s) are patched to match within 30 seconds.
- Given a hold is deleted in TourEcho, Then the corresponding external event(s) are deleted within 30 seconds; if deletion fails due to permissions, the system reports the error and rolls back the local delete.
- Given an equivalent hold already exists externally (matching UID or extended property), When creating a new hold, Then the system performs an idempotent upsert and does not create duplicates.
- Given a Write calendar is later deselected, Then no further updates are pushed to that calendar and the user is warned that existing external events will remain unless removed manually.
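The idempotent-upsert requirement above can be illustrated with an in-memory stand-in for a provider calendar keyed by the dedup key (matching UID or extended property). The `upsert_hold` helper and the store shape are assumptions for the sketch; a real implementation would call the provider API and store the mapping in a database.

```python
import uuid

def upsert_hold(store: dict, dedup_key: str, payload: dict) -> str:
    """Create-or-update keyed by the TourEcho dedup key, so retries and
    duplicate deliveries never create a second external event.
    Returns the stable external event ID."""
    if dedup_key in store:
        store[dedup_key]["payload"] = payload      # update existing event
        return store[dedup_key]["external_id"]
    external_id = str(uuid.uuid4())                # simulate provider event ID
    store[dedup_key] = {"external_id": external_id, "payload": payload}
    return external_id
```

Calling it twice with the same key updates the event in place and returns the same external ID.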
Token Refresh, Rotation, and Expiry Detection
- Given a connected Google or Microsoft account, When an access token nears expiry, Then the system refreshes it silently using the refresh token before expiry without user intervention.
- Given a refresh attempt fails due to a revoked or expired refresh token, Then the connection status changes to Disconnected within 5 minutes and the user receives an in-app and email notification with a Relink action.
- Given tokens are stored, Then they are encrypted at rest with KMS-managed keys, keys are rotated at least every 90 days, and decryption access is restricted to the calendar service.
- Given deployments or restarts occur, Then tokens remain valid and are never written to logs in plaintext in any environment.
- Given a successful token refresh, Then the last-verified timestamp updates and an audit log entry is recorded.
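The "refresh before expiry" trigger can be reduced to a single clock-skew check; the 5-minute skew value here is an illustrative assumption, not a number the spec fixes.

```python
from datetime import datetime, timedelta, timezone

def needs_refresh(expires_at: datetime,
                  skew: timedelta = timedelta(minutes=5)) -> bool:
    """True once the access token is within `skew` of expiry, so the
    refresh happens silently before the token actually lapses."""
    return datetime.now(timezone.utc) >= expires_at - skew
```

A background worker would poll this per connection and call the provider's token endpoint when it returns True.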
Audit Logging and Policy Compliance
- Given any of the following occur: connect, consent change, calendar selection change, unlink, relink, token refresh failure, read/write error, Then an immutable audit event is recorded with timestamp, actor, provider, affected calendars, outcome, and correlation ID; no secrets or token values are logged.
- Given audit logs exist, When queried by an authorized admin, Then events can be filtered by user, provider, action, and date range, and exported to CSV.
- Given provider policy requirements and rate limits, Then API calls include required headers, adhere to published rate limits, and exceeding limits triggers backoff and alerts without unbounded retries.
- Given a 365-day data retention policy, Then audit events are retained for at least 365 days and purged thereafter with a logged purge event.
- Given a user account is deleted from TourEcho, Then all associated tokens are revoked/removed within 15 minutes and an audit entry records the revocation.
Bi-Directional Sync Engine
"As an agent, I want TourEcho to keep my holds and external conflicts synchronized in near real time so that I always have a single, accurate source of truth across tools."
Description

Deliver an idempotent, bi-directional synchronization service that writes quiet-hour holds and travel blocks from TourEcho to external calendars and reads external busy/free intervals back into TourEcho. Implement event mapping with stable external IDs, recurrence support, update/delete propagation, and conflict-safe merges. Use push mechanisms where available (Google watch channels, Microsoft Graph subscriptions) and efficient polling for iCloud, with delta sync and backoff for rate limits. Normalize data to UTC with timezone fidelity, handle DST transitions, and guarantee exactly-once semantics via deduplication keys and transactional outboxes. Provide configurable sync cadence and immediate sync on material user actions.
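The exactly-once guarantee described above (deduplication keys plus a transactional outbox) can be sketched with a minimal outbox consumer. The `process_outbox` helper, the job shape, and the in-memory `applied_keys` set are illustrative assumptions; a real implementation would record the dedup key and the state mutation in a single database transaction.

```python
def process_outbox(jobs, applied_keys: set, apply):
    """Apply each outbox job at most once: a job whose dedup key was already
    applied is skipped, so duplicate deliveries and retries cause no second
    external write or internal mutation."""
    applied = []
    for job in jobs:
        key = job["dedup_key"]
        if key in applied_keys:
            continue                      # duplicate delivery: skip
        apply(job)                        # the (simulated) side effect
        applied_keys.add(key)             # atomically with apply() in reality
        applied.append(key)
    return applied
```

Redelivering the same batch is then a no-op, which is the property the acceptance criteria below test for.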

Acceptance Criteria
Idempotent Quiet-Hour Write to External Calendars
- Given a user has connected Google, Microsoft 365/Outlook, and iCloud calendars and has no external event bearing the TourEcho dedup key for a new quiet-hour hold
- When the user creates a quiet-hour hold in TourEcho for a specific time range and sync is triggered immediately
- Then exactly one external event is created per connected provider with start/end that exactly match the hold when converted to UTC, preserving the original timezone
- And the external event stores a stable external ID and a TourEcho deduplication key in the provider’s custom property mechanism
- And re-sending the same create payload up to 3 times (including after transient 5xx/timeout retries) produces no duplicate external events and preserves the same external ID
- And the operation is recorded as successful with a single outbox job completion and zero duplicate writes
Bi-Directional Update Propagation with Stable Mapping
- Given a TourEcho hold is mapped to an external event via stored external ID and dedup key
- When the hold’s time window is updated in TourEcho
- Then the corresponding external event is updated within 30 seconds with unchanged external ID and advanced provider ETag/revision
- And the TourEcho record stores the new external revision/version
- Given the external event’s time is updated in the provider
- When the sync engine receives a push notification or next poll executes
- Then the TourEcho hold reflects the external change within 60 seconds and the mapping remains intact (no new entity created)
- And if both sides change within a 2-minute window, the engine applies last-write-wins based on authoritative lastModified timestamps and produces a single consistent record (no duplicates)
Recurrence Series and Exception Handling (with DST Fidelity)
- Given a weekly quiet-hour series is created in TourEcho for 8 occurrences at a fixed local time
- When it is synced to external calendars
- Then a single recurring series is created per provider with an RRULE (or equivalent) and instance start/end match the intended local time across DST transitions (local time preserved, UTC offset adjusted)
- And instances within the next 90 days are materialized in TourEcho within 60 seconds
- Given one instance is skipped in TourEcho
- When sync runs
- Then the single external occurrence is canceled without affecting other instances
- Given a single external occurrence is moved by 30 minutes
- When sync runs
- Then the corresponding instance in TourEcho is moved by 30 minutes while other instances remain unchanged
- And editing the series end date in TourEcho updates the external series while preserving existing exceptions
Deletion and Tombstone Behavior
- Given a TourEcho hold mapped to an external event exists
- When the hold is deleted in TourEcho
- Then the mapped external event is deleted/canceled within 30 seconds and is not recreated by subsequent syncs
- And a tombstone is recorded in TourEcho for at least 30 days
- Given the external event is deleted in the provider
- When sync runs
- Then the mapped TourEcho hold is marked deleted within 60 seconds and a tombstone is recorded, preventing recreation during the retention window
- And repeated delete requests (retries) are idempotent and do not produce errors or recreate events
Push Channels, Polling, and Backoff Policies
- Given Google and Microsoft accounts are connected
- When subscriptions/watch channels are established
- Then each subscription is active and auto-renewed at least 10 minutes before expiry, and lost channels are detected and re-subscribed within 2 minutes
- And an external change triggers a push that initiates a delta sync within 10 seconds of receipt
- Given an iCloud account is connected
- When no push is available
- Then polling occurs at the configured cadence (default 5 minutes) and escalates to 30-second polling for 10 minutes after a material user action in TourEcho, then returns to baseline cadence
- Given a provider returns 429 or 503
- When the engine retries
- Then exponential backoff with jitter is applied starting at 1 minute, doubling up to a max of 15 minutes, with no more than one in-flight retry per calendar, and normal cadence resumes after a successful call
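The backoff policy above (start at 1 minute, double, cap at 15 minutes, with jitter) maps to a one-line delay formula. This sketch uses "full jitter" (a uniform draw up to the exponential ceiling) as one reasonable interpretation; the spec does not fix the jitter scheme, so treat the weighting as an assumption.

```python
import random

def backoff_delay(attempt: int, base_s: float = 60.0, cap_s: float = 900.0,
                  rng=random.random) -> float:
    """Exponential backoff with full jitter: ceiling doubles per attempt
    from 1 minute, capped at 15 minutes; the actual delay is a uniform
    draw in [0, ceiling)."""
    ceiling = min(cap_s, base_s * (2 ** attempt))
    return ceiling * rng()
```

Passing `rng=lambda: 1.0` makes the worst-case delay inspectable in tests.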
Delta Sync and Exactly-Once Semantics via Outbox and Dedup Keys
- Given an initial full sync has completed and a valid delta token is stored
- When subsequent syncs run
- Then only changes since the last delta token are fetched and applied, with unchanged items skipped
- And if a delta token is expired or invalid, the engine performs a bounded full sync (last 90 days) and issues a fresh token without creating duplicates
- Given duplicate notifications are delivered for the same change up to 3 times
- When the outbox processes them
- Then the deduplication key ensures the change is applied exactly once, resulting in at most one external API write and one state mutation in TourEcho
- And transient failures that trigger job retries do not create duplicate external events or internal records
Conflict-Safe Merge and Overlap Detection for Availability
- Given TourEcho has scheduled showings and external calendars contain overlapping busy events
- When sync runs
- Then overlapping intervals are marked unavailable in TourEcho within 60 seconds and affected showings are flagged as conflicted
- And the availability resolver excludes the union of overlapping intervals when proposing slots (no double-booking)
- Given concurrent edits from both sides affect different fields (e.g., external notes, TourEcho time)
- When merge occurs
- Then field-level precedence preserves both changes using per-field lastModified where available; if unavailable, time/date fields take precedence over non-time metadata, and no duplicate entities are created
Quiet Hours & Travel Blocks Management
"As a broker-owner, I want my team to set and manage quiet hours and travel buffers centrally and per listing so that showings don’t get scheduled during times we can’t accommodate."
Description

Create in-app tools to define global and per-listing quiet hours, recurring rules (e.g., weekdays 8–10am), date exceptions, and one-off blackout windows. Allow agents to add travel blocks with origin/destination, automatic buffers, and labels that clearly identify TourEcho-created events on external calendars. Support color/category tagging, per-listing overrides, and quick templates. Ensure accurate timezone handling for mobile agents, preview impacts before saving, and provide bulk apply to multiple listings. All holds should be written outward with a consistent naming convention and safely updated/removed when rules change.

Acceptance Criteria
Real-Time Conflict Guard
"As a listing agent, I want TourEcho to block double-bookings and warn me before I confirm a conflicting time so that I don’t disappoint buyers or sellers."
Description

Validate every new or modified showing against internal showings, external busy events, quiet hours, and travel holds, surfacing conflicts instantly in the UI and API. Provide clear, actionable messages, one-click alternative suggestions, and an override path with explicit confirmation and audit tagging to prevent accidental double-booking. Re-check conflicts on external calendar changes and auto-cancel tentative holds if required. Implement concurrency controls and short-lived optimistic locks to avoid race conditions during high-traffic scheduling windows.

Acceptance Criteria
Real-Time Conflict Check on Create/Modify
- Given a user submits a create or modify showing request (start, end, listing, agent) via UI or API
- When validation runs before persisting any changes
- Then the system checks for overlaps against: (a) internal TourEcho showings, (b) external calendar busy/OoO/working-elsewhere events, (c) quiet-hour holds, and (d) travel blocks for the assigned agent(s)
- And partial overlaps (start within, end within, or enveloping) are treated as conflicts
- And adjacent events with a configured buffer (default 15 minutes) are treated as conflicts if the buffer would be violated
- And no conflicting showing or hold is persisted
- And 95th percentile validation latency is ≤ 1000 ms for UI requests and ≤ 600 ms for API requests
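The overlap rules above (partial, contained, or enveloping intervals, plus buffer-violating adjacency) collapse into one interval test. The helper name and the 15-minute default are taken from the criteria; the rest of the sketch is an illustrative assumption.

```python
from datetime import datetime, timedelta

def conflicts(start_a: datetime, end_a: datetime,
              start_b: datetime, end_b: datetime,
              buffer: timedelta = timedelta(minutes=15)) -> bool:
    """True when two intervals overlap in any way (start within, end within,
    or enveloping) or sit closer together than the configured buffer."""
    return start_a < end_b + buffer and start_b < end_a + buffer
```

Back-to-back events (one ending exactly when the other starts) conflict under the default buffer, matching the adjacency rule.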
Conflict Messaging in UI and API
- Given a conflict is detected during showing validation
- When the UI renders the result
- Then the user sees an inline error with: conflict type (Internal Showing | External Busy | Quiet Hours | Travel Block), source calendar/display name, start–end times in the user’s timezone, and actions: View Alternatives and Override
- And the message includes a unique error code and the conflicting event id(s) where applicable
- And the API responds with HTTP 409 and a JSON body containing: code, type, source, start, end, timezone, conflictEventIds[], and actions{suggestions:boolean,override:boolean}
- And messages are consistent between UI and API for the same conflict id
One-Click Alternative Slot Suggestions
- Given a conflict is returned for a requested slot
- When the user clicks View Alternatives or the client calls the suggestions endpoint
- Then the system returns ≥ 5 conflict-free slots within the next 14 days, sorted by soonest start time
- And all suggested slots are pre-validated against internal showings, external busy events, quiet hours, travel blocks, and configured buffers
- And suggestions honor the listing’s timezone and agent’s working hours window
- And 95th percentile time-to-suggestions is ≤ 1500 ms
- And selecting a suggestion auto-fills and succeeds without additional conflicts (barring new external changes during submission)
Override with Explicit Confirmation and Audit Tagging
- Given a conflict is present
- When the user chooses Override
- Then a confirmation modal summarizes each conflict (type, source, start–end) and requires: (a) selecting a reason (list or Other), and (b) checking an acknowledgment checkbox
- And the booking does not proceed unless both fields are provided
- And on confirm, the showing is created and tagged override=true
- And an audit log entry is written with: timestamp, userId, showingId, conflictIds[], reason, ip, source (UI/API), and affected calendar providers
- And the API requires override=true and reason to succeed; otherwise returns HTTP 400 with a validation error
Optimistic Locking and Concurrency Control
- Given multiple concurrent create/modify requests target the same listing and overlapping time range
- When requests are processed
- Then a short-lived optimistic lock (TTL 5s) is acquired per (listingId, normalized time bucket) at validation start
- And exactly one request may proceed to persistence; others fail with HTTP 409 and lockId in response
- And locks are released on commit or TTL expiry, whichever comes first
- And under a load test of 20 parallel requests for the exact slot, 1 succeeds and 19 return conflict, with no duplicate showings persisted
- And lock acquisition/cleanup is logged with correlation ids
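The lock semantics above (per-(listingId, time bucket), 5-second TTL, released on commit or expiry) can be sketched with an in-memory class. The class name and injected clock are assumptions for testability; production would back this with Redis or a database row with the same try-acquire/expire semantics.

```python
import time

class SlotLocks:
    """Short-lived optimistic locks keyed by (listing_id, time_bucket);
    an in-memory sketch of the behavior described above."""

    def __init__(self, ttl_s: float = 5.0, clock=time.monotonic):
        self._ttl = ttl_s
        self._clock = clock
        self._locks: dict[tuple, float] = {}   # key -> expiry time

    def try_acquire(self, listing_id: str, bucket: str) -> bool:
        now = self._clock()
        key = (listing_id, bucket)
        expiry = self._locks.get(key)
        if expiry is not None and expiry > now:
            return False                       # another request holds the lock
        self._locks[key] = now + self._ttl     # acquire (or take over expired)
        return True

    def release(self, listing_id: str, bucket: str) -> None:
        self._locks.pop((listing_id, bucket), None)   # released on commit
```

Under contention, exactly one caller gets True; the rest map to the HTTP 409 path.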
External Calendar Change Recheck and Tentative Auto-Cancel
- Given an external calendar webhook/poll indicates a change to events overlapping a TourEcho showing or tentative hold
- When the change is received
- Then impacted records are revalidated within 60 seconds at p95
- And tentative holds that become conflicted are auto-canceled with status=canceled_conflict and reason including provider and event id
- And the agent receives a notification that the hold was canceled with a link to suggestions
- And confirmed showings that become conflicted are not auto-canceled; they are flagged at_risk with a visible alert and override guidance
- And all actions are recorded in the audit log with the external change id
Quiet Hours and Travel Blocks Enforcement
- Given quiet hours and travel blocks are configured for an agent
- When a showing request overlaps any quiet hour or travel block (including pre/post travel buffer, default 15 minutes)
- Then the request is rejected as a conflict of type Quiet Hours or Travel Block accordingly
- And quiet-hour holds and travel blocks are written to the connected provider calendar and visible there within 30 seconds at p95
- And time zone and daylight saving transitions are respected so that enforced windows match local times
- And adjacency within the configured buffer is also rejected
Smart Slot Suggestions
"As a showing coordinator, I want the system to propose the best available slots that fit everyone’s calendars and travel time so that I can schedule faster with fewer back-and-forth messages."
Description

Generate conflict-free time windows that respect quiet hours, external availability, listing constraints, and travel time between appointments. Integrate a routing API to compute realistic buffers based on traffic and location, honor user preferences (work hours, minimum gap, max daily showings), and return ranked suggestions both in UI and via API. Support multi-attendee constraints (agent + co-agent) and batch scenarios (suggest slots for multiple listings). Provide fallbacks when data is incomplete and adapt suggestions dynamically as calendars change.

Acceptance Criteria
Single Agent: Quiet Hours, External Conflicts, Listing Constraints, and Travel Buffers
- Given agent quiet hours 19:00–08:00 in the agent’s time zone and a requested date range, When suggestions are generated, Then no suggested slot starts or ends within quiet hours.
- Given external events from Google/Outlook/iCloud are synced, When suggestions are generated, Then no suggested slot overlaps any external event plus required buffer.
- Given listing constraints (blackout windows, earliest-notice lead time, allowed days), When suggestions are generated, Then all suggested slots comply with these constraints.
- Given locations for the prior appointment and the listing address and current traffic, When suggestions are generated, Then each slot includes a buffer >= routing API ETA + user minimum gap.
- Then at least one slot is returned when at least one feasible slot exists in the requested range; otherwise return an empty list with reason codes.
- Then all times are returned normalized to the listing’s local time zone and include UTC offsets.
Multi-Attendee Availability Merge (Agent + Co-Agent)
- Given primary agent and co-agent calendars (internal + external) and an identical requested date range, When suggestions are generated, Then every suggested slot is free for both attendees including buffers.
- When no common availability exists within the horizon, Then the response contains no slots and includes reason code "no_common_availability" and the earliest next day with potential availability.
- Then all-day holds on either calendar fully block that day from suggestions.
- Then each suggestion payload includes required_attendees_satisfied=true and attendee_ids for traceability.
Batch Suggestions for Multiple Listings with Routing Between Locations
- Given N listings with distinct addresses and a target window, When batch suggestions are requested for a tour of K showings, Then the system proposes sequences of K slots that minimize total travel time subject to constraints.
- Then each adjacent pair of slots includes a travel buffer >= routing API ETA + user minimum gap.
- Then per-day max showings is not exceeded for the user.
- When fewer than K feasible slots exist, Then return the maximal feasible K' < K with per-listing reason codes.
- Then results include an ordered itinerary with ETAs, total drive time, and start/end times per listing.
Ranking and Preference Adherence
- Given user work hours, minimum gap, and max daily showings preferences, When suggestions are generated, Then all returned slots comply with the max daily limit and include a gap >= the minimum.
- Then suggestions are sorted by a score in descending order; the score increases when travel time decreases (all else equal) and when slots fall within preferred work hours (all else equal).
- Then ties are broken by earlier start time.
- Then each suggestion includes score (0–100), within_work_hours flag, travel_minutes, and reason annotations used in ranking.
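One way to satisfy the ordering properties above is a weighted score plus a tie-breaking sort. The 70/30 weighting and the 120-minute travel cap below are illustrative assumptions; the criteria fix only the monotonicity (less travel and in-hours slots score higher) and the tie-break, not the formula.

```python
def score_slot(travel_minutes: float, within_work_hours: bool,
               max_travel: float = 120.0) -> float:
    """0-100 score: shorter travel and in-hours slots rank higher.
    Weighting is a hypothetical choice, not the spec's formula."""
    travel_component = 70.0 * max(0.0, 1.0 - travel_minutes / max_travel)
    hours_component = 30.0 if within_work_hours else 0.0
    return round(travel_component + hours_component, 1)

def rank(slots: list[dict]) -> list[dict]:
    # Descending score; ties broken by earlier start time.
    return sorted(slots, key=lambda s: (-s["score"], s["start"]))
```

Any formula with the same monotonicity would pass the criteria; this one just makes the properties concrete.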
API and UI Parity and Performance
- Given identical inputs (user, listings, date range, preferences), When suggestions are requested via UI and via API, Then both return the same ordered set of slots and identical metadata fields.
- Then the API responds within 1.5 seconds P95 for single-listing queries and 3.0 seconds P95 for batch (K<=5) under nominal load.
- Then the API response schema includes fields: start_iso, end_iso, listing_id, attendee_ids, buffer_minutes, score, reason_codes, route_summary.
- Then pagination is supported and ordering is deterministic across pages.
Fallback Behavior on Incomplete Data or Routing API Failure
- When the routing API errors or times out (>1.0s) for any leg, Then a fallback buffer is applied: max(user_min_gap, default_buffer_minutes=15) and the suggestion is annotated with reason code "routing_fallback" and confidence="low".
- When any location data is missing, Then distance is treated as unknown, travel buffer defaults to user_min_gap, and suggestions are allowed only if no other constraints are violated, with reason code "location_unknown".
- When external calendar sync is stale (>10 minutes), Then suggestions include a "calendar_stale" warning and auto-booking is disabled until a refresh completes.
- Then all fallback conditions are exposed in suggestion metadata and captured in logs for audit.
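The buffer fallback rules above reduce to a small selector; the `travel_buffer` helper and its signature are assumptions for the sketch, but the values and reason codes come straight from the criteria.

```python
from typing import Optional

DEFAULT_BUFFER_MINUTES = 15

def travel_buffer(eta_minutes: Optional[float], user_min_gap: int,
                  location_known: bool = True) -> tuple[int, list[str]]:
    """Pick a travel buffer with the documented fallbacks:
    - routing succeeded: ETA + the user's minimum gap
    - routing failed/timed out (eta None): max(user_min_gap, 15), "routing_fallback"
    - location unknown: user_min_gap only, "location_unknown"
    Returns (buffer_minutes, reason_codes)."""
    if not location_known:
        return user_min_gap, ["location_unknown"]
    if eta_minutes is None:
        return max(user_min_gap, DEFAULT_BUFFER_MINUTES), ["routing_fallback"]
    return int(eta_minutes) + user_min_gap, []
```

The reason codes surface in suggestion metadata so callers can flag low-confidence buffers.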
Real-Time Adaptation and Double-Booking Prevention
- When an external calendar event is added, updated, or deleted, Then suggestions reflect the change within 60 seconds or upon manual refresh, whichever comes first.
- When a user attempts to book a suggested slot, Then the system performs a final conflict check against all connected calendars; if a conflict exists, booking is blocked and a refreshed suggestion set is returned with reason code "final_conflict_detected".
- When a slot is booked in TourEcho, Then a hold is written to connected calendars within 10 seconds and the booked window is immediately removed from subsequent suggestions.
- Concurrent booking attempts for the same slot result in a single success; other attempts receive a "slot_unavailable" error.
Sync Health & Audit Trail
"As an operations lead, I want visibility into calendar sync status and a clear audit trail so that I can diagnose issues quickly and maintain trust with our agents."
Description

Expose a dashboard with per-user sync health (connected, degraded, disconnected), last/next sync timestamps, subscription status, and recent errors. Maintain an audit trail for all create/update/delete events with source attribution (TourEcho vs. External), before/after snapshots, and correlation IDs. Provide retry, reconnect, and resync actions, proactive alerts on failures, and exportable logs for support. Include privacy controls to restrict reading to free/busy only and redact external event details, with user-visible consent records and revocation flows.

Acceptance Criteria
Sync Health Dashboard: Status & Timestamps
- Given a connected calendar account with a successful last sync, When the dashboard loads, Then the user’s status shows "Connected", And "Last sync" equals the latest success end time in the user’s timezone, And "Next sync" displays a time later than now.
- Given the most recent sync completed with partial errors, When the dashboard loads, Then status shows "Degraded", And the Recent Errors panel lists up to the last 5 error codes with timestamps within the past 24h, And the "Retry" action is enabled.
- Given the account token is invalid or consent is revoked, When the dashboard loads, Then status shows "Disconnected", And the "Reconnect" action is visible, And "Next sync" displays "Not scheduled".
- Rule: Sync health values refresh within 10 seconds after any sync attempt finishes.
- Rule: Timestamps are displayed in the user’s selected timezone and in ISO-8601 when exported.
Audit Trail: CRUD Events with Source Attribution
- Given an event is created/updated/deleted by TourEcho, When the audit trail is queried, Then a record exists with fields: event_type ∈ {create, update, delete}, source = "TourEcho", provider ∈ {Google, Outlook, iCloud}, item_id, correlation_id, before_snapshot, after_snapshot, actor_user_id, occurred_at.
- Given an event is created/updated/deleted by an external provider, When the audit trail is queried, Then source = "External" and provider is populated accordingly with the same required fields.
- Rule: Audit records are immutable, time-ordered to the millisecond, and persisted within 2 seconds of the mutation.
- Rule: Query supports filters by user_id, time range, event_type, source, provider, and correlation_id; results are paginated and sortable by occurred_at.
- Rule: Snapshots redact fields per active privacy settings (e.g., titles/descriptions/locations when Free/Busy Only is enabled); deletes use null after_snapshot and preserved before_snapshot with redactions applied.
Retry, Reconnect, and Resync Controls
- Given a sync job failed, When the user clicks "Retry", Then a new attempt is enqueued within 5 seconds with the same correlation_id and incremented attempt number, And the dashboard shows status "Retrying" until completion.
- Given status is "Disconnected", When the user clicks "Reconnect", Then the provider OAuth flow starts, And on success status becomes "Connected" and a next sync is scheduled within 2 minutes, And an audit record of type reconnect is written.
- Given the user clicks "Resync", When confirmation is accepted, Then a full reconciliation runs (re-read of external events and re-write of holds/blocks as needed) without duplicating events, And progress (0–100%) is shown, And a resync audit record is written with correlation_id.
Proactive Alerts on Sync Failures
- Given a sync attempt ends with severity = error, When the error is recorded, Then an in-app alert banner is shown within 1 minute including provider, error summary, and next-step CTA, And an email is sent if the user has email alerts enabled.
- Rule: Repeated failures of the same error class within 30 minutes are deduplicated to a single alert with an incrementing occurrence count.
- Rule: When a subsequent sync succeeds, the open alert auto-dismisses within 1 minute and status reverts to the current health.
- Rule: Alerts respect user notification preferences and do not expose redacted content.
Exportable Logs for Support
- Given a support user selects a time range and user, When "Export Logs" is clicked, Then a downloadable file is generated within 60 seconds containing sync attempts, health changes, error records, and audit trail entries with correlation_ids.
- Rule: Export format is selectable as CSV or JSON; the file includes a schema header (CSV) or top-level schema_version (JSON).
- Rule: Sensitive fields are redacted per privacy settings; no full event details are exported when Free/Busy Only is enabled.
- Rule: The download URL is single-use, requires authentication, and expires within 24 hours; the export action itself is logged in the audit trail.
Privacy Controls: Free/Busy-Only and Redaction
- Given the user enables Free/Busy Only, When external calendars are read, Then only time ranges and availability are stored, And titles, descriptions, locations, and attendee details are neither stored nor displayed, And snapshots/exports show "[REDACTED]" for those fields.
- Given Free/Busy Only is enabled, When conflicts are shown in TourEcho, Then conflicting slots display as "Busy" without details while still preventing double-booking.
- Given the user toggles Free/Busy Only off, When the user reconsents to expanded scopes, Then subsequent reads may include full details; previously stored redacted data remains redacted.
User Consent Records and Revocation Flows
- Given a user connects a calendar, When authorization is granted, Then a consent record is stored with provider, granted scopes, terms_version, timestamp, and IP, And the record is visible in the user’s account page.
- Given a user revokes consent in TourEcho, When the user confirms, Then provider tokens are revoked, status becomes "Disconnected", next sync is unscheduled, and a revocation audit record with correlation_id is written.
- Given a user revokes from the provider portal, When TourEcho receives an invalid_grant/401, Then the account status flips to "Disconnected" within 5 minutes, an alert is created, and the user is prompted to reconnect.
- Rule: Consent history is immutable and exportable; each change is timestamped and attributable to an actor.

Guided Snaps

Step-by-step camera prompts suggest the best angles per room, auto-detect room type, and prevent duplicate shots. Framing overlays and quick tips help buyer agents capture what matters in seconds, producing consistent, high-signal visuals that make AI tagging faster and more accurate for listing teams.

Requirements

Room Type Auto-Detection
"As a buyer agent, I want the app to auto-detect the room type so that I can move quickly without manual tagging and keep my flow uninterrupted."
Description

Implement on-device computer vision to classify room type in real time from camera preview and/or captured photos (e.g., kitchen, bathroom, bedroom, living room, dining, hallway, exterior front/back, garage, laundry, office, balcony, closet). Provide confidence scoring, fallback to manual selection, and lightweight model size for sub-second inference on mid-range devices. Persist detected room_type, confidence, capture_step, and timestamp as metadata bound to the TourEcho session created via QR scan. Surface the detected type to drive downstream prompts and overlays, reducing manual tagging and increasing AI summarization accuracy and speed.

Acceptance Criteria
Real-time Preview Room Type Detection
- Given the Guided Snaps camera preview is active on a mid‑range reference device (Android: Pixel 5/Samsung A52; iOS: iPhone 11/SE 2020), When a supported room is in view, Then the first room_type prediction and confidence are displayed within 700 ms of preview start.
- Given the camera preview remains active, When the scene is relatively stable, Then room_type predictions refresh at least 2 times per second and the UI remains responsive (no frame drops >10% vs baseline preview FPS).
- Given a prediction is produced, Then the predicted label is always one of the supported classes: {kitchen, bathroom, bedroom, living_room, dining, hallway, exterior_front, exterior_back, garage, laundry, office, balcony, closet}.
- Given preview predictions, When the same label is produced for 3 consecutive predictions, Then the label is considered stable and is used to drive the current capture step until a different label is stable.
Confidence Scoring and Threshold-Driven Behavior
- Given a room_type prediction is produced, Then confidence is exposed as a floating-point value in [0.0, 1.0] with 2 decimal precision.
- Given confidence >= 0.70 and the label is stable for ≥500 ms, When no manual override is active, Then the label auto-selects for the current step and is surfaced to the user.
- Given all class confidences < 0.40 over a 1-second window, Then room_type is treated as unknown and a manual selection prompt is shown.
- Given confidence < 0.70, Then the UI shows a non-blocking “Choose room type” control without auto-selecting a label.
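The threshold-driven behavior above can be captured in a small decision function. The function name, argument shape, and action labels are assumptions for the sketch; the 0.70/0.40 thresholds come from the criteria.

```python
AUTO_SELECT_MIN = 0.70   # auto-select when stable at or above this
UNKNOWN_MAX = 0.40       # "unknown" when every class stays below this

def room_type_action(confidence: float, stable: bool,
                     window_max_confidence: float) -> str:
    """Map a prediction to UI behavior: 'unknown' when all class confidences
    stayed below 0.40 over the window, 'auto_select' at >=0.70 with a stable
    label, otherwise a non-blocking 'manual_prompt'."""
    if window_max_confidence < UNKNOWN_MAX:
        return "unknown"
    if confidence >= AUTO_SELECT_MIN and stable:
        return "auto_select"
    return "manual_prompt"
```

A manual override (per the next criterion) would simply bypass this function for the current capture.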
Manual Fallback and Override of Room Type
Given the user taps manual room type selection, When a label is chosen from the supported list, Then that label becomes the effective room_type with source=manual and confidence set to null for that capture. Given a manual override is active, When new model predictions arrive, Then they do not overwrite the manual selection for the current capture. Given a manual override is active, When the user toggles back to Auto, Then subsequent predictions can again set the effective room_type. Given a manual override changes the room_type, Then downstream overlays and tips update within 200 ms to reflect the overridden type.
Metadata Persistence to TourEcho Session
Given a valid TourEcho session exists from QR scan, When a photo is captured, Then metadata is persisted within 200 ms containing: session_id, asset_id, room_type, confidence, capture_step, timestamp (ISO8601 UTC), and source {auto_preview|auto_photo|manual}. Given no active TourEcho session, When a capture is attempted, Then the app blocks persistence and prompts to scan a QR or re-associate a session. Given metadata was saved, When the app is backgrounded or force-closed and reopened, Then the saved metadata is available and correctly associated to the original session_id. Given a saved asset_id, When queried from the review screen/API, Then the persisted fields match the values at capture time.
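A hypothetical shape for the persisted record, assuming a JSON-like payload; the field names follow the criteria above, while everything else (function name, error messages) is illustrative:

```python
from datetime import datetime, timezone

REQUIRED_FIELDS = {"session_id", "asset_id", "room_type", "confidence",
                   "capture_step", "timestamp", "source"}
SOURCES = {"auto_preview", "auto_photo", "manual"}

def build_capture_metadata(session_id, asset_id, room_type,
                           confidence, capture_step, source):
    """Assemble the per-capture record bound to the QR-created session.
    Manual selections carry confidence=None, per the criteria above."""
    if session_id is None:
        # No active session: block persistence and prompt for a QR scan.
        raise ValueError("no active TourEcho session")
    if source not in SOURCES:
        raise ValueError(f"unknown source: {source}")
    return {
        "session_id": session_id,
        "asset_id": asset_id,
        "room_type": room_type,
        "confidence": None if source == "manual" else round(confidence, 2),
        "capture_step": capture_step,
        "timestamp": datetime.now(timezone.utc).isoformat(),  # ISO 8601 UTC
        "source": source,
    }
```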
Downstream Prompts and Overlays Driven by Room Type
Given an effective room_type (auto or manual) is set, When the user proceeds within Guided Snaps, Then the framing overlay and quick tips correspond to the mapping for that room_type. Given the effective room_type changes (auto reclassification or manual override), Then the overlay and tips update within 200 ms. Given room_type=unknown, Then a generic overlay is shown and the manual selection control is prominently available. Given each supported class, When forced via test harness, Then the correct overlay/tips variant renders without error for that class.
On-Device Model Size and Performance Targets
Given the app bundle includes the CV model, Then the combined on-device model assets (weights + labels + support) are ≤ 20 MB per platform build. Given the user opens the Guided Snaps camera, Then model cold-start (load + first inference) completes within 400 ms on mid-range reference devices (Android: Pixel 5/Samsung A52; iOS: iPhone 11/SE 2020). Given continuous preview is running, Then average per-inference latency is ≤ 150 ms and does not require network connectivity (no outbound requests during inference). Given the feature is exercised for 5 minutes, Then the app does not exceed +150 MB peak RSS over baseline due to the model and maintains stable operation without OS termination.
Post-Capture Photo Classification Reconciliation
Given a photo is captured, When the model runs on the captured image, Then a room_type and confidence are produced within 300 ms and stored with source=auto_photo. Given both preview and photo predictions exist, When the photo prediction has confidence greater by ≥0.10 or a different label, Then the effective room_type for the asset is set to the photo prediction; otherwise retain the preview label. Given the effective room_type is updated post-capture, Then the persisted metadata is updated atomically and the review UI reflects the change without requiring app restart.
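The preview-versus-photo reconciliation rule can be stated compactly; a sketch, with `(label, confidence)` tuples assumed purely for illustration:

```python
def reconcile(preview, photo):
    """Return the effective (room_type, confidence) for the asset.
    Per the rule above: prefer the post-capture photo prediction when its
    label differs from the preview label, or when its confidence beats
    the preview by at least 0.10; otherwise retain the preview prediction."""
    p_label, p_conf = preview
    f_label, f_conf = photo
    if f_label != p_label or f_conf - p_conf >= 0.10:
        return photo
    return preview
```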
Guided Angle Prompts
"As a buyer agent, I want clear prompts for the best angles per room so that my photos are consistent and highlight what matters."
Description

Deliver a step-by-step capture flow that suggests 2–3 recommended angles per detected room type (e.g., doorway wide, key feature close-up, secondary corner). Present concise tips for each angle (height, orientation, what to include/avoid) with haptic and optional voice cues. Adapt prompts based on room size and lighting conditions inferred from sensor inputs. Show progress indicators and allow skipping or reordering. Persist completion state per room within the TourEcho showing session and reset cleanly for new QR sessions. Drive consistency and high-signal visuals that speed AI tagging for listing teams.

Acceptance Criteria
Angle Suggestions per Detected Room Type
Given the camera is in a detected room type (kitchen, bedroom, bathroom, living room, hallway, exterior) When the capture flow starts for that room Then the app displays 2–3 distinct recommended angles for that room type and never fewer than 2 or more than 3 And each angle shows a named label, a framing overlay, and a one-line tip And the first prompt renders within 800 ms of room detection And angle recommendations include doorway wide, key feature close-up, and a secondary corner (or room-appropriate equivalents)
Contextual Tips with Haptic and Voice Cues
Given an angle is active When the subject aligns within the overlay bounds (±8% tolerance on edges) Then a single haptic pulse is fired with latency <100 ms and not repeated more than once every 2 seconds Given Voice Cues setting is ON When the active angle changes or alignment help is triggered Then a concise voice prompt (≤5 seconds) plays and respects device volume; when OFF, no voice plays And each angle tip includes guidance on height, orientation, and include/avoid content in ≤120 characters
Sensor-Adaptive Prompts for Room Size and Lighting
Given ambient light is low (lux <40 or camera ISO >1000 proxy) When prompts are shown Then low-light guidance is appended to the tip and the overlay switches to high-contrast mode Given estimated room breadth is large (width >5 m by sensor inference or AR) When prompts are shown Then the first recommended angle favors a wider FOV/stepped-back position and suggests landscape orientation Given strong glare is detected (specular highlight area >8% of frame) When aligning for capture Then a tip suggests shifting position by ±15° to reduce glare
Duplicate Shot Prevention Within Room
Given a photo was captured for an angle in the current room When the user attempts another capture of the same angle Then the system compares pose (yaw/pitch/roll) and perceptual hash And if similarity ≥92% and pose delta <5° Then a duplicate warning is shown, capture is blocked by default, and an explicit Override action allows saving And on a validation set of 200 image pairs, duplicate detection achieves ≥95% true positive and ≤5% false positive
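One plausible reading of the similarity check, assuming 64-bit perceptual hashes compared by Hamming distance; the production model and feature set are unspecified here, so this is a sketch of the thresholds only:

```python
def phash_similarity(h1: int, h2: int, bits: int = 64) -> float:
    """Similarity of two perceptual hashes as the fraction of matching bits."""
    return 1.0 - bin(h1 ^ h2).count("1") / bits

def is_duplicate(h1: int, h2: int, pose_delta_deg: float) -> bool:
    """Block-by-default rule from the criteria above: perceptual similarity
    >= 92% AND pose delta (yaw/pitch/roll) < 5 degrees."""
    return phash_similarity(h1, h2) >= 0.92 and pose_delta_deg < 5.0
```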
Progress Indicator with Skip and Reorder
Given a room with N recommended angles (2≤N≤3) When the capture flow is active Then a persistent progress indicator shows completed/total (e.g., 1/3) and updates within 200 ms after capture/skip And Skip advances to the next angle and marks the current as Skipped without blocking completion And the user can select any angle chip to capture out of order; progress tallies correctly regardless of order And reordering of remaining angles via drag-and-drop updates the sequence immediately
Per-Room Completion Persistence and Session Reset
Given a TourEcho showing session started via QR code A When the user completes or skips angles in multiple rooms and later returns to the flow Then each room reflects prior completion/skip state and the next suggested angle accurately And relaunching the app within the same session restores state within 1 second Given a new QR session B is started When entering the capture flow Then all rooms show zero completed angles and no prior state from session A is visible
Angle and Room Metadata Tagging on Capture
Given a photo is saved from an angle When the image is stored Then metadata includes room_type (detected), angle_type, capture_order, orientation, and lighting_condition And metadata is available to the backend/export within 2 seconds on a stable connection or queued offline and synced with 100% fidelity And room type detection exists for 100% of captures and is logged with model confidence
Framing Overlays & Composition Guides
"As a buyer agent, I want visual guides on-screen so that I can frame shots correctly the first time without guesswork."
Description

Provide on-screen composition aids: rule-of-thirds grid, live horizon/level indicator using device gyroscope, and room-type-specific safe-area overlays (e.g., keep vanity, mirror, and flooring in frame for bathrooms). Offer quick toggles and remember user preferences. Ensure overlays maintain 60 FPS preview on modern devices and gracefully degrade on low-end hardware. Integrate with the angle prompts to reflect the current suggested framing. Improve capture quality, reduce re-shoots, and standardize visuals across agents.

Acceptance Criteria
Rule-of-Thirds Grid Toggle and Preference Persistence
- Default: Rule-of-thirds grid is ON on first camera open.
- Tapping the Grid toggle shows/hides the grid within 100 ms.
- Grid opacity remains between 15% and 25% and never obscures the focus reticle or shutter button.
- Grid state persists across app restarts, listing switches, and room changes on the same device.
- Toggling other overlays does not change the grid state.
- The Grid toggle is accessible with a clear label and is operable via screen readers.
Gyroscope-Based Horizon Level Indicator Accuracy and Responsiveness
- The level indicator is shown only when a gyroscope is available; otherwise it is hidden without blocking capture.
- Indicator turns green when device roll is within ±1.0° of level; otherwise it shows tilt direction visually.
- Reported deviation accuracy is within ±0.5° compared to the OS sensor reading.
- Update rate is ≥60 Hz on modern devices and ≥30 Hz on low-end devices.
- Latency from device tilt change to indicator update is ≤100 ms.
Room-Type Safe-Area Overlays Triggered by Auto-Detection and Manual Override
- Upon room auto-detection, the corresponding safe-area overlay appears within 200 ms.
- Bathroom overlay guides include vanity, mirror, and visible flooring; kitchen overlay includes countertops, major appliances, and work triangle; bedroom overlay includes bed and two walls for depth.
- If auto-detect confidence <0.6, a neutral overlay is used and a non-blocking prompt offers manual room selection.
- Manual room-type selection updates the overlay within 200 ms and remains active until changed or a shot is captured.
- Overlay opacity is 10%–25% and does not occlude capture controls or angle prompt text.
Performance: 60 FPS on Modern Devices, Graceful Degradation on Low-End Hardware
- On modern devices (iPhone 12+/Pixel 6+), with all overlays enabled, camera preview frame rate is ≥60 FPS for the 95th percentile of frames during a 15-second pan test; minimum never drops below 55 FPS.
- Touch input latency remains ≤50 ms during overlay rendering.
- On low-end devices (e.g., iPhone 8/Pixel 3), the system auto-reduces overlay complexity to maintain ≥30 FPS for the 95th percentile of frames during the same test.
- Degradation never disables capture or causes preview stalls longer than 100 ms.
- Performance metrics are logged via an internal profiler flag and exportable for QA verification.
Integration with Angle Prompts for Dynamic Framing Overlays
- When a new angle prompt is issued, overlays rotate/translate to reflect the suggested framing within 200 ms.
- If the agent dismisses or completes the prompt, overlays revert to the default room-type configuration immediately.
- Overlays maintain at least 8 pt padding from prompt text and core capture UI elements.
- Each overlay state change is logged with promptId, roomType, and timestamp for analytics.
- If angle prompts are disabled, overlays function in default mode without errors or visual artifacts.
Quick Overlay Toggles and Preference Persistence
- Grid, Level, and Safe-Area overlays each have a one-tap toggle reachable directly from the camera UI (no more than one interaction depth).
- Toggle state changes provide immediate visual feedback within 100 ms.
- Toggle states persist across app restarts and listing changes on the same device.
- Toggling one overlay does not affect the state of the others.
- All toggles include accessible labels and hit targets ≥44x44 pt.
Capture Quality, Re-shoot Reduction, and Standardization Outcomes
- In a 2-week beta with ≥15 agents across ≥10 listings, median re-shoots per listing decrease by ≥30% when overlays are enabled versus baseline.
- Consistency score on an internal framing rubric (0–100) improves by ≥20 points or reaches ≥80 on average across rooms.
- Average AI tagging time per photo decreases by ≥15% compared to baseline.
- ≥85% of beta participants rate capture quality as "Improved" or "Much improved" in post-pilot survey.
Smart Duplicate Prevention & Coverage Tracker
"As a buyer agent, I want warnings for duplicate shots and a clear coverage checklist so that I save time and still capture all required angles."
Description

Detect near-duplicate shots within a room by combining visual similarity hashing, device pose/heading, and timestamp proximity. When a likely duplicate is about to be taken, show a lightweight nudge with keep/retake options. Maintain a per-session coverage tracker that marks completed rooms and angles, supports re-shoots when quality is low (e.g., blur/low light detected), and displays remaining items at a glance. Persist the capture map to the TourEcho backend for team visibility. Reduce capture time, storage waste, and dataset noise while ensuring full room coverage.

Acceptance Criteria
Near-Duplicate Nudge Within a Room
Given the user is capturing photos in an active room session with at least one prior saved shot in that room When the newly captured frame has visual similarity score >= 0.92 to any prior room shot AND device pose delta <= 12 degrees AND capture timestamps within 20 seconds Then the app displays a lightweight duplicate nudge with two actions Keep and Retake within 500 ms of capture And selecting Retake discards the new frame (no local save, no upload) and returns to live camera And selecting Keep saves the new frame to the session, updates the coverage map, and proceeds with normal upload flow
Room Coverage Tracker Completion & Remaining Items
Given a room type has been auto-detected and its required angle template loaded for the session When the user saves photos that fill all required angle slots for that room with quality rating not Poor Then the room is marked Completed in the coverage tracker and its remaining count shows 0 And if one or more angle slots are unfilled, the tracker shows the count and labels of remaining angles at a glance in the camera UI And the session-level progress indicator updates to show total rooms completed and total remaining angles
Low-Quality Re-shoot and Replace Flow
Given the user captures a frame and the quality classifier rates it Poor with confidence >= 0.8 Then the app shows a prompt indicating low quality with actions Retake and Keep anyway within 500 ms And selecting Retake discards the frame (no save, no upload) and returns to live camera And selecting Keep anyway saves the frame and marks the angle slot as Needs re-shoot in the coverage tracker Given an angle slot is marked Needs re-shoot When the user captures a new frame rated Acceptable Then selecting Replace previous substitutes the photo in the same angle slot and clears the Needs re-shoot status
Capture Map Persistence with Offline Sync
Given the device is online When a room or angle is saved, replaced, or deleted Then the capture map is persisted to the backend within 3 seconds and returns success And team members viewing the listing see the updated capture map on refresh within 5 seconds Given the device is offline when the capture map changes Then the change is queued locally and visibly marked Syncing And upon reconnection, all queued changes sync within 10 seconds in original order with last-write-wins conflict resolution by server timestamp
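The last-write-wins reconciliation for queued offline changes might look like this sketch; the record shapes and the `slot` key are assumptions made for illustration:

```python
def lww_merge(server: dict, queued: list) -> dict:
    """Apply queued offline changes in their original order; a change wins
    only if its timestamp is newer than the server's current entry for the
    same (room, angle) slot — last-write-wins by server timestamp."""
    merged = dict(server)
    for change in queued:  # original capture order is preserved
        key = change["slot"]
        if key not in merged or change["ts"] > merged[key]["ts"]:
            merged[key] = {"ts": change["ts"], "asset": change["asset"]}
    return merged
```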
Session Resume Shows Accurate Coverage
Given a capture session exists for a listing with a saved coverage map When the user reopens Guided Snaps for that listing Then the app loads the last saved coverage map within 2 seconds and displays completed rooms and angles And the camera overlay highlights only remaining angles for the active room And duplicate detection compares against previously saved room shots to avoid prompting for angles already filled
Performance & Non-Blocking UX for Duplicate Check
Given the camera is active on a supported device class When duplicate detection runs on capture Then average duplicate evaluation latency is <= 150 ms and 95th percentile latency is <= 300 ms And the camera preview maintains >= 24 fps during detection And all nudges and prompts are dismissible and operable with accessibility labels and a minimum touch target of 44 by 44 points
Offline Capture & Auto-Sync
"As a buyer agent, I want to keep capturing even with no signal so that nothing is lost and everything uploads automatically later."
Description

Enable the full capture flow without connectivity: cache session details from the QR scan, save images and metadata encrypted at rest, and queue uploads for when the network returns. Show sync status per asset, and implement resumable uploads, exponential backoff, and background processing that respects battery and data-saver settings. Enforce local storage quotas and prompt users for cleanup when limits are reached. On sync, ensure idempotent server writes and reconcile conflicts. This guarantees reliability in low-signal homes and prevents data loss, keeping Guided Snaps dependable in the field.

Acceptance Criteria
Start Capture from QR with No Connectivity
Given the device has no internet and a valid TourEcho QR code is scanned When the app parses the QR Then a capture session is created offline with listingId, property address, and agent name visible in the header And an Offline banner is displayed Given the session is created offline When the user proceeds to Guided Snaps Then camera prompts and framing overlays are available without network calls Given the session was started offline from QR When connectivity returns Then the session links to the remote listing without creating a duplicate session
Encrypted Offline Save of Photos and Metadata
Given the device is offline When a photo is captured Then the image binary and metadata (timestamp, room type, promptId, deviceId, location if permitted) are written encrypted at rest with keys stored in the OS keystore Given the photo was saved offline When the app is force-closed and the device storage is inspected externally Then the assets are not readable outside the app sandbox Given the user initiates logout before sync When they confirm logout Then unsynced assets remain encrypted and retained for up to 30 days or until manual deletion
Per-Asset Sync Status and Queue Visibility
Given one or more offline-captured assets exist When the device regains connectivity Then each asset shows a status badge: Queued, Uploading (with percentage), Retrying in N seconds, Paused (Data Saver), Synced, or Failed Given an asset is in Failed status When the user taps Retry on that asset Then the upload is re-enqueued immediately and status updates accordingly Given multiple assets are pending When the user taps Sync All Then the queue processes up to 3 concurrent uploads and shows overall and per-asset progress
Resumable Uploads with Exponential Backoff
Given an asset upload has started When connectivity drops mid-transfer Then the next attempt resumes from the last confirmed byte without re-uploading completed parts Given transient failures occur (HTTP 429, 503, or timeouts) When retries are scheduled Then backoff follows 2^n seconds with ±20% jitter, capped at 15 minutes, for up to 7 attempts before marking Failed Given a resumed upload completes When the server responds Then the response contains the same assetId as any prior partial attempts
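The backoff schedule above (2^n seconds, ±20% jitter, 15-minute cap, up to 7 attempts) can be sketched directly; the function shape is illustrative:

```python
import random

MAX_ATTEMPTS = 7
CAP_SECONDS = 15 * 60  # 15-minute ceiling on any single delay

def backoff_delay(attempt: int, rng=random.random) -> float:
    """Delay before retry `attempt` (1-based): 2^attempt seconds with
    +/-20% jitter, capped at 15 minutes. Beyond 7 attempts the upload
    is marked Failed instead of retrying."""
    if attempt > MAX_ATTEMPTS:
        raise ValueError("retry budget exhausted; mark upload Failed")
    base = min(2 ** attempt, CAP_SECONDS)
    jitter = 1.0 + (rng() * 0.4 - 0.2)  # uniform multiplier in [0.8, 1.2]
    return min(base * jitter, CAP_SECONDS)
```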
Background Sync Respects Battery and Data Saver
Given Battery Saver is ON When the app is backgrounded Then sync defers until the device is charging or Battery Saver is OFF unless the user selects Sync Now, which runs a single foreground batch Given Data Saver or a metered network is active When syncing is evaluated Then only metadata under 50 KB is sent and media uploads wait for unmetered or Wi‑Fi unless the user overrides per session Given the app is closed When allowed conditions are met (Wi‑Fi and charging) Then background sync completes within 10 minutes using OS background tasks or a notification prompts the user to reopen if limits prevent completion
Local Storage Quotas and Cleanup Flow
Given offline capture is ongoing When TourEcho storage reaches 80% of the cap (min of 1 GB or 10% of free device storage) Then a non-blocking warning displays used and remaining space Given storage reaches 95% of the cap When the user attempts another capture Then capture is blocked and a cleanup dialog lists assets by size, age, and sync status; synced assets can be deleted immediately; unsynced deletions require explicit confirmation Given the user completes cleanup When free space drops below the 80% threshold Then capture is unblocked and the warning is dismissed
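A minimal sketch of the cap and threshold arithmetic described above; names are illustrative:

```python
GIB = 1024 ** 3

def storage_cap(free_device_bytes: int) -> int:
    """The cap is the smaller of 1 GB and 10% of free device storage."""
    return min(GIB, free_device_bytes // 10)

def quota_state(used_bytes: int, cap_bytes: int) -> str:
    """Map usage to the UX states above: warn at 80%, block at 95%."""
    if used_bytes >= 0.95 * cap_bytes:
        return "block_capture"   # cleanup dialog required before capturing
    if used_bytes >= 0.80 * cap_bytes:
        return "warn"            # non-blocking storage warning
    return "ok"
```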
Idempotent Server Writes and Conflict Reconciliation
Given assets were captured offline on one or more devices When syncing occurs Then each asset is sent with a deterministic clientId and content hash, and duplicate submissions return HTTP 200 with the existing assetId and no duplicates are created Given the same asset's metadata is edited on device and on server before sync When syncing occurs Then simple fields use last-write-wins by updatedAt and additive tags are union-merged, with a Needs Review badge applied to the asset Given a retry sends the same request twice When the server handles the requests Then the operation is idempotent and the same resource identifiers are returned without creating duplicates
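A toy illustration of the deterministic clientId and idempotent-write behavior; the hashing scheme and in-memory store are assumptions, not the server design:

```python
import hashlib

def client_asset_id(session_id: str, content: bytes) -> str:
    """Deterministic id: identical bytes from the same session always map
    to the same id, so duplicate submissions are safe to replay."""
    digest = hashlib.sha256(content).hexdigest()
    return f"{session_id}:{digest[:16]}"

class AssetStore:
    """Toy idempotent ingest: re-submitting returns the existing assetId
    (created=False) and never creates a duplicate record."""
    def __init__(self):
        self._by_client_id = {}

    def put(self, session_id: str, content: bytes):
        cid = client_asset_id(session_id, content)
        created = cid not in self._by_client_id
        if created:
            self._by_client_id[cid] = content
        return cid, created
```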
Auto-Redaction (Faces & Personal Info)
"As a broker-owner, I want sensitive details automatically blurred so that our team stays compliant and sellers feel protected."
Description

Run on-device detection and selective blur for human faces, family photos, documents, addresses, and license plates before upload. Provide adjustable sensitivity and a manual brush to add/remove redaction regions. Store redaction masks as separate layers in metadata for auditability; never upload unredacted originals unless explicitly allowed by policy. Include a per-listing setting for agents to require or disable redaction. Improves privacy, reduces liability in occupied homes, and aligns with brokerage compliance needs.

Acceptance Criteria
On-Device Face Redaction Before Upload
Given the device is offline (Airplane Mode) and a 12MP photo is captured via Guided Snaps When the photo contains one or more human faces Then face detection runs entirely on-device with zero network calls And all detected faces are blurred at the default intensity before any upload attempt And the time from capture to redacted preview is ≤ 800 ms on reference devices And the upload action is disabled until redaction is complete
Personal Info Auto-Detection (Photos, Documents, Addresses, Plates)
Given a standardized test set containing family photos, printed documents, street addresses, and vehicle license plates When redaction runs at Medium sensitivity Then the model achieves ≥ 90% recall on faces and ≥ 85% recall on documents/addresses/plates with ≤ 5% false positives And at High sensitivity, recall improves by ≥ 5 percentage points vs Medium with ≤ 5 percentage points increase in false positives And at Low sensitivity, false positives decrease by ≥ 5 percentage points vs Medium
Adjustable Redaction Sensitivity Controls
Given a per-listing default sensitivity of Medium When the agent sets sensitivity to Low, Medium, or High Then the setting persists for that listing and session and applies to subsequent captures And the redaction preview updates within 300 ms of changing sensitivity And the active sensitivity value is exposed in the image metadata
Manual Brush Add/Remove Redaction Regions
Given a redaction preview is displayed When the agent uses the brush to add or erase a region Then the mask updates in real time and the region is added/removed from blur And undo/redo supports at least 10 steps And brush size is adjustable from 4 px to 64 px with visual cursor feedback And brush edits are recorded in the mask metadata with user, timestamp, and tool action
Redaction Masks Stored as Separate Metadata Layers
Given a redacted image is saved or uploaded When inspecting the file and its sidecar (if used) Then redaction masks are stored as separate layers with type labels (face, photo, document, address, plate), model version, and sensitivity And the original unredacted pixel data is not present in the uploaded asset And masks allow deterministic re-application to the original on-device copy for audit And metadata includes a checksum to detect tampering
Policy Enforcement: Never Upload Unredacted Originals
Given the per-listing policy is Redaction Required When the agent attempts to upload, share, or sync photos Then only redacted images plus mask metadata are transmitted And any attempt to export unredacted images is blocked with a policy message And no unredacted image bytes are written to cloud cache or logs And an audit log entry is created with policy state and action outcome
Per-Listing Redaction Setting (Require or Disable)
Given the agent has permission to edit listing settings When they set Redaction Policy to Require or Disable Then the new policy applies to all future captures for that listing And if set to Disable, the redaction pipeline is skipped and originals are uploaded, with a visible banner indicating 'Redaction Disabled' And policy changes are versioned and time-stamped in listing audit logs And policy changes do not retroactively alter assets already uploaded
Metadata Packaging for Instant AI Tagging
"As a listing agent, I want each photo delivered with rich standardized metadata so that AI tagging and summaries are accurate and available fast."
Description

Attach structured, versioned metadata to each capture: room_type, angle_id, device model, focal length/FOV, orientation, level deviation, exposure/ISO, low-light/blur flags, duplicate hash, redaction_applied, coarse GPS (optional), and session/listing identifiers. Publish events to the TourEcho AI pipeline immediately after capture (or post-sync) with retries and a dead-letter queue. Ensure PII is excluded and follow schema evolution practices. This accelerates downstream AI tagging and summary generation, making results available to listing teams within seconds.

Acceptance Criteria
Metadata Field Completeness at Capture
Given a guided snap photo is captured for a known listing and session When metadata is packaged on-device at capture completion Then the payload includes schema_version, room_type, angle_id, device_model, (focal_length_mm or fov_deg), orientation, level_deviation_deg, exposure_time_ms, iso, low_light_flag, blur_flag, duplicate_hash, redaction_applied, session_id, listing_id, capture_timestamp And coarse_gps is included only if location capture is enabled And all fields conform to types/units: strings for enumerations, integers for *_ms and iso, floats for *_deg and *_mm/deg, booleans for *_flag and redaction_applied And orientation ∈ {"portrait","landscape"} And level_deviation_deg ∈ [-90, 90] And no required field is null or empty And duplicate_hash is a non-empty, lowercase hex string of length ≥ 16
Immediate Publish to AI Pipeline (Online)
Given the device is online and the TourEcho AI ingest endpoint is reachable When a capture completes and metadata is packaged Then an ingest event is transmitted within 1.0 seconds of capture_timestamp at P95 and within 2.5 seconds at P99 And the client receives a 2xx acknowledgement and records delivery_timestamp And the event includes event_id and capture_id that are unique within the session_id And the ingest service can validate the JSON against the current schema without error
Offline Queue and Post-Sync Ordering
Given the device has no network connectivity at capture time When a capture completes Then the metadata event is durably enqueued to local storage within 100 ms and survives app kill/restart and device reboot And no event loss occurs under forced app termination When connectivity is restored Then queued events are published in ascending order of capture_timestamp And a backlog of up to 200 events publishes at P95 within 60 seconds of connectivity restoration And each published event retains its original capture_timestamp
Retry Policy and Dead-Letter Handling
Given a transient publish failure (e.g., timeout, DNS error, HTTP 5xx, or HTTP 429) When sending a metadata event Then the client retries with exponential backoff and jitter for up to 5 attempts over no more than 2 minutes total And for HTTP 429, the client honors Retry-After (up to 60 seconds) Given a permanent failure (HTTP 400/401/403/404/422) or exhaustion of max retries When the event cannot be delivered Then the event is moved to a dead-letter queue with failure_reason, last_response_code, and first/last_attempt_timestamps (no PII) And the DLQ entry can be replayed to the ingest endpoint via the supported replay mechanism
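The retry/dead-letter decision table might be expressed as follows; the function shape and the handling of the 2-minute total budget are illustrative simplifications:

```python
PERMANENT = {400, 401, 403, 404, 422}  # straight to the dead-letter queue

def next_action(status_code: int, attempt: int, retry_after=None):
    """Classify a publish failure per the policy above.
    Returns ('retry', delay_seconds) or ('dead_letter', None)."""
    if status_code in PERMANENT:
        return ("dead_letter", None)
    if attempt >= 5:
        # Max retries exhausted -> DLQ with failure metadata (no PII).
        return ("dead_letter", None)
    if status_code == 429 and retry_after is not None:
        return ("retry", min(retry_after, 60.0))  # honor Retry-After, <=60 s
    # Transient (timeouts, DNS errors, 5xx): exponential backoff,
    # kept under the 2-minute total budget.
    return ("retry", min(2 ** attempt, 120.0))
```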
PII Exclusion and Redaction Signaling
Given metadata is prepared for an event When the payload is validated Then it contains no user identifiers (names, emails, phone numbers), no device identifiers (IMEI, serial, advertising ID), and no precise GPS coordinates And coarse_gps, if present, is rounded to no more than 3 decimal places (≈110 m granularity or coarser) and included only when the user has enabled location capture And device_model and OS version are allowed but must not include serial or advertising identifiers And redaction_applied is true if any PII redaction (e.g., face/license-plate blur) was applied to the associated image, otherwise false
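The PII-exclusion and coarse-GPS rules can be sketched as a payload sanitizer; the disallowed-key list here is illustrative, not exhaustive:

```python
# Hypothetical key list — the real schema would enumerate these exactly.
DISALLOWED_KEYS = {"name", "email", "phone", "imei", "serial",
                   "advertising_id", "precise_gps"}

def sanitize_event(payload: dict) -> dict:
    """Drop disallowed identifier fields and coarsen any GPS coordinates
    to 3 decimal places (~110 m) before the event leaves the device."""
    clean = {k: v for k, v in payload.items() if k not in DISALLOWED_KEYS}
    if "coarse_gps" in clean:
        lat, lon = clean["coarse_gps"]
        clean["coarse_gps"] = (round(lat, 3), round(lon, 3))
    return clean
```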
Versioned Schema and Compatibility
Given the producer emits events with a semantic schema_version (e.g., 1.0.0) When the consumer receives events with schema_version 1.x where x ≥ 0 Then the consumer processes the event successfully, ignoring unknown fields When the producer adds an optional field in 1.1.0 without removing or renaming existing fields Then a consumer pinned to 1.0.0 parses all known fields without error and drops the unknown field And schema definitions are published at a versioned URL and validated in CI And during a 24-hour soak test, 0% of events are rejected due to unknown fields or minor/patch bumps
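A minimal consumer-side sketch of the compatibility rule (accept any 1.x event, ignore unknown fields); `KNOWN_FIELDS` is truncated for illustration:

```python
# Truncated for illustration — the full schema lists every known field.
KNOWN_FIELDS = {"schema_version", "room_type", "angle_id", "session_id"}

def parse_event(event: dict) -> dict:
    """Accept any event whose schema_version shares major version 1,
    silently dropping unknown (minor-version) fields; a major bump is a
    breaking change and is rejected."""
    major = int(event["schema_version"].split(".")[0])
    if major != 1:
        raise ValueError("breaking schema change; cannot consume")
    return {k: v for k, v in event.items() if k in KNOWN_FIELDS}
```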
End-to-End Tagging Availability SLA
Given the device is online and the ingest pipeline is healthy When a capture completes Then AI tags for the photo are available in the listing team UI within 5 seconds at P95 and 10 seconds at P99 measured from capture_timestamp And when offline at capture, AI tags are available within 5 seconds at P95 of connectivity restoration for that event And each event carries a trace_id that links capture, ingest, processing, and UI display for auditability

Redact Shield

Real-time, on-device redaction masks faces, family photos, documents, and other personally identifying items the moment a picture is taken. Adds a feedback-only watermark to prevent misuse, protecting homeowner privacy and giving brokerages confidence to invite more visual input without compliance risk.

Requirements

On-Device PII Detection & Masking
"As a listing agent, I want the app to automatically hide faces and personal items the moment I take a photo so that I can collect room-level feedback without risking homeowner privacy."
Description

Provides a real-time, on-device computer vision pipeline that detects and masks personally identifying elements (faces, family photos, printed documents/IDs, computer screens, license plates, and street numbers) at capture time. The camera preview renders live masks; upon shutter, the photo is redacted before any disk write or network call. Supports configurable masking styles (strong blur/pixelation) with defaults aligned to brokerage policy. Runs entirely offline to protect privacy and ensure operation in low-connectivity showings. Integrates with TourEcho’s capture and feedback flows so only redacted imagery can be attached to showing feedback. Targets high accuracy (e.g., ≥98% recall on faces with minimal false positives on room fixtures) and low latency (≤150 ms per 12 MP photo on baseline devices). Ensures the unredacted original is never persisted beyond volatile memory and is irrecoverably discarded post-processing.

Acceptance Criteria
Live Preview Masking Responsiveness
Given TourEcho capture view is open and the device is online or offline When a face, family photo, printed document/ID, computer screen, license plate, or street number enters the camera preview Then a mask overlay covers each detected PII element within <=150 ms of first appearance And the overlay remains aligned with IoU >=0.70 against labeled ground truth across test frames And the masked preview renders at >=15 fps on baseline devices And no unmasked frame persists longer than 100 ms after initial detection
No Unredacted Persistence & Memory Zeroization
Given the user presses the shutter to capture a 12 MP photo When redaction completes Then only the redacted image is written to persistent storage And no unredacted pixel data is written to disk, cache, or temp directories (verified via OS-level file audit) And no network calls occur until after redaction has completed And the original pixel buffer is zeroized in memory within <=50 ms post-redaction and is not recoverable via forensic tools And application logs/crash reports do not include raw pixel data, thumbnails, or hashes of the unredacted image
PII Detection Accuracy and Low False Positives
Given a standardized, labeled residential-interior validation set When the on-device model runs inference Then face detection recall >=98% and non-PII fixture false-positive rate <=2% for faces And family photo (framed pictures) recall >=92% And printed documents/IDs recall >=90% And computer screen recall >=95% And license plate recall >=95% And street number recall >=90% And overall false-positive rate on non-PII room fixtures (art without faces, appliances, patterns, lamps) <=2%
End-to-End Photo Redaction Latency (12 MP)
Given the baseline device matrix When capturing 30 consecutive 12 MP photos across mixed scenes Then p95 time from shutter press to redacted image persisted is <=150 ms And p99 time is <=180 ms And there are zero dropped captures or app crashes during the run
Policy-Driven Masking Styles and Defaults
Given the brokerage policy sets default masking style (strong blur or pixelation) and strength When a user opens capture Then the preview masks and saved redacted image use the policy-defined style and strength And if the policy is locked, the user cannot change style/strength in-app And if the device is offline, the last cached policy applies And when a new policy is received, the change takes effect on the next capture without app restart
Offline Operation and Network Isolation
Given the device is in airplane mode with no connectivity When the user performs live preview and captures photos Then detection and masking function fully without degradation And no network sockets are opened and no network I/O is attempted (verified via OS network monitor) And captures taken offline are queued and remain redacted-only when later attached
Feedback Flow Attachment Enforcement
Given a user is submitting showing feedback When attaching a photo captured in-app Then only the redacted version is available for attachment and submission And if a user attempts to import an unredacted gallery image, the app redacts it on-device before allowing attach And if redaction fails or times out, the attachment is blocked and the user is prompted to retake or retry And the Submit action is disabled until all attached images are verified as redacted
Feedback-Only Watermark & Provenance Stamp
"As a broker-owner, I want every feedback photo watermarked and provenance-stamped so that images cannot be repurposed for marketing or taken out of compliance context."
Description

Applies a dual-layer watermark to every redacted image: a visible, adaptive overlay that states “Feedback Only — Not for Marketing,” and an invisible cryptographic provenance stamp that binds the image to listing ID, showing ID, device, and capture time. Watermarking occurs post-redaction and before any storage or upload. Server-side validation rejects images lacking a valid provenance signature or with tamper evidence. The visible overlay adapts to orientation and brightness to remain legible without obscuring room context, and is resilient to common edits (crop, resize, recompress) to deter misuse while preserving feedback value.

Acceptance Criteria
On-Device Sequencing: Redaction → Watermarks → Storage
Given a user captures a photo and on-device redaction completes When the app finalizes the image Then the visible "Feedback Only — Not for Marketing" overlay and the invisible cryptographic provenance stamp are applied before the image is written to any storage or queued for upload And any attempt to save or upload an image that has not yet been watermarked and redacted is blocked with an in-app error And the watermarking step completes in ≤150 ms for a 12 MP image on a reference mid-tier device And the applied watermark does not remove or reveal any redacted content
Visible Overlay: Legibility and Non-Obstruction
Given an image in portrait or landscape orientation across backgrounds ranging from 5% to 95% luminance When the overlay renders Then the text exactly equals "Feedback Only — Not for Marketing" And the overlay maintains a contrast ratio ≥ 4.5:1 against its immediate background And the overlay occupies ≤ 5% of total image pixels and remains fully within image bounds And the overlay does not overlap the central 50% area of the image And on 90°, 180°, and 270° rotations the overlay rotates accordingly and remains legible without clipping
Overlay Resilience to Crop/Resize/Recompress
Given a watermarked image saved by the app When it is cropped by up to 5% from each edge, resized between 0.5× and 2.0×, and recompressed at JPEG quality 60–95 Then the overlay text remains fully intact and legible And OCR on the image returns the exact phrase "Feedback Only — Not for Marketing" with ≥ 99% confidence And no part of the overlay is truncated or removed after the allowed edits
Server Validates Provenance and Binds to Listing/Showing/Device/Time
Given an in-app captured image tagged with listing ID L, showing ID S, device ID D, and capture time T When the image is uploaded Then the server verifies the provenance stamp signature with the platform public key and decodes fields L, S, D, T And L and S match active records authorized for the authenticated user And T is within ±2 seconds of the device capture event time And on success the server responds 201 Created and persists the asset with a verification record And on failure the server responds 4xx with error code "provenance_invalid" and does not persist the asset
Tamper Evidence and Invalid/Missing Provenance Rejection
Given a watermarked image is modified outside the app (pixels edited, metadata altered, or stamp removed) or an image without a provenance stamp is provided When the image is uploaded Then server-side validation detects tamper or missing stamp And the server responds 4xx with error code "provenance_invalid" and logs a security event with reason And the image is not exposed in any downstream UI or export
Export/Share Enforces Watermarked Output Only
Given a user saves or shares a redacted image from the app When the export completes Then only a version with the visible overlay and valid provenance stamp is written or shared And any UI to disable or remove the overlay is not present in the feedback capture flow And attempts to export an image that has not yet been watermarked are blocked with an in-app error
Redaction Review & Manual Controls
"As an agent, I want quick controls to fine-tune what gets masked so that I can submit clear, useful photos while still protecting privacy."
Description

Provides a lightweight review screen and live camera controls that let agents adjust masks without ever persisting unredacted pixels. Agents can add/remove mask regions (tap-to-toggle bounding boxes, brush/erase), tweak mask intensity within policy limits, and preview a safe before/after overlay rendered solely from the redacted buffer. Includes fast undo, clear error states when required categories remain unmasked, and accessible affordances (haptics, large targets, screen reader labels). Integrates with the feedback submission flow to ensure only approved, redacted images advance to upload.
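The bounded undo/redo behaviour over mask edits might look like this sketch; the 20-step depth comes from the acceptance criteria, and mask entries are plain dicts for illustration:

```python
class MaskEditor:
    """Bounded undo/redo over mask-region edits; operates only on mask
    metadata, never on raw pixel buffers."""

    def __init__(self, masks: list[dict], max_undo: int = 20):
        self.masks = list(masks)
        self._undo: list[list[dict]] = []
        self._redo: list[list[dict]] = []
        self.max_undo = max_undo

    def apply(self, new_masks: list[dict]) -> None:
        self._undo.append(list(self.masks))
        if len(self._undo) > self.max_undo:
            self._undo.pop(0)          # drop the oldest step past the limit
        self._redo.clear()             # a new edit invalidates the redo stack
        self.masks = list(new_masks)

    def undo(self) -> None:
        if self._undo:
            self._redo.append(list(self.masks))
            self.masks = self._undo.pop()

    def redo(self) -> None:
        if self._redo:
            self._undo.append(list(self.masks))
            self.masks = self._redo.pop()
```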

Acceptance Criteria
On-Device Redaction Review Without Unredacted Persistence
Given an agent captures or selects a photo in Redaction Review When the app processes, previews, or saves edits Then no unredacted bitmap is written to persistent storage (0 files, 0 cache entries) And no network request contains unredacted pixels or thumbnails (verified via proxy inspection) And terminating the review screen clears any unredacted RAM buffers within 500 ms And a filesystem scan after exit contains only redacted images and thumbnails
Tap-to-Toggle Bounding Boxes for Mask Add/Remove
Given detector-generated boxes for faces, documents, and family photos are displayed When the agent taps a box once Then the mask for that region toggles on within 100 ms and a light haptic is emitted And the masked region opacity is set to the policy minimum or higher When the agent taps the same box again Then the app attempts to toggle the mask off And if the category is required, the toggle-off is blocked with inline guidance; otherwise the mask is removed within 100 ms And tap hit targets are at least 44x44 pt and support screen-reader focus And mask alignment overlaps the box with IoU ≥ 0.90 and spillover beyond the box edges is ≤ 2 px
Brush and Erase Tools with Fast Undo
Given the agent enters Brush/Erase mode When the agent draws a stroke Then the mask updates at ≥ 30 FPS with input-to-render latency ≤ 50 ms And brush width is adjustable between 8 and 64 px in 1 px increments When the agent taps Undo Then the last stroke reverts within 100 ms and Redo becomes available And at least 20 undo steps are retained per image without data loss And no undo/redo action ever reveals pixels beyond the pre-masked state
Mask Intensity Tweaks Within Policy Limits
Given a masked region is selected When the agent adjusts the intensity slider Then opacity is clamped between policy min 70% and max 100% And adjustments occur in 5% increments with a haptic tick per step And attempts to go below the policy minimum are prevented with a non-blocking note And the exported image opacity matches the selected value within ±2%
Safe Before/After Overlay Preview from Redacted Buffer
Given the agent toggles the Before/After preview When the overlay is displayed Then the "before" visualization is rendered solely from the redacted buffer (no access to original image buffer) And the preview never exposes unredacted pixels (0% reveal verified by visual diff against the redacted output) And entering/exiting preview does not alter current masks or intensity settings And the overlay renders within 200 ms on target devices
Required Category Validation and Error States
Given faces, documents, and family photos are configured as required categories When the agent attempts to submit or remove a mask that would leave a required item unmasked Then validation blocks the action and identifies remaining items by category and count And a "Review remaining" control moves focus to the next unmasked required region And the Submit button remains disabled until requirements are met or a policy override is confirmed And screen readers announce the error with role=alert and provide actionable labels
Submission Flow Only Uploads Approved Redacted Images
Given the agent taps Submit in the feedback flow When the client assembles the upload payload Then only approved, redacted images are included and originals are excluded And any image edited in review carries its current masks and intensity settings in the exported file And attempts to bypass review are blocked with an explanatory toast and return the user to the review screen And network inspection confirms no unredacted bytes are transmitted
Policy Enforcement & Admin Controls
"As a compliance admin, I want to enforce redaction and watermark policies across our agents so that our brokerage meets privacy obligations consistently."
Description

Adds brokerage-level policy management to define and enforce Redact Shield behavior: always-on redaction, mandatory watermarking, minimum mask strength, unskippable review, and PII categories to detect. Policies are versioned, distributed to enrolled devices, and enforced such that capture or upload of unredacted imagery is impossible when a policy is locked. Generates immutable audit events (user, device, model version, policy version, timestamps) attached to each image and exposes compliance exports for brokers. Integrates with TourEcho org settings and roles, enabling per-office overrides while maintaining global defaults.
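The tamper-evident audit trail can be sketched as a hash chain: each event stores the previous entry's digest, so editing any past event invalidates everything after it. Field names here are illustrative:

```python
import hashlib
import json

GENESIS = "0" * 64  # previousDigest of the first entry

def append_audit_event(chain: list[dict], event: dict) -> list[dict]:
    """Append-only log: each entry's digest covers the event body plus the
    previous digest, forming a tamper-evident chain."""
    prev = chain[-1]["digest"] if chain else GENESIS
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"event": event, "previousDigest": prev, "digest": digest})
    return chain

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every digest; any edited or reordered entry breaks the chain."""
    prev = GENESIS
    for entry in chain:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["previousDigest"] != prev:
            return False
        if entry["digest"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = entry["digest"]
    return True
```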

Acceptance Criteria
Enforce Always-On Redaction and Mandatory Watermark on Capture
Given a locked brokerage policy with AlwaysOnRedaction=true and WatermarkMode="FeedbackOnly", When an agent captures a photo in-app or imports an image, Then all enabled PII categories are redacted on-device prior to save or upload, And a "Feedback Only — Not for Marketing" watermark is burned into the image, And redaction and watermark toggles are hidden or disabled for the user, And noncompliant images cannot be saved locally or placed in the upload queue.
Role-Based Policy Management and Per-Office Overrides
Given TourEcho roles, When an Org Admin opens Redact Shield policies, Then they can create/edit global defaults and lock policies. Given TourEcho roles, When an Office Manager opens policy settings for their office, Then they can create/edit office-level overrides for permitted fields without changing global defaults. Given role permissions, When an Agent accesses camera settings, Then policy controls are view-only and cannot be modified. Given global defaults and an office override, When a device in that office syncs, Then the most specific effective policy (office override falling back to global defaults) is applied.
Policy Versioning, Distribution, and Fail-Safe Enforcement
Given a new policy version V2 is published and effective immediately, When an enrolled device is online, Then it downloads and applies V2 within 5 minutes and displays policyVersion=V2 in settings. Given policy lock=true, When a device has no valid policy cached, Then capture and upload are disabled until a policy is received. Given policy lock=true and device has V1 cached, When V2 is published, Then the device continues enforcing V1 until V2 is received, and capture/upload of noncompliant imagery remains blocked. Given server-side enforcement, When a client attempts to upload an image whose attached policyVersion != current effective version for its office, Then the server rejects the upload with HTTP 412 Precondition Failed and logs an audit event.
Minimum Mask Strength and Unskippable Review
Given MinMaskStrength=0.85 and ReviewMode="Unskippable", When redaction is applied, Then each detected PII region must have mask coverage >= 0.85 before save/upload becomes available. Given ReviewMode="Unskippable", When the review screen is shown, Then the Continue/Upload controls remain disabled until the user reviews each detected region and confirms. Given MinMaskStrength is not met, When the user attempts to proceed, Then the system blocks progression and surfaces a single path to adjust redaction or discard the image.
Configurable PII Categories Detection
Given a policy enabling PII categories {Faces, FamilyPhotos, Documents, LicensePlates} and disabling {Screens}, When an image is captured, Then detections run only for enabled categories and redactions are applied accordingly. Given a category is disabled in the effective policy, When such PII appears in an image, Then it neither blocks capture/upload nor appears in the audit event’s detections list. Given a policy change to categories, When a device next syncs, Then the change takes effect on the next capture without requiring an app restart.
Immutable Audit Events Attached to Each Image
Given an image is captured under an effective policy, When the image is saved, Then an immutable audit event is created with fields {imageId, userId, deviceId, appVersion, modelVersion, policyVersion, captureTimestamp, watermarkApplied, minMaskStrength, detectionsSummary}. Given append-only storage, When an audit event is written, Then it is stored with a cryptographic digest and previousDigest to form a tamper-evident chain and cannot be updated via any API. Given the image is uploaded, When the server receives it, Then it verifies the audit event digest and policyVersion match, and persists both; uploads without a valid audit event are rejected.
Broker Compliance Export
Given a Broker Admin or Compliance role, When they request a compliance export for a date range and office selection, Then the system generates a CSV and JSON including per-image audit fields {imageId, officeId, userId, deviceId, appVersion, modelVersion, policyVersion, captureTimestamp, watermarkApplied, minMaskStrength, detectionsSummary} and a file-level signature. Given role-based access, When a non-privileged user requests an export, Then access is denied with HTTP 403. Given export generation, When the export completes, Then it is available for download within 2 minutes for up to 50k records and includes a manifest of row counts per office.
Offline Performance & Device Compatibility
"As an agent touring homes with poor reception, I want redaction to work instantly and offline so that I’m not blocked from submitting feedback on-site."
Description

Optimizes models and processing pipeline to run fully offline with predictable performance across a defined device/OS matrix. Implements dynamic model selection and quantization to maintain sub-150 ms redaction time for 12 MP photos on baseline hardware, with graceful degradation (reduced preview FPS, staged capture) under thermal or memory pressure. Includes a compatibility service that gates the feature per device/OS, surfaces clear messaging if a device cannot meet policy thresholds, and allows remote configuration of performance thresholds via feature flags. Ensures consistent behavior during long showing days with battery and thermal safeguards.
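Dynamic model selection under resource pressure could be sketched as below; the RAM headroom, the 150 ms tier budget, and the model table are illustrative assumptions:

```python
def select_model(device_tier: str, free_ram_mb: int, thermal: str,
                 models: list[dict]) -> dict:
    """Pick the largest model that fits the current budget; set degrade_mode
    when nothing can meet it. Thresholds are illustrative."""
    ram_budget = free_ram_mb - 100          # keep headroom for the app itself
    under_pressure = thermal in ("serious", "critical")
    candidates = [m for m in models
                  if m["ram_mb"] <= ram_budget
                  and m["p95_ms"][device_tier] <= 150
                  and (not under_pressure or m["quantized"])]
    if not candidates:
        return {"name": None, "degrade_mode": True}
    best = max(candidates, key=lambda m: m["ram_mb"])   # biggest that fits
    return {"name": best["name"], "degrade_mode": False}
```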

Acceptance Criteria
Baseline Offline Redaction Performance (12 MP, Sub-150 ms)
Given the device is marked Supported in the compatibility matrix for Redact Shield And the device is offline And the camera captures 12 MP images When the user captures 100 photos in a session after a 5-run warm-up Then on-device redaction completes with p95 latency ≤ 150 ms per photo on each device in the Supported matrix And the max single-photo latency ≤ 200 ms And incremental peak RSS during redaction ≤ 250 MB over app idle baseline And there are 0 redaction crashes or timeouts And no network requests are made during capture or redaction And metrics for latency, memory, and success are logged locally
Dynamic Model Selection & Quantization Under Resource Pressure
Given free RAM falls below the configured threshold or thermal state ≥ "serious" When Redact Shield initializes or detects resource pressure mid-session Then it selects the smallest compatible model/quantization per policy within 10 ms And the app remains responsive (main-thread frame time ≤ 16 ms p95 during selection) And the selection event is logged with reason, device tier, and model identifier And the selected model meets the device-tier latency budget (p95 ≤ target) over a 20-photo probe And if no model can meet the budget, degrade_mode is set to true within 50 ms
Graceful Degradation: Preview FPS Reduction & Staged Capture
Given degrade_mode is true due to thermal or memory pressure When the user opens the camera preview or captures photos Then the preview frame rate reduces to the configured 15–24 FPS range And frame-time variance stays ≤ 100 ms at p95 And capture is staged with a minimum 300 ms pre-capture buffer and single-frame queue And a non-blocking notice "Optimizing for device performance" displays ≤ 3 seconds, no more than once per minute And when pressure clears for 60 seconds, normal mode is restored with ≥ 60-second hysteresis to prevent oscillation And no photo frames are dropped or corrupted during capture
Compatibility Gate and Clear Messaging (Device/OS Matrix)
Given the device/OS is not in the Supported matrix or fails runtime capability checks When the user navigates to enable Redact Shield or opens the camera Then the feature is gated before preview starts And a clear message appears within 300 ms including a reason code and a "Learn more" CTA And no redaction model is loaded and no partial preview is shown And analytics record gate_reason and device signature And on Supported devices, the gate permits activation with no blocking message
Remote Configuration of Performance Thresholds (Feature Flags)
Given new feature flags adjusting latency and memory thresholds are available When the app starts or the 6-hour polling interval elapses Then the new configuration is atomically applied within 2 seconds without app restart And the previous configuration is cached and used when offline with a TTL ≥ 72 hours And if the update makes the device unsupported, the gate state and messaging update within 5 seconds And a rollback flag restores the prior config within 2 seconds of receipt And all modules read a consistent, versioned view of thresholds
Long Showing Day Stability: Thermal/Battery Safeguards
Given a 3-hour session with a 12 MP capture every 30 seconds and intermittent preview When operating on Supported devices starting at ≥ 60% battery Then redaction p95 latency remains within the device-tier target throughout the session And the app does not trigger an OS force-close or thermal shutdown And when battery ≤ 20% or thermal state ≥ "serious", safeguards engage within 1 second: degrade_mode=true, reduced FPS, and capture pacing ≥ 2 seconds between captures And total battery drain attributed to the app ≤ 25% over the session on baseline devices And all captures are redacted and persisted locally without loss
Strict Offline Operation & Privacy Compliance
Given network connectivity is disabled or firewalled When using Redact Shield to preview and capture 50 photos Then all functionality operates without network access And zero outbound network requests are made (verified via OS instrumentation) And no image bytes or model telemetry leave the device And redaction artifacts and logs are stored only locally and purged per retention policy
Secure Storage & Transmission
"As a homeowner, I want assurance that unredacted images never leave the device so that my family’s privacy is protected even if something goes wrong."
Description

Ensures secure handling of media end-to-end: process frames in-memory; persist only the redacted, watermarked image; purge originals from RAM immediately after processing; strip EXIF GPS and facial tags while retaining non-identifying listing/showing references; encrypt local caches using the OS keystore; upload via TLS 1.3 with signed URLs and integrity checks; and automatically delete local copies after confirmed delivery. Integrates with TourEcho’s media service so downstream consumers (feedback summaries, share links) only access compliant images. Provides robust retry and background transfer with audit trails for reliability and compliance.
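The retry-with-backoff and integrity-check behaviour can be sketched as follows; `put` is a caller-supplied upload callable (e.g. a PUT to a signed URL) assumed to return the server-computed checksum, and the delay constants are illustrative:

```python
import hashlib
import random
import time

def upload_with_retries(data: bytes, put, max_attempts: int = 5) -> bool:
    """Retry transient failures with exponential backoff plus jitter; delivery
    counts as successful only when the server-side checksum matches ours."""
    checksum = hashlib.sha256(data).hexdigest()
    for attempt in range(max_attempts):
        try:
            server_checksum = put(data)
            # Integrity check: a mismatch is a hard failure, not retried.
            return server_checksum == checksum
        except (TimeoutError, ConnectionError):
            # Illustrative backoff: 0.1 s, 0.2 s, 0.4 s, ... capped, with jitter.
            time.sleep(min(0.1 * 2 ** attempt, 30.0) + random.random() * 0.1)
    return False
```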

Acceptance Criteria
On-Device In-Memory Processing and RAM Purge
- Given a user captures a photo, When redaction processing runs, Then the original frame is kept only in RAM and no unredacted file is created in any storage directory.
- Given processing completes, When memory is inspected via instrumentation, Then all original frame buffers are released within 100 ms and are not accessible to any app API or OS media index.
- Given OS file I/O is monitored during capture, Then zero writes of unredacted content occur before, during, or after processing.
Persist Only Redacted, Watermarked Image with Sanitized Metadata
- Given a capture is persisted, Then exactly one file is stored and it is the redacted image containing the feedback-only watermark.
- When the stored image’s metadata is inspected, Then EXIF GPS tags and facial/people tags are absent, And non-identifying listing_id and showing_id fields are present.
- When searching storage, media index, and app APIs for an unredacted or unwatermarked variant, Then no results are found.
Encrypted Local Cache via OS Keystore
- Given a redacted image is cached locally, Then the file at rest is encrypted with a key protected by the device OS keystore and is unreadable outside the app context.
- When the cached file is copied via filesystem tools, Then its contents are ciphertext and cannot be decrypted without the app.
- When the app reads the cached file, Then decryption succeeds only after integrity/authentication is verified.
Secure Upload: TLS 1.3, Signed URLs, Integrity Verification
- Given an upload starts, Then the connection negotiates TLS 1.3; attempts to use lower TLS versions fail.
- When the client requests an upload target, Then a time-bound, single-use signed URL is used for exactly one object.
- When the upload completes, Then a cryptographic checksum provided by the client matches the server-computed value before delivery is marked successful.
- When certificate validation fails, Then the upload is aborted and no data is transmitted.
Reliable Background Transfer with Retries and Backoff
- Given the app is backgrounded or the screen is locked, When an upload is in progress, Then the transfer continues within OS background limits.
- When transient errors occur (network timeout, 5xx), Then the client retries with exponential backoff up to 5 attempts before surfacing a failure.
- When connectivity is restored or the device reboots, Then the upload resumes without re-sending bytes already confirmed by the server.
- Then all upload attempts, statuses, and errors are recorded with timestamps and request IDs in an immutable audit trail accessible to admins.
Post-Delivery Automatic Local Deletion
- Given the server acknowledges receipt and integrity verification, Then the local cached copy is deleted within 10 seconds.
- Then the deleted file is removed from any app cache indexes and is not returned by file pickers, APIs, or OS media indices.
- Given delivery is not yet confirmed, Then the local copy is retained for retry and is not deleted.
Downstream Access Compliance via Media Service Integration
- Given downstream services request media, Then the media service provides only the redacted, watermarked images produced by Redact Shield.
- When feedback summaries and share links render images, Then they are loaded exclusively from compliant media endpoints and display the watermark.
- When any endpoint is queried for originals or unwatermarked images, Then the request returns 404/403 and is logged.
- When downstream consumers read metadata, Then only non-identifying listing/showing references are present; GPS and facial tags are absent.

Issue Heatmap

AI aggregates tags across all visits into a room-by-room heatmap of pain points (e.g., low light, dated flooring, odor). Impact-weighted visuals show which rooms and issues are costing the most interest, so agents focus fixes where they will move the needle fastest.

Requirements

Tag Taxonomy & Normalization
"As a listing agent, I want feedback normalized into consistent issue tags so that I can accurately compare patterns across many showings without manual cleanup."
Description

Implement a centralized, extensible tag taxonomy that standardizes buyer feedback into canonical issue labels (e.g., “low light,” “dated flooring,” “odor”) across all visits and languages. Include synonym mapping, spelling tolerance, and multilingual normalization to ensure consistent aggregation. Provide admin tools to merge/split tags and maintain version history so past data re-maps safely. Integrate with TourEcho’s ingestion pipeline to process QR-sourced comments in real time, outputting normalized tags with metadata (confidence, language, source visit, timestamp) for downstream analytics.

Acceptance Criteria
Real-time QR Comment Ingestion Normalization
Given a QR-sourced comment with visit_id and timestamp is received by the ingestion endpoint When the normalization pipeline processes the payload Then a normalized_tags payload is produced within 1000 ms at p95 latency And each emitted tag includes canonical_label, confidence (0.0–1.0), detected_language (ISO 639-1), source_visit_id, source_timestamp And duplicate submissions of the identical payload within 5 minutes do not create duplicate analytics events (exactly-once delivery) And if no issue is detected, an empty normalized_tags array is emitted with status "no_match" and processing completes successfully
Synonym and Spelling Tolerance Mapping
Given a curated test corpus of 1000 variant phrases and common misspellings mapped to 50 canonical issue tags When the corpus is processed through the normalization service Then macro-averaged F1 for mapping to the correct canonical labels is >= 0.96 And within a single comment, multiple variants for the same concept result in exactly one instance of the canonical tag And no more than 2% of outputs include an extraneous (incorrect) canonical tag per comment
Multilingual Normalization (EN/ES/FR)
Given a labeled multilingual corpus (English, Spanish, French) for the supported issue tags When comments are processed Then language detection accuracy is >= 97% on the corpus And canonical tag mapping achieves F1 >= 0.95 for each supported language And mixed-language comments are processed without error, with detected_language set to the primary language of the matched phrase(s) And for unsupported languages, processing returns detected_language="und" and produces no tags rather than failing
Admin Merge of Canonical Tags with Safe Re-map
Given an admin requests to merge Tag B into Tag A in the admin console When the merge is confirmed Then a new taxonomy_version is created and activated immediately for new ingestions And 100% of historical events previously labeled Tag B are re-mapped to Tag A within 2 hours of merge start And post-re-map analytics counts for Tag A equal the prior A+B totals within ±0.1% And an audit log records actor, timestamp, before/after mapping, and rationale
Admin Split of Canonical Tag with Historical Reprocessing
Given an admin splits Tag C into Tag C1 and Tag C2 and initiates historical reprocessing When the split job runs Then a new taxonomy_version is created and activated for new ingestions And historical events previously labeled Tag C are reclassified into C1/C2 by reprocessing original raw comments And at least 95% of previously Tag C events are reassigned to C1 or C2; the remainder are flagged_unassigned with an export for manual review And reprocessing completes within 24 hours for a dataset of 1,000,000 events And an audit log captures actor, timestamp, and reclassification summary counts
Version History and Query Consistency
Given analytics queries request results under a specified taxonomy_version (latest or a prior version) When the same date range is queried across two versions that differ by a known merge/split Then totals reconcile as expected (e.g., A+B under version N equals A' under version N+1 after a merge) within ±0.1% And every normalized tag record stores taxonomy_version_id, enabling reconstruction of historical views And a version history endpoint lists all versions with change metadata (actor, timestamp, change type)
Downstream Integration Contract for Issue Heatmap
Given normalized tag events are published to the analytics stream consumed by the Issue Heatmap service When the service ingests events Then the payload schema matches the published contract (canonical_label:string, confidence:number, detected_language:string, source_visit_id:string, source_timestamp:ISO-8601, taxonomy_version_id:int) And contract tests in CI block incompatible schema changes And events are deduplicated per source_visit_id and processed in order per visit And an end-to-end test over 100 synthetic visits produces heatmap counts that match expected tag aggregates exactly
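A minimal contract check over the published payload schema; a real deployment would enforce this with a schema registry or JSON Schema in CI, so this dict-based sketch is only illustrative:

```python
# Field names and types from the published contract; the checker itself
# is a simplified stand-in for a real schema-registry validation.
EXPECTED = {"canonical_label": str, "confidence": float,
            "detected_language": str, "source_visit_id": str,
            "source_timestamp": str, "taxonomy_version_id": int}

def validate_event(payload: dict) -> list[str]:
    """Return a list of contract violations; an empty list means the event conforms."""
    errors = [f"missing field: {f}" for f in EXPECTED if f not in payload]
    errors += [f"wrong type for {f}" for f, t in EXPECTED.items()
               if f in payload and not isinstance(payload[f], t)]
    return errors
```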
Room Attribution & Confidence Scoring
"As a broker-owner, I want issues linked to the exact rooms they affect so that I can direct sellers to focus on the spaces that most influence buyer interest."
Description

Build logic to attribute each normalized tag to a specific room or zone using structured inputs (selected room on QR form), NLP from free-text mentions, and listing metadata (room names, floorplan). Support multiple-room attribution when appropriate and compute a confidence score per assignment. Handle aliases (e.g., “primary bedroom” vs. “master”) and ambiguous references with fallback to “whole home.” Expose attribution and confidence as fields for visualization and filtering.

Acceptance Criteria
Attribution from Structured Room Selection
Given a tag submission where the QR form room selection is "Kitchen" and the normalized tag is "dated flooring" When the attribution engine processes the submission Then the tag is attributed to the room with id matching Kitchen from the listing metadata And attribution.scope is "room" And attribution.rooms contains exactly one entry with name "Kitchen" and confidence >= 0.90 and <= 1.00 And attribution.methods includes "form" And the stored record exposes attribution.rooms[0].id, attribution.rooms[0].name, attribution.rooms[0].confidence as retrievable fields
Alias and Synonym Resolution in NLP
Given a tag submission with no room selected on the form and free text "master feels dark in the afternoon" And the listing metadata contains a room with name "Primary Bedroom" (id: RB1) and aliases include ["master bedroom", "owner's suite"] When the attribution engine processes the submission Then the tag is attributed to room id RB1 ("Primary Bedroom") And attribution.scope is "room" And attribution.rooms has one entry for "Primary Bedroom" with confidence >= 0.80 And attribution.methods includes "nlp" and "metadata" And the output does not create a separate room for the alias term (no duplicate rooms)
Multi-Room Attribution with Confidence Distribution
Given a tag submission where the QR form room selection is "Kitchen" and free text states "kitchen and dining are cramped" And the listing metadata contains rooms "Kitchen" (id: K1) and "Dining Room" (id: D1) When the attribution engine processes the submission Then attribution.scope is "multi_room" And attribution.rooms contains entries for K1 and D1 only And each listed room has a confidence > 0 and <= 1.00 And the sum of all room confidences equals 1.00 +/- 0.01 And the confidence for K1 (form-selected room) is greater than or equal to the confidence for D1 And attribution.methods includes both "form" and "nlp"
Ambiguity Handling and Whole-Home Fallback
Given a tag submission with no room selected and free text "overall lighting is poor" (no specific room terms detected) When the attribution engine processes the submission Then attribution.scope is "whole_home" And attribution.rooms contains exactly one entry with id "whole_home", name "Whole Home", and confidence = 1.00 And attribution.methods includes "nlp" And no specific room ids from the listing appear in attribution.rooms
Use of Listing Metadata for Disambiguation
Given a tag submission with no room selected and free text "living area carpet is dated" And the listing metadata has rooms ["Living Room" (id: LR1)] only When processed Then attribution.scope is "room" And attribution.rooms contains exactly one entry for LR1 with confidence = 1.00
Given a tag submission with no room selected and free text "living area carpet is dated" And the listing metadata has rooms ["Living Room" (id: LR1), "Family Room" (id: FR1)] When processed Then attribution.scope is "multi_room" And attribution.rooms contains entries for LR1 and FR1 only And each confidence is > 0 and the sum equals 1.00 +/- 0.01 And the chosen rooms are derived from a metadata-backed synonym map that links "living area" to both candidates
Exposure of Attribution Fields for Heatmap & Filtering
Given a stored tag attribution result When retrieved via the API Then the response includes fields: attribution.scope ∈ {"room","multi_room","whole_home"}, attribution.rooms[] (each with id, name, confidence as a float in [0,1] rounded to two decimals), and attribution.methods[] ∈ {"form","nlp","metadata"} And all required fields are non-null
Given multiple attributed tags exist When the client requests tags filtered by room_id = "K1" and min_confidence = 0.70 Then only tags where attribution.rooms contains an entry with id "K1" and confidence >= 0.70 are returned And tags not meeting the threshold are excluded
Impact Weighting Engine
"As a listing agent, I want impact-weighted scores of issues so that I can prioritize fixes that will most increase buyer interest."
Description

Create an engine that calculates an impact score per room-issue pair by combining frequency, sentiment intensity, recency, and buyer intent signals (e.g., saved interest, second visit indicator). Support configurable weights at the organization and listing level and apply decay functions over time to emphasize recent showings. Output normalized 0–100 impact scores and rank-ordered lists for downstream UI and reporting.

Acceptance Criteria
Score Normalization and Determinism
Given an input set of room–issue feedback with valid frequency, sentiment, recency, and buyer-intent signals When the engine computes impact scores Then every returned score is a numeric value between 0 and 100 inclusive, rounded to one decimal place And no score is null, NaN, or Infinity And running the computation twice with identical inputs and weights yields identical scores and ranks And increasing any single component while others remain constant does not decrease the score
Weight Configuration and Override Precedence
Given organization-level default weights and listing-level override weights When both are defined Then the listing-level weights take precedence for that listing while unspecified weights fall back to organization defaults And the effective weights are validated to sum to 1.0 ± 0.01; otherwise the engine rejects the configuration with a clear error And after a weight change is saved, the next computation for that listing uses the new weights within 60 seconds And reverting listing-level weights restores organization defaults for that listing
Recency Decay with Configurable Half-Life
Given a configured exponential decay with half-life H days And two otherwise identical visit events separated by H days When computing their contributions Then the newer event contributes exactly 2× the weight of the older event And an event 2H days older contributes 25% of the weight of a new event And changing H to H' updates contributions accordingly on the next compute And disabling decay (H = ∞) yields equal contributions regardless of age
Buyer Intent Signal Integration and Missing-Data Handling
Given buyer intent signals (e.g., saved_interest flag, second_visit indicator) present in the input When these signals are true for a visit Then the per-visit contribution is increased according to the configured intent weights and factors within ±0.1 of the expected value And when any intent signal is missing or null Then the engine treats it as neutral (no uplift) without failing the computation And with all else equal, a visit with intent signals set to true yields a strictly higher impact contribution than one with them set to false
Frequency and Sentiment Monotonicity
Given two room–issue pairs with identical inputs except frequency When one has double the frequency of tagged occurrences Then its impact score is higher by an amount consistent with the configured frequency weight within ±0.1 And given two otherwise identical inputs with sentiment intensity s2 > s1 Then the pair with s2 yields a strictly higher score, proportional to the sentiment weight within ±0.1 And when frequency is zero across all visits Then the resulting score is 0 regardless of other components
Rank-Ordered Output and Tie-Breaking
Given a listing with multiple room–issue pairs When the engine outputs results Then the list is sorted in descending order by impact score And ties are broken deterministically by higher recency-weighted component, then by room name ascending, then issue tag ascending And the output includes for each pair: room, issue tag, normalized score, rank position, last_updated timestamp And requesting top K returns exactly K items if available and preserves the same relative order as the full ranking
Incremental Update and Performance SLAs
Given a new showing feedback event for a listing with up to 500 room–issue pairs When the engine ingests the event Then impacted scores and the ranking are recomputed and available within 10 seconds end-to-end And P95 compute latency for a full recompute of 500 pairs is ≤ 500 ms And a batch recompute for a listing with 5,000 visits completes in ≤ 2 minutes And computations are idempotent such that reprocessing the same event does not change outputs
Interactive Heatmap Visualization
"As an agent on the go, I want a clear visual heatmap of room-level issues so that I can instantly see where a property is losing buyer interest."
Description

Deliver a responsive UI component that renders rooms as tiles or on an optional floorplan overlay, colored by impact score with accessible gradients. Provide tooltips that show top issues, counts, and trend arrows; allow toggling between impact, frequency, and sentiment views. Ensure mobile and desktop parity, and provide loading states and empty-state guidance. Integrate with TourEcho’s listing page, reading from the analytics API and updating as new data arrives.
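The continuous score-to-color mapping behind the tiles can be sketched as a linear interpolation between two endpoint colors. The endpoint colors and the two-stop linear gradient are assumptions for illustration; the shipped palette must additionally satisfy the contrast and colorblind-safety criteria.

```python
# Illustrative gradient mapping; endpoint colors are placeholders, not the
# product palette.
LOW = (59, 130, 246)    # cool blue for low impact
HIGH = (239, 68, 68)    # warm red for high impact

def tile_color(score: float) -> str:
    """Map a 0-100 impact score onto a continuous two-color gradient."""
    t = max(0.0, min(score, 100.0)) / 100.0   # clamp, then normalize to [0, 1]
    r, g, b = (round(lo + t * (hi - lo)) for lo, hi in zip(LOW, HIGH))
    return f"#{r:02x}{g:02x}{b:02x}"
```

The same function can back the legend: sample it at the min, median, and max scores of the current listing.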

Acceptance Criteria
Color-Coded Room Tiles and Floorplan Overlay
Given a listing with per-room impact scores (0–100) and an optional floorplan, When the heatmap renders, Then each room tile/overlay is colored by a continuous, accessible gradient and a legend shows min/median/max values. Given high-contrast mode or color-vision deficiencies, When the gradient is applied, Then tiles maintain ≥3:1 contrast against background and an optional pattern overlay can be toggled on. Given a floorplan is available, When the user switches to overlay mode, Then each room polygon aligns to its data within ±8px of its intended position and no overlays overlap. Given no floorplan is available, When the component loads, Then a tiled layout is displayed without errors and with the same color mapping. Given 30 rooms of data, When the component renders, Then initial render completes in ≤2 seconds on a mid-tier device.
View Toggle: Impact vs. Frequency vs. Sentiment
Given the default view is Impact, When the user selects Frequency or Sentiment, Then colors, legend labels/units, sort order (if applied), and tooltips update within 200 ms to reflect the selected metric. Given a user changes the view, When they navigate within the listing page and return, Then the last-selected view persists for the session. Given keyboard-only interaction, When focus is on the toggle group, Then Left/Right arrows change selection and Enter/Space activates the focused option, with the active option visibly indicated and announced to screen readers. Given the user switches views, When a tooltip is open, Then the tooltip content updates to match the selected view without closing.
Tooltip Details with Issues, Counts, and Trends
Given a user hovers (desktop) or taps (mobile) a room, When the tooltip opens, Then it shows room name, top 3 issue tags for the current view, each tag’s count, and a 7-day trend arrow (up/down/flat) with percentage delta. Given the viewport edge proximity, When the tooltip would overflow, Then it repositions automatically to remain fully visible. Given mobile viewport <768 px, When a user taps a room, Then details appear as a bottom sheet with a clear close control and swipe-to-dismiss. Given keyboard focus on a room, When Enter/Space is pressed, Then the tooltip opens; and pressing Escape closes it; moving focus away closes it. Given network updates arrive while a tooltip is open, When the underlying room data changes, Then the tooltip updates its values and shows a brief “Updated just now” indicator.
Real-Time Data Updates from Analytics API
Given the listing page provides listingId and auth context, When the component mounts, Then it requests the analytics for that listing and renders the returned data. Given a live connection (WebSocket or 30 s polling), When new analytics data arrives, Then the heatmap updates within 2 seconds without full re-render of the page and preserves the user’s scroll/selection state. Given an update occurs, When values change, Then a subtle refresh indication appears for ≤800 ms on changed rooms only. Given an update attempt fails, When the network/API error occurs, Then the component retries with exponential backoff up to 5 attempts and logs a non-blocking error event. Given the data is older than 15 minutes, When the component renders, Then a “Last updated <time>” timestamp is visible in the header.
Responsive Mobile and Desktop Parity
Given viewport ≥1200 px, When the heatmap renders, Then the grid displays 4–6 columns (auto-fit) with tiles ≥140×120 px; at 768–1199 px it displays 3 columns; at <768 px it displays 2 columns, maintaining legible labels. Given a touch device, When using the floorplan overlay, Then pinch-to-zoom (up to 3×) and pan within bounds are supported at ≥50 FPS on mid‑tier devices, with hit targets ≥44×44 px. Given devicePixelRatio ≥2, When tiles and overlays render, Then vector/SVG assets are used to avoid blurriness. Given orientation change, When rotating the device, Then layout reflows without visual glitches and preserves current view/selection.
Loading, Empty, and Error States
Given the initial analytics request is in flight, When the component mounts, Then skeleton tiles and a disabled toggle group display until data resolves or fails. Given the API returns 200 with no room analytics, When the component renders, Then an empty state appears with guidance text and a link to “How to collect feedback,” and no error styling is shown. Given a 4xx/5xx API response, When the request fails, Then an inline error banner with a Retry action appears; retrying re-attempts the request and hides the banner on success. Given partial data (some rooms missing), When rendering the heatmap, Then available rooms render normally and a notice shows the count of rooms without data.
Accessibility: Keyboard, Screen Readers, and Contrast
Given keyboard-only usage, When navigating the component, Then the toggle group, each room tile/overlay, legend, and close controls are reachable in logical tab order; Enter/Space activates; Escape closes overlays. Given a screen reader, When focus lands on a room, Then an aria-label announces the room name and current metric value (with unit and view), and toggles announce state via aria-pressed. Given color accessibility requirements, When default styles are applied, Then tooltip text has ≥4.5:1 contrast, tiles have ≥3:1 contrast against background, and the palette is colorblind-safe or can be augmented with patterns. Given focus changes, When components appear/disappear (e.g., bottom sheet), Then focus is trapped within and returned to the invoking control on close.
Drill-Down & Advanced Filtering
"As a team lead, I want to drill into the comments behind a hot spot and filter by time and buyer type so that I can validate the signal and coach my agents."
Description

Enable click-through from any heatmap cell to a detailed panel with raw comments, timestamps, visit segments (first-time vs. repeat), and buyer type. Provide filters for date range, issue tag, sentiment, tour type, and agent team. Include search, pagination, and export of the filtered detail. Maintain filter state across navigation and support shareable, permissioned URLs.
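One way to encode the filter state into a shareable URL is an opaque, URL-safe token over canonicalized JSON, sketched below. This is an assumption-laden illustration: the function names and token scheme are hypothetical, and a real implementation would keep the state server-side behind a random identifier so that permission checks and link expiry can be enforced on every open.

```python
import base64
import json

# Hypothetical state-token sketch; field names and scheme are assumptions.
def encode_state(state: dict) -> str:
    """Canonicalize the filter state and encode it as a URL-safe token."""
    raw = json.dumps(state, sort_keys=True, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

def decode_state(token: str) -> dict:
    """Restore padding and decode a token back into the filter state."""
    padded = token + "=" * (-len(token) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

Because the token carries only filter identifiers, sort, and pagination, it satisfies the "no PII or raw comments in the URL" criterion; comments are fetched server-side after the permission check.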

Acceptance Criteria
Heatmap Cell Drill-Down Panel
Given a user is viewing the Issue Heatmap for a listing When the user clicks a heatmap cell Then a detail panel opens within 500 ms and reflects the selected room and issue tag And the panel lists visit records showing raw comments, timestamps (listing local timezone), visit segment (first-time or repeat), and buyer type And the default filters are pre-set to the clicked room and issue tag And if no records match, an empty state message displays without errors And closing the panel returns focus to the originating heatmap cell
Advanced Filters Presence and Behavior
Given the detail panel is open Then filter controls are visible for: date range, issue tag (multi-select), sentiment, tour type, and agent team When any filter is changed Then results update within 1 second and the total result count refreshes And AND logic applies across different filter types; OR logic applies within multi-selects And active filters are shown as removable chips And a Clear All action resets all filters in one step
Full-Text Search Within Detail
Given zero or more filters are applied When the user enters at least 2 characters in the search input Then results are limited by case-insensitive full-text match across raw comments and issue tags And matched terms are highlighted in the comment text And the search respects all current filters And if no results match, an empty-state message is shown And input is debounced by 300 ms to prevent excessive requests
Pagination and Sorting Controls
Given the detail panel shows more results than one page Then results are sorted by timestamp (newest first) by default And the user can change sort to timestamp (oldest first) and sentiment (most negative to most positive) And pagination supports page sizes of 25, 50, or 100 and next/previous navigation And the UI displays the total count and current range (e.g., 1–25 of N) And navigating pages does not repeat or skip records And page number, page size, and sort selection persist when navigating away and back within the listing
Export Filtered Results
Given any combination of filters and/or search is active When the user initiates Export Then only the currently filtered results are exported And the user can choose CSV or XLSX format And each row includes: timestamp, room, issue tag(s), raw comment, sentiment label and score, tour type, visit segment, buyer type, agent team, visit ID And exports are capped at 10,000 rows; larger sets trigger an async export with user notification And the exported file name includes the listing identifier and export timestamp And permission checks prevent export by unauthorized users
Filter and View State Persistence
Given the user has set filters, search, sort, page number, and page size When the user navigates away and returns to the detail view within 24 hours in the same browser session Then the prior state is automatically restored for that listing And a Reset All control clears the saved state And state is preserved across browser back and forward navigation And a page refresh does not clear the state
Shareable, Permissioned URLs
Given a current combination of filters, search, sort, and pagination When the user copies a share link Then the generated URL encodes the current state and reproduces identical results when opened And only users with access to the listing and team can view; unauthorized users see an access denied screen And the URL contains no PII or raw comments, using only opaque identifiers and encoded state And shared links expire after 7 days by default unless renewed And link opens are audit-logged with user, time, and listing
Shareable Report & Export
"As an agent, I want to share a concise heatmap report with my seller so that they can quickly understand and approve the most impactful fixes."
Description

Add one-click export of the current heatmap and top issues into a branded PDF/PNG and a secure, time-limited web link for sellers. Include summary insights, top three fix recommendations, and before/after trend snapshots. Ensure exports respect active filters and masking rules and are optimized for email and print.
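The export filename convention in the criteria (`<address>-heatmap-<YYYYMMDD>-<HHmm>-v1.pdf`) can be sketched as a small helper. The slugging rules for the address are an assumption; only the overall pattern comes from the spec.

```python
import re
from datetime import datetime

# Sketch of the export naming convention; address slugging is illustrative.
def export_filename(address: str, when: datetime,
                    fmt: str = "pdf", version: int = 1) -> str:
    """Build <address>-heatmap-<YYYYMMDD>-<HHmm>-v<version>.<fmt>."""
    slug = re.sub(r"[^a-z0-9]+", "-", address.lower()).strip("-")
    return f"{slug}-heatmap-{when:%Y%m%d}-{when:%H%M}-v{version}.{fmt}"
```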

Acceptance Criteria
One-Click Branded PDF Export
Given an authenticated listing agent is viewing the Issue Heatmap with active filters When the agent selects Export > Branded PDF Then a PDF is generated within 8 seconds and downloads automatically And the PDF contains the current heatmap, top issues list, summary insights, top three fix recommendations, and before/after trend snapshots And the PDF applies brokerage branding (logo and primary color) configured for the listing And the filename follows <address>-heatmap-<YYYYMMDD>-<HHmm>-v1.pdf And total file size is ≤ 5 MB for listings with ≤ 10 rooms and ≤ 50 issue tags And images render at ≥ 300 DPI and text remains selectable And the first page footer displays export time (with timezone) and an applied filter summary
One-Click PNG Export of Current Heatmap View
Given an authenticated listing agent is viewing the Issue Heatmap with active filters When the agent selects Export > PNG Then a single PNG of the current heatmap viewport is generated within 5 seconds and downloads automatically And the image includes the heatmap legend and room labels visible on screen And the image pixel density is 2x the viewport (retina) with maximum width 2200px And file size is ≤ 2 MB And the filename follows <address>-heatmap-view-<YYYYMMDD>-<HHmm>.png And transparent outer margins of 24px are applied for clean embedding
Secure Time‑Limited Share Link for Sellers
Given an authenticated listing agent chooses Share > Create Link When the agent selects an access duration (default 24 hours) and clicks Create Link Then the system returns an HTTPS URL containing a random token with ≥ 128 bits of entropy And the link loads without authentication and displays the seller-facing report equivalent to the PDF And the link expires precisely at the configured duration and thereafter returns HTTP 410 Gone And the agent can revoke the link at any time; after revocation the URL returns HTTP 410 Gone And all traffic to the link is over TLS 1.2+ and is not indexed (robots noindex, nofollow) And opens are logged with first_open_at, last_open_at, and opens_count (IP addresses masked to /24)
Filter and Masking Fidelity in Exports
Given the Issue Heatmap is filtered by a date range, room subset, and issue tags with masking enabled for buyer identities and free-text comments When the agent generates a PDF, PNG, or Share Link Then the exported counts and visuals exactly match the on-screen heatmap and top issues (0 variance allowed) And masked identities and redacted text remain masked in all exported formats And private/internal-only fields are excluded by default And the export header includes a human-readable summary of applied filters and masking state And a QA check comparing on-screen totals to exported totals passes for at least 10 representative datasets
Summary Insights and Top Three Fix Recommendations
Given visit and feedback data are available for the listing When the agent generates a PDF or Share Link Then a Summary section displays total visits, overall sentiment score, top 5 rooms by impact, and top 5 issues by frequency/impact And a Fix Recommendations section lists exactly three prioritized actions with estimated impact range and relative effort (Low/Med/High) And each recommendation cites supporting evidence (rooms/issues/tags) visible in the report And if fewer than three viable recommendations exist, only the available ones are shown with an "Insufficient data for additional recommendations" note And recommendation ordering is deterministic for identical input data
Before/After Trend Snapshots
Given there is data in at least two time periods (e.g., previous 14 days vs. latest 14 days) for the listing under the current filters When the agent generates a PDF or Share Link Then a Trends section displays before/after snapshots with labeled date ranges and deltas for interest score, top issue frequency, and room impact And charts use the same filters as the main heatmap and update accordingly And if no prior period exists, the section displays "No prior period available" without error And deltas are color-coded (improved/worsened) and include percentage and absolute change
Email and Print Optimization
Given a PDF or PNG has been generated When attached to an email or printed on Letter/A4 Then PDF attachment size is ≤ 5 MB and PNG size is ≤ 2 MB by default settings And layout fits Letter and A4 with ≥ 0.5 in margins and page numbers included And color palette remains legible in grayscale and meets WCAG AA contrast for text ≥ 12pt And clickable links (e.g., Share Link URL) are active in the PDF And print output contains no clipped content or orphaned headings across page breaks
Near-Real-Time Updates & Caching
"As a busy agent during an open house, I want the heatmap to update almost immediately as feedback comes in so that I can adjust talking points in real time."
Description

Implement incremental processing and caching so new feedback updates heatmap scores within seconds without overloading services. Use event-driven pipelines to recompute only affected room-issue pairs and push updates to subscribed clients via websockets. Include cache invalidation, rate limiting, and graceful degradation when upstream services are delayed.
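The incremental flow described above, where only the room-issue pairs touched by a feedback event are recomputed and their cache entries refreshed, can be sketched in a few lines. The data shapes, the in-process dicts standing in for a real cache and event store, and the mean-sentiment aggregate are all illustrative assumptions.

```python
from collections import defaultdict

# In-process stand-ins for the real cache and event store (illustrative).
cache: dict[tuple[str, str, str], float] = {}
events_by_pair: dict[tuple[str, str, str], list[float]] = defaultdict(list)

def ingest(listing: str, feedback: list[dict]) -> set[tuple[str, str, str]]:
    """Apply one feedback event; return the (listing, room, issue) keys
    that were recomputed. Unaffected pairs are never touched."""
    dirty = set()
    for item in feedback:
        key = (listing, item["room"], item["issue"])
        events_by_pair[key].append(item["sentiment"])
        dirty.add(key)
    for key in dirty:                       # recompute only affected pairs
        scores = events_by_pair[key]
        cache[key] = round(sum(scores) / len(scores), 3)
    return dirty
```

The returned `dirty` set doubles as the diff payload: only those keys need to be pushed to subscribed WebSocket clients and invalidated in the snapshot cache.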

Acceptance Criteria
P95 Near-Real-Time Heatmap Update After New Feedback
Given a new feedback event with room-issue tags for listing L and at least one subscribed client When the event is ingested by the event pipeline Then the end-to-end latency from event ingest timestamp to WebSocket message receipt on the client is <= 3s at P95 and <= 8s at P99 under load of up to 5 events/sec per listing and 50 concurrent subscribers And the WebSocket payload includes only impacted room-issue pairs and an updatedAt timestamp that is strictly monotonic per pair And no REST polling is required for subscribed clients to see the update
Incremental Recompute Limited to Affected Room-Issue Pairs
Given a feedback event that affects K distinct room-issue pairs for listing L (where 1 <= K <= 10) When the recompute job runs Then exactly K room-issue aggregates are recomputed and persisted, and no unaffected pairs are recalculated And no full-listing recompute is triggered for K <= 10 And the before/after values for unaffected pairs remain byte-identical And the total compute time grows O(K) with K (±20%) as observed over 30 trials
WebSocket Subscription, Delivery, and Recovery
Given a client subscribed to listing L’s heatmap via WebSocket When updates occur Then each update is delivered as a diff message within the latency SLA and contains {sequenceNumber, version, lastUpdatedAt} And the server sends heartbeats every 30s and declares the connection dead after 2 missed heartbeats And upon reconnect with lastAckedSequence, the client receives all missed diffs in order or a full snapshot if the gap > 100 diffs, within 2s And message delivery is at-least-once with client-side de-dup using sequenceNumber
Cache Invalidation and Snapshot Freshness
Given a cached heatmap snapshot for listing L When a new feedback event creates/updates/deletes tags for specific room-issue pairs Then only the cache entries for those pairs are invalidated/updated within 200ms of persistence And subsequent reads return updated values without requiring a full cache flush And under steady-state read load, cache hit rate is >= 80% over a 10-minute window And when upstream is healthy, no client is served a snapshot older than 60s (stale=false)
Rate Limiting and Backpressure Coalescing
Given bursts of feedback events or many subscribers for listing L When publish rates exceed thresholds (e.g., > 5 heatmap updates/sec per connection or > 10 subscription attempts/min per client per listing) Then the system applies rate limiting: REST endpoints return 429 with retry-after, and WebSocket sends a rate_limited control frame And updates are coalesced so that at most 1 heatmap diff/sec is pushed per connection during throttle while ensuring the latest state is delivered within 5s after the burst subsides And no more than 100 pending update messages are queued per connection (newest-drop policy)
Graceful Degradation During Upstream Delays
Given upstream services (tagger/summarizer or message broker) are delayed or unavailable When the delay exceeds 5s Then clients receive the last-known cached heatmap snapshot with stale=true and lastUpdatedAt included, within 2s And heartbeats continue over WebSocket, and no more than 1% of client requests return 5xx during the incident window And upon recovery, the latest snapshot/diffs are pushed within 3s and stale reverts to false without requiring client refresh

Snap‑to‑Task

Convert any tagged issue in a photo directly into an actionable task with location, scope, and suggested vendors prefilled. Auto-links to Impact Rank and ROI Gauge, turning raw feedback into clear next steps sellers can approve with one tap.

Requirements

Photo Tagging & Annotation UX
"As a listing agent, I want to quickly mark issues in a photo and tag their room and type so that I can convert them into precise tasks without manual data entry."
Description

Provide in-app camera and photo import with an annotation interface to mark issues (tap-to-tag, bounding boxes) and select standardized tags (issue type, room/location, severity). Associate each photo with a listing/showing, capture EXIF/location when available, and support offline capture with later sync. Ensure quick, low-friction tagging that normalizes inputs for downstream automation.
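The normalized tag schema from the criteria ({issueTypeId, roomId, severity, geometry:{type: box|point, coords}, note?}) can be checked with a small validator. This sketch is illustrative; the specific error messages and the four-number box representation are assumptions.

```python
# Sketch validating the normalized tag schema; checks are illustrative.
VALID_SEVERITIES = {"Low", "Medium", "High"}

def validate_tag(tag: dict) -> list[str]:
    """Return schema violations for one tag (empty list means valid)."""
    errors = []
    for field in ("issueTypeId", "roomId", "severity", "geometry"):
        if field not in tag:
            errors.append(f"missing {field}")
    if tag.get("severity") not in VALID_SEVERITIES:
        errors.append("severity must be Low, Medium, or High")
    geom = tag.get("geometry", {})
    if geom.get("type") not in {"box", "point"}:
        errors.append("geometry.type must be box or point")
    elif geom["type"] == "box" and len(geom.get("coords", [])) != 4:
        errors.append("box geometry requires 4 coords")
    note = tag.get("note")
    if note is not None and not (5 <= len(note) <= 140):
        errors.append("note must be 5-140 characters")
    return errors
```

Running this at save time keeps free text out of the normalized IDs, which is what lets the downstream prefill engine consume tags without manual cleanup.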

Acceptance Criteria
Camera capture with instant annotation overlay
- Given the user has an active listing/showing context and camera permission granted When the user taps "Camera" and snaps a photo Then the annotation canvas opens within 1 second with the captured photo and tools (tap-to-tag, bounding box) visible - Then the photo is auto-associated to the current listing and showing - And the user can add at least 3 separate issue tags on the photo, each with an independent bounding box - And the user can move/resize bounding boxes with <100 ms interaction latency - And tapping Save persists the photo and all tags locally (if offline) or to server (if online) and returns to the previous screen
Photo import with per-image tagging
- Given the user selects 1–10 images from device gallery When the import flow starts Then each imported photo opens in the annotation flow sequentially with the same tools as camera capture - And per-photo default association is the active listing/showing; the user can change it before saving - And the user can skip a photo, which is saved without tags only after confirming "Save without tags" - And batch save completes and shows a toast "Saved X of Y" with failures clearly listed - And images larger than 15 MB are downscaled client-side before upload without altering tag coordinates
Standardized issue tagging with bounding boxes and validation
- Given a controlled vocabulary for Issue Type, Room/Location, and Severity (Low/Medium/High) When the user adds a tag Then the user must select Issue Type and Room/Location before Save is enabled - And Severity defaults to Medium and can be changed - And a bounding box is required for any tag marked as "surface/area" issue types; a point tag is allowed for "fixture/object" types - And selecting "Other" Issue Type requires a note of 5–140 characters - And saved tag data conforms to schema: {issueTypeId, roomId, severity, geometry:{type:box|point, coords}, note?} with normalized IDs (no free text)
EXIF/location capture and room/location fallback
- Given the photo has EXIF GPS with accuracy ≤50 m When the photo is saved Then GPS lat/long and timestamp are stored with the photo record - And if EXIF GPS is absent or older than 24 hours Then no GPS is stored and the user-selected Room/Location is required - And device timezone is recorded and UTC timestamp is computed - And orientation is normalized so bounding boxes align after any rotation
Offline capture and deferred sync to listing/showing
- Given the device has no network connectivity, When the user captures/imports and saves tagged photos, Then all data is written to an encrypted local queue and marked "Pending Sync" in the UI
- And the items auto-sync within 10 seconds of connectivity restoration
- And if the original listing/showing is no longer active at sync time, Then the user is prompted to re-associate before finalizing
- And retries back off exponentially up to 6 attempts, with a user-visible error after the final failure
- And no data loss occurs across app restarts; pending items persist
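The retry rule above fixes only the attempt count (6) and the exponential shape. The delay schedule below is a sketch; the base delay and cap are assumptions.

```python
# Illustrative backoff schedule for the deferred-sync retries described above.
# Only the attempt count (6) comes from the spec; base and cap are assumed.
def backoff_delays(attempts: int = 6, base: float = 2.0, cap: float = 300.0) -> list[float]:
    """Exponential delays in seconds: base * 2**n, clamped to a cap."""
    return [min(base * (2 ** n), cap) for n in range(attempts)]
```

With the assumed 2-second base, the six waits are 2, 4, 8, 16, 32, and 64 seconds; a lower cap would flatten the tail.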
Low-friction tagging performance and tap-efficiency
- Given a baseline device class defined by Product (mid-tier Android 2023 and iPhone 12 equivalent), When a user takes a photo and adds a single tag with Issue Type and Room/Location, Then the median time (p50) from shutter to saved first tag is ≤6 seconds and p95 is ≤10 seconds
- And the number of taps to create the first complete tag (including bounding box) is ≤3 when the default Severity applies
- And median interaction latency for drawing/moving a bounding box is ≤100 ms
- And the first tag form shows Issue Type and Room/Location pickers pre-focused to recent selections to minimize input
Task Auto‑Prefill Engine
"As an agent, I want tasks to be prefilled from photo tags so that setup takes seconds and I avoid inconsistent task details."
Description

Transform tagged issues into standardized, actionable tasks by auto-filling title, location, scope of work, unit count, material notes, default due date, and priority. Derive vendor category, cost/time ranges, and recommended next steps using rules and historical data. De-duplicate identical issues across photos, handle uncertain inputs with confidence indicators, and allow quick edits before saving.

Acceptance Criteria
Auto-fill Core Task Fields
Given a tagged issue is selected from a photo within a listing, When Snap‑to‑Task is invoked, Then the task is prefilled with Title, Location, Scope of Work, Unit Count, Material Notes, Default Due Date, and Priority. Then Default Due Date is set to within 3 calendar days of creation unless a listing SLA exists, in which case it uses the SLA-defined due date. Then Priority is mapped from Impact Rank: High if Impact Rank ≥ 8/10, Medium if 5–7, Low if ≤ 4. Then Unit Count equals the counted instances from the tag metadata; if absent, default to 1. Then all prefilled fields are visible in the edit modal within 500 ms of invocation.
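The Priority mapping above is simple enough to express directly. A minimal sketch, assuming integer Impact Ranks on the 1–10 scale the criterion uses:

```python
# Priority mapping from Impact Rank per the criterion above:
# >= 8 -> High, 5-7 -> Medium, <= 4 -> Low.
def priority_from_impact(rank: int) -> str:
    if rank >= 8:
        return "High"
    if rank >= 5:
        return "Medium"
    return "Low"
```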
Derive Vendor, Estimates, and Next Steps
Given the issue category and scope are identified, When the task is prefilled, Then Vendor Category is derived using rules and historical data; if no historical data exists, a rule-based fallback is applied and labeled as such. Then Cost Estimate and Time Estimate are presented as ranges with min–max and units (currency code and hours), each accompanied by a confidence percentage. Then a Recommended Next Step is included with at least one actionable option (e.g., Request quotes) and a one-tap seller approval control. Then the estimates display their basis as Historical (with sample size ≥ 10) or Rule-based. Then the task auto-links to Impact Rank and ROI Gauge for downstream prioritization.
Cross-Photo De-duplication
Given multiple photos contain the same issue in the same listing and room, When Snap‑to‑Task generates tasks, Then only one task is created for that issue. Then duplicates are detected when the issue label matches and semantic similarity ≥ 0.80 and the inferred room-level location matches. Then the resulting task aggregates unit count across merged items and stores references to all source photos and tags. Then the UI indicates the merge (e.g., Merged from X photos) and allows the user to split into separate tasks before saving.
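The duplicate rule above (matching label, matching room, semantic similarity ≥ 0.80) can be prototyped as a predicate. `difflib`'s character-level ratio stands in here for a real semantic-similarity model, purely for illustration; the field names are assumptions.

```python
# Sketch of the cross-photo duplicate test described above. A production
# system would use an embedding-based similarity; difflib is a stand-in.
from difflib import SequenceMatcher

def is_duplicate(a: dict, b: dict, threshold: float = 0.80) -> bool:
    # Hard requirements: same issue label and same inferred room
    if a["label"] != b["label"] or a["room"] != b["room"]:
        return False
    # Text similarity on the issue descriptions must clear the threshold
    sim = SequenceMatcher(None, a["text"].lower(), b["text"].lower()).ratio()
    return sim >= threshold
```

Merged tasks would then aggregate unit counts and keep references to every source photo, as the criterion requires.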
Confidence Indicators and Uncertain Inputs
Given fields are auto-derived, When presenting prefilled data, Then each derived field shows a confidence score (0–100%). Then any derived field with confidence < 60% is flagged Needs review and requires explicit user confirmation before saving. Then if a field cannot be inferred, it remains blank with a tappable placeholder and an inline prompt to complete it. Then the Save action is disabled until all required fields pass validation and any Needs review fields are confirmed.
Pre-save Quick Edits and Recalculation
Given the edit modal is open, When the user changes Location or Scope, Then Vendor Category and Cost/Time estimates re-compute within 300 ms and their confidence scores update accordingly. When the user overrides Priority, Then the override persists and is not auto-updated by later recalculations. When the user edits Unit Count, Then Cost/Time ranges scale proportionally when estimates are per-unit; otherwise they remain unchanged and are labeled fixed-scope. Then Cancel discards changes and Save persists the task, with P95 save latency ≤ 1 second on Wi‑Fi.
Validation and Traceability
Given a user attempts to save, When required fields are missing (Title, Location, Scope, Priority, Due Date), Then Save is blocked and inline validation messages identify each missing field. Then Due Date must be today or later; otherwise an error is shown. Then Unit Count must be a positive integer; Material Notes accept up to 500 characters with a visible counter. Then upon successful save, the task stores source photo IDs and tag IDs; for merged tasks, all contributing references are retained. Then an audit trail marks which fields were auto-filled versus user-edited, including timestamps and user ID.
Vendor Suggestion & Scheduling Prep
"As a seller, I want suggested vendors preattached to tasks so that I can approve work confidently without researching providers."
Description

Recommend vendor candidates per task based on category, service radius, availability, pricing, ratings, insurance, and past performance. Support agent-managed preferred vendor lists and brokerage partnerships. Prefill vendor outreach packets with scope, photos, and location; propose tentative time windows; generate a secure vendor preview link. Provide fallbacks when no vendors match.

Acceptance Criteria
Auto-Filter and Rank Vendor Candidates
Given a task with category, property location, and desired service window, When the system searches for vendors, Then only vendors matching the task category are considered. Then vendors whose service radius does not cover the property geolocation are excluded. Then vendors lacking valid insurance on file (unexpired COI) are excluded. Then vendors with average rating below the configured minimum threshold (default 4.0/5) are excluded unless the agent enables "Include below-threshold". Then vendor availability is checked against the desired service window; vendors with overlap are labeled Available; those without overlap are labeled Request Availability and ranked lower. Then vendors are ranked using a weighted score (past performance 40%, rating 30%, pricing competitiveness 20%, availability fit 10%). Then each returned vendor includes a "why selected" explanation listing the matched criteria. Then the default results list shows the top 5 ranked vendors with a control to View All. Given no vendors meet the hard constraints, When the search completes, Then the system displays a zero-match state with options to Expand Radius (+5 miles), Relax Rating (-0.5), Remove Availability Filter, Invite Vendor, and Request Bids to Marketplace.
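The weighted ranking above (past performance 40%, rating 30%, pricing competitiveness 20%, availability fit 10%) reduces to a dot product. The sketch below assumes each input has already been normalized to 0.0–1.0:

```python
# Weighted vendor score per the criterion above. Inputs are assumed to be
# pre-normalized 0.0-1.0 sub-scores; the weights sum to 1.0.
WEIGHTS = {"performance": 0.40, "rating": 0.30, "pricing": 0.20, "availability": 0.10}

def vendor_score(vendor: dict) -> float:
    """Return the weighted score, rounded to 4 places for stable ranking."""
    return round(sum(vendor[k] * w for k, w in WEIGHTS.items()), 4)
```

Ranking is then a descending sort on this score, after the hard filters (category, radius, insurance, rating threshold) have already excluded non-candidates.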
Preferred Vendors and Partnerships Prioritization
Given the agent has a Preferred Vendors list, When vendor candidates are generated, Then preferred vendors that meet hard constraints appear above non-preferred vendors. Then a "Preferred only" toggle filters the list to only preferred vendors. Given the brokerage has partner vendors, When candidates include partners, Then those vendors are labeled Partner and are ranked below Preferred but above Other when scores are within ±5%. Then the agent can exclude Partner vendors via a toggle; excluded partners are removed from the list. Then the rank explanation displays the source (Preferred, Partner, Other). Then any manual reorder by the agent is persisted to the task.
Prefilled Vendor Outreach Packet Generation
Given the agent selects one or more vendors and clicks Prep Outreach, When the packet is generated, Then it includes task scope summary, annotated photos, property location, proposed time windows, budget range (if set), and response deadline. Then seller PII (name, phone, email) is hidden; vendor replies are routed through the platform. Then photos are resized to <= 2048px, stripped of EXIF, and watermarked with Task ID. Then the packet is presented in a preview modal for agent review and edit prior to sending. Then a draft of the packet is auto-saved and can be resumed later.
Tentative Time Window Proposal Logic
Given seller and agent availability windows and property timezone are set, and vendor availability data is present, When proposing times, Then the system proposes at least 3 non-overlapping 2-hour windows that intersect seller, agent, and vendor availability within the next 7 days by default. Then the agent can edit, add, or delete proposed windows before sending. Then time windows account for vendor travel time based on service radius and typical travel durations; infeasible windows are not proposed. Given no common overlap exists, When proposing times, Then the system offers a Request vendor best times option and proposes the top 3 windows satisfying seller and agent only.
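The proposal logic above can be prototyped as interval intersection followed by slot carving. The sketch below uses plain hour offsets instead of timezone-aware datetimes and ignores travel-time padding; both are simplifications.

```python
# Sketch of the 2-hour window proposal: intersect seller, agent, and vendor
# availability, then carve non-overlapping slots. Hours stand in for datetimes.
def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo < hi else None

def propose_slots(seller, agent, vendor, slot_len=2, max_slots=3):
    slots = []
    for s in seller:
        for g in agent:
            sg = intersect(s, g)
            if not sg:
                continue
            for v in vendor:
                common = intersect(sg, v)
                if not common:
                    continue
                start = common[0]
                while start + slot_len <= common[1] and len(slots) < max_slots:
                    slots.append((start, start + slot_len))
                    start += slot_len
    return slots
```

If `propose_slots` returns an empty list, the flow falls back to the "Request vendor best times" path the criterion describes.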
Secure Vendor Preview Link
Given an outreach packet exists, When Generate secure preview link is invoked, Then a unique tokenized HTTPS URL with at least 128 bits of entropy is created and expires in 7 days by default. Then the preview is read-only and includes scope, photos, location map, and proposed time windows; vendor cannot see other candidate vendors. Then access attempts are logged (timestamp, IP, user agent) and visible to the agent on the task. Then the agent can revoke the link; revoked links return HTTP 410 and are inaccessible. Then the page sets noindex and prevents directory listing; the URL is not guessable via sequential IDs.
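The entropy requirement above is easy to satisfy with the standard library: `secrets.token_urlsafe(32)` draws 32 random bytes (256 bits), comfortably above the 128-bit floor, and the result is not guessable via sequential IDs. The URL shape below is an assumption.

```python
# Sketch of secure preview-link generation per the criterion above.
# The path layout is hypothetical; the 7-day default expiry is from the spec.
import secrets
from datetime import datetime, timedelta, timezone

def make_preview_link(base_url: str, ttl_days: int = 7) -> dict:
    token = secrets.token_urlsafe(32)  # 32 random bytes = 256 bits of entropy
    return {
        "url": f"{base_url}/preview/{token}",
        "expires_at": datetime.now(timezone.utc) + timedelta(days=ttl_days),
    }
```

Revocation and the 410 response are server-side concerns: the token row is deleted or flagged, and subsequent lookups fail.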
No-Match Fallbacks and Data Gaps Handling
Given zero vendors match after applying hard constraints, When the search completes, Then the system displays actionable fallbacks: Expand radius, Relax rating threshold, Relax availability filter, Invite a vendor, and Request marketplace bids. Then choosing a fallback re-runs the search with the selected adjustment and records the fallback chosen in the task audit trail. Given vendor data (pricing, ratings, availability, insurance) is stale or unavailable, When building the candidate list, Then the system labels the attribute as Unknown with a staleness timestamp and does not exclude the vendor solely due to unknowns unless the agent sets "Require". Then the agent can override and include a vendor that would otherwise be excluded, and the override is recorded.
Impact Rank & ROI Auto‑Linking
"As a broker-owner, I want each task to show its impact and ROI so that I can prioritize fixes that move the listing faster at the best return."
Description

Automatically link each generated task to the listing’s Impact Rank and ROI Gauge by sending task metadata to existing services. Compute estimated buyer sentiment lift and ROI contribution, and display badges on the task card. Refresh metrics as new showings and feedback arrive, and maintain a change log for transparency and reporting.

Acceptance Criteria
Auto-link on Task Creation
Given a user creates a task via Snap‑to‑Task with a valid listingId and tagged issue metadata When the user taps Save Then the system sends taskId, listingId, issueType, roomLocation, scope, estimatedCost, and createdBy to Impact Rank and ROI Gauge services within 2 seconds at P95 And then the system receives correlationIds and current metrics, and persists them in the task record And then the task status shows "Linked" on success And if no response is received in 5 seconds Then the request is queued and the task status shows "Linking…" without blocking task creation
Badge Display on Task Card
Given a task has linked metrics When the task card is rendered in the listing workspace Then show an Impact Rank badge with rankPosition (1–100) and a 3‑state trend arrow (up, flat, down) and a ROI badge with roiPercent (1 decimal) and roiDollars (localized currency, no cents) And then the badges show a "Last updated" tooltip with ISO 8601 timestamp And then badges are keyboard‑focusable and expose aria‑labels describing current values And then badge rendering adds no more than 100 ms to card render time at P95
Sentiment Lift & ROI Calculation Persistence
Given the metrics services return sentimentLiftPercent and roiContribution (percent and dollars) When the response is saved Then values are rounded (lift to 0.1%, ROI% to 0.1, dollars to whole) and stored on the task And then these values are visible in task detail and included in analytics export And then unit tests verify correct rounding and storage for at least 10 representative cases
Metrics Refresh on New Feedback
Given new showing feedback is ingested for the listing When Impact Rank and ROI Gauge recompute metrics Then all linked tasks refresh badges and stored values within 60 seconds of the recompute event at P95 And then the task shows an "Updated X min ago" timestamp reflecting the refresh And then updates are idempotent and rate‑limited to at most once every 30 seconds per task
Change Log Audit Trail
Given any metric field on a task changes (rankPosition, trend, roiPercent, roiDollars, sentimentLiftPercent) When the change is saved Then a change log entry is appended with timestamp (UTC), source event (e.g., recompute, backfill, manual re‑link), previous value, new value, and correlationId And then the change log is visible in task details and filterable by field and date range And then users with permission can export the change log as CSV for the past 180 days
Failure Handling & Retry
Given the Impact Rank or ROI Gauge service is unavailable or returns a 5xx error When linking or refreshing metrics Then the system retries up to 5 times with exponential backoff over 15 minutes and records each attempt And then the task shows a non‑blocking "Pending metrics" state with a retry count and tooltip explaining the issue And then when the service recovers, queued requests are processed within 2 minutes and task status returns to "Linked" And then failures after max retries raise an alert and are logged for support triage
Backfill and Re‑Sync for Existing Tasks
Given the feature is enabled for an existing listing with pre‑existing tasks When the backfill job runs Then 99% of existing tasks are linked and populated with metrics within 24 hours and the remainder within 48 hours And then a backfill progress indicator is available per listing showing counts of linked, pending, and failed And then any tasks with stale or mismatched listingId are flagged and excluded with reasons logged and exportable
One‑Tap Seller Approval Flow
"As a seller, I want to approve recommended tasks with one tap so that improvements proceed quickly without back-and-forth."
Description

Deliver grouped task summaries to sellers (push/email) with costs, impact, ROI, and vendor suggestions. Enable approve/decline/edit per task and “approve all,” enforce budget caps and expiration windows, and block vendor contact until consent is captured. Record digital approvals with an auditable trail and notify agents of decisions in real time.

Acceptance Criteria
Seller receives grouped task summary via push/email
Given a seller has at least one pending task bundle awaiting approval When TourEcho triggers delivery of the One‑Tap Seller Approval summary Then the seller receives both a push notification and an email within 60 seconds And the message includes property address, bundle name/ID, total estimated cost, and projected aggregate ROI And each task row displays title, room/location, scope, estimated cost range, Impact Rank, ROI Gauge, and up to 3 vendor suggestions And tapping the deep link opens the approval screen showing the identical task set and totals
Per‑task approve/decline/edit and Approve All actions
Given the seller opens the One‑Tap Seller Approval screen Then each task shows controls: Approve, Decline, Edit (scope, vendor selection, cost within provider min/max) When the seller taps Approve on a task Then the task status changes to Approved – Pending Vendor and edits for that task are locked When the seller taps Decline on a task Then the seller must select a reason or enter a note (>=5 characters) and the task status changes to Declined When the seller taps Approve All Then all non‑expired tasks on the screen transition to Approved – Pending Vendor and the totals update accordingly And the running totals (approved, declined, remaining) refresh in under 1 second
Budget cap enforcement across approved tasks
Given a listing‑level budget cap is configured And the approval screen shows the current utilized and remaining budget When the seller attempts to approve one or more tasks that would exceed the cap Then the system blocks the action, displays "Over budget by $Y" where Y is the overage, and disables Confirm until the total fits the cap When the selected approvals fit within the cap Then the approvals succeed and the budget meter updates to reflect new utilized and remaining amounts
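The cap check above reduces to a small arithmetic guard. A minimal sketch, assuming dollar amounts as plain numbers and the "Over budget by $Y" message format from the criterion:

```python
# Budget-cap guard per the criterion above: block and report the overage
# when selected approvals would exceed the cap; otherwise report remaining.
def check_budget(cap: float, utilized: float, selected_costs: list[float]) -> dict:
    total = utilized + sum(selected_costs)
    overage = total - cap
    if overage > 0:
        return {"allowed": False, "message": f"Over budget by ${overage:,.0f}"}
    return {"allowed": True, "remaining": cap - total}
```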
Approval expiration window enforcement
Given each task includes an approval/quote expiration timestamp in the seller’s local timezone When now > expiration Then the Approve control for that task is disabled and the task is labeled Expired – Refresh Needed When now is within 24 hours of expiration Then the task shows an expiring warning badge and tooltip with the exact expiration When the seller requests a refresh on an expired/expiring task Then a refresh request event is recorded and the agent is notified with the affected task IDs
Vendor contact gated by explicit seller consent
Given vendor suggestions are displayed but no seller consent is recorded Then the system must not send any vendor outreach for those tasks When the seller approves a task and confirms Then the system records a consent flag for that task and only then initiates vendor contact according to the selected vendor And if a task is declined or expires before consent is recorded Then no vendor contact is initiated and the agent is notified of the decision
Digital approval audit trail creation and retention
Given the seller performs an approval, decline, edit, or Approve All action When the action is confirmed Then an immutable audit event is appended capturing: seller ID, name, email, auth method, UTC timestamp (ISO‑8601), IP, device type, app/web version, listing ID, task ID(s), prior values, new values (scope/cost/vendor/status), budget cap at action time, totals, and consent flag And authorized agents can retrieve the audit trail for the listing and see an ordered timeline of events And exporting the audit trail to PDF/CSV preserves the same fields and timestamps
Real‑time agent notifications of seller decisions
Given an agent is assigned to the listing When a seller approves, declines, or edits any task (including Approve All) Then the agent receives a push notification within 30 seconds containing property address, counts of approved/declined/edited, and updated budget utilization And an email notification is delivered within 2 minutes with a detailed breakdown and deep link to the updated task list
Feedback Traceability & Room Mapping
"As an agent, I want tasks tied back to specific feedback and rooms so that I can show sellers exactly why a fix matters."
Description

Maintain bi-directional links between tasks and originating showing feedback, including room-level objections and photo evidence. Surface thumbnails and objection excerpts on the task, and provide deep-links between the showing timeline and task detail. Support multiple issues per room, merge/split operations, and exportable summaries for client updates and disclosures.

Acceptance Criteria
Bi‑Directional Linkage Between Task and Feedback
Given a task created from a tagged feedback issue via Snap‑to‑Task, When the task detail loads, Then it displays the originating showing ID, feedback excerpt truncated to 200 characters, room name, and up to 3 photo thumbnails sized to at least 200px on the long edge. Given the task detail, When the user taps "View Origin", Then the app opens the originating showing at the exact feedback item anchor within 1 second on a 20 Mbps connection. Given a feedback item with one or more linked tasks, When the feedback item is viewed, Then a "Linked Tasks" badge with the correct count is shown, and selecting a linked task opens that task detail within 1 second. Given deep-links between task and feedback, When a user lacks permission to view either side, Then the link resolves to an access-gated screen with no sensitive data leakage and a request-access option.
Room‑Level Mapping with Photo Evidence
Given a listing’s Room Map, When a task is linked to a room, Then a pin or chip appears on that room labeled with the task title and Impact Rank, and selecting it shows the feedback excerpt and photo thumbnails. Given a task with photo evidence, When thumbnails are tapped from Room Map or task detail, Then a viewer opens showing the full-resolution image and metadata (timestamp, source showing, annotator) and a back navigation returns to the prior context. Given multiple photos on a task, When the task loads, Then the first three appear as thumbnails and a "+N" indicator appears if more exist. Given a task linked to “Whole Home” instead of a specific room, When viewed on Room Map, Then it appears in a global strip and not within any single room.
Deep‑Link Navigation Between Showing Timeline and Task Detail
Given a shared deep-link to a feedback anchor, When opened on web or mobile, Then it authenticates the user and routes to the Showing Timeline scrolled to the anchor, highlighting it for 3 seconds. Given a shared deep-link to a task ID, When opened, Then it routes to task detail with preserved filter context (if present in query params) and scroll position memory on back navigation. Given an invalid or deleted target, When a deep-link is opened, Then a 404-style screen appears with the ID, timestamp, and support link; no sensitive content is exposed. Deep-link load time is under 1.5 seconds (P95) on a 20 Mbps connection for datasets up to 500 showings.
Multiple Issues Per Room Support
Given a room with N issues (N ≤ 50), When viewed on Room Map or in room detail, Then all issues are listed with unique IDs, titles, Impact Rank, and status; sorting by Impact Rank is default. Given multiple tasks in the same room, When one is updated, Then the room’s list reflects the change in under 2 seconds without page reload. Given issues spanning multiple rooms, When a user assigns a task to a different room, Then the Room Map updates pin location and the feedback link remains intact. Given duplicate detection is enabled, When a new issue is created with >85% text similarity in the same room, Then the user is prompted to merge instead of creating a duplicate.
Merge Operations Preserve Traceability
Given two or more tasks/issues selected in the same listing, When the user merges them, Then the resulting task contains the union of feedback links, photos (deduplicated by hash), a single target room (explicitly selected), and an audit log entry listing merged IDs and actor. Given pre-existing deep-links to any merged task, When those links are opened, Then they redirect (HTTP 301 on web, in-app redirect on mobile) to the surviving task and display a “Merged from [IDs]” note. Given a merge action, When completed, Then no feedback link is orphaned; each originating feedback item still shows exactly one link to the surviving task. Merge execution time is under 3 seconds for up to 10 tasks with ≤30 photos total.
Split Operations Maintain Evidence Integrity
Given a task with multiple issues/evidence, When the user selects Split and chooses a subset of photos and feedback excerpts for a new task, Then the new task is created with correct room assignment, unique ID, and bi-directional links to the chosen feedback items. Given a split, When completed, Then the original task retains only the remaining evidence, and both tasks include an audit log referencing the split operation and actor. Given existing deep-links to the original task, When they are opened, Then they still resolve to the original task; no links silently change to the new task. Split execution time is under 3 seconds for tasks with up to 30 photos and 10 feedback anchors.
Exportable Summaries for Client Updates and Disclosures
Given a selection of tasks (up to 200), When the user exports Client Update (PDF) or Disclosure Summary (CSV), Then the file includes for each task: room name, title, feedback excerpt (≤200 chars), thumbnail (PDF only), Impact Rank, ROI Gauge value, source showing IDs (anonymized buyer), and deep-link/ID references. Given privacy requirements, When exporting, Then buyer-identifying information is excluded or anonymized and a “Generated on [date/time, timezone]” footer is added. Given the export command, When run, Then the file is generated in under 10 seconds (P95) and is downloadable and shareable via link with 7-day expiration. Given a merged/split history, When exporting, Then the audit trail notes (“Merged from…”, “Split from…”) are included under each affected task.
Role-Based Access & Data Privacy Controls
"As a broker-owner, I want access controls and redaction on shared task data so that client privacy is protected while vendors get what they need."
Description

Enforce role-based permissions for creating, editing, and approving tasks. Limit vendor visibility to minimum necessary details with redaction of personal or sensitive information. Encrypt photos and task data at rest and in transit, log all access for compliance, and offer configurable retention and revocation for shared artifacts.

Acceptance Criteria
Vendor-Scoped Task View Is Redacted
Given a task contains seller_name, seller_phone, seller_email, agent_notes, lockbox_code, showing_schedule, and photos with EXIF metadata and human faces When a Vendor-role user retrieves the task via UI or API Then the payload includes only task_id, property_address, approved_scope, due_window, budget_cap, redacted_attachments, and messaging_channel_id And the payload excludes seller_name, seller_phone, seller_email, agent_notes, lockbox_code, showing_schedule, and Impact Rank history And all image responses have faces blurred and EXIF metadata stripped And phone numbers and emails detected in descriptions or image text are masked (e.g., +1-***-***-12, e***@example.com) And direct access to excluded fields or original images returns 403 and creates an audit_log entry with action=deny
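The masking formats above (e.g., `e***@example.com`) might be approximated with regexes. The patterns below are illustrative only, not production-grade PII detection:

```python
# Regex-based masking sketch for the redaction rule above. Patterns are
# deliberately simple; a real redactor would handle far more formats.
import re

EMAIL = re.compile(r"\b(\w)[\w.+-]*@([\w.-]+)\b")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def mask_pii(text: str) -> str:
    # Keep the first letter and domain of emails: jane@acme.io -> j***@acme.io
    text = EMAIL.sub(lambda m: f"{m.group(1)}***@{m.group(2)}", text)
    # Keep only the last two digits of phone numbers
    text = PHONE.sub(lambda m: "***-***-" + re.sub(r"\D", "", m.group(0))[-2:], text)
    return text
```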
RBAC: Task Creation, Edit, Approve, Complete
Given a ListingAgent is authenticated When they create a Snap‑to‑Task from a photo Then the task is created in Draft with vendor_visibility=false and audit_log action=create Given a Seller is authenticated for the property When they approve the task Then task.status=Approved, vendor_visibility=true, and audit_log action=approve Given a Vendor is authenticated for the task When they attempt to edit scope, budget, or due_window Then the action returns 403 and no changes are persisted Given a BrokerOwner is authenticated When they approve or revoke approval Then the action succeeds and audit_log records previous_status and new_status Given a CoListingAgent is authenticated When they edit a task before approval Then the edit succeeds and sets requires_reapproval=false When they edit after approval Then the edit is saved as a new version and sets requires_reapproval=true and vendor_visibility=false until approved
Encryption: Photos and Tasks In Transit and At Rest
Given any client request to API or web When the request uses HTTP Then it is redirected to HTTPS or blocked, and HSTS is enabled with max-age>=15552000 Given any network inspection during upload or download of photos or tasks When TLS is negotiated Then TLS version>=1.2 and strong ciphers are used Given object storage and primary databases When configuration is validated Then server-side encryption AES-256 with KMS is enforced, and keys are rotated every 90 days Given a pre-signed URL is issued for a permitted download When inspected Then expiry<=10 minutes and scope=single-object; requesting after expiry returns 403 Given at-rest data is scanned When raw disk access is attempted without decryption Then photo and task content are unreadable
Comprehensive Access Logging and Audit Export
Given any user views, creates, updates, approves, downloads, shares, or deletes a task or photo When the action completes (success or failure) Then an immutable audit_log entry is written with user_id, role, action, entity_type, entity_id, timestamp_utc, ip, user_agent, outcome, and fields_changed (names only) Given OrgAdmin or BrokerOwner requests an audit export for a date range up to 31 days with <=100,000 events When the export is generated Then a CSV is delivered within 60 seconds and includes a hash_chain_valid=true verification record Given an audit_log entry exists When an attempt is made to modify or delete it via API Then the request is rejected with 405 and an additional audit_log action=attempted_tamper is recorded Given a redaction event occurs (e.g., PII masked, image face blur) When logged Then audit_log includes redaction_type, fields_redacted, and actor_id
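The `hash_chain_valid` verification above implies each audit entry hashes its payload together with the previous entry's hash, so any edit invalidates every later link. A minimal sketch with an assumed field layout:

```python
# Minimal tamper-evident hash chain for the audit log described above.
# Entry fields are illustrative; only the chaining idea is the point.
import hashlib
import json

def chain_hash(prev_hash: str, entry: dict) -> str:
    """Hash the entry payload plus the previous hash (canonical JSON)."""
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(entries: list[dict]) -> bool:
    prev = "0" * 64  # genesis value
    for e in entries:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["hash"] != chain_hash(prev, body):
            return False
        prev = e["hash"]
    return True
```

Editing any earlier entry changes its hash, which breaks every subsequent link, so the export's `hash_chain_valid` flag would flip to false.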
Retention Policies and Share Revocation
Given OrgAdmin sets retention: photos=90d, tasks=365d, vendor_shares=30d When artifacts exceed their TTL Then they are hard-deleted, all access returns 410 Gone, and an audit_log action=auto_purge is recorded; audit logs themselves are retained per compliance=7y Given an agent revokes a previously shared vendor link When the vendor attempts access after 60 seconds Then the link is invalid and returns 410; no thumbnails or cached previews are returned Given OrgAdmin shortens a retention policy When applied Then items that now exceed TTL are queued for deletion within 15 minutes
Vendor Invite: Scoped Access and Expiry
Given a Vendor is invited to a single task for a property When the Vendor authenticates Then listing other tasks or properties returns empty, and any request outside the assigned task returns 403 Given a Vendor access token is issued When time passes Then the token expires at the earlier of 14 days or task completion+48h; after expiry, requests return 401 and audit_log action=expired_token Given an agent removes a Vendor from a task When removal is saved Then all active sessions and tokens for that Vendor on that task are invalid within 60 seconds Given a Vendor requests Impact Rank or ROI Gauge beyond the assigned task When the API is called Then the response is 403 with no data leakage

Before/After Proof

Side-by-side comparisons align new snaps with prior angles to confirm that fixes resolved the original objections. The feature auto-updates sentiment and impact scores, generates seller-friendly progress cards, and equips agents with visual evidence to support price-hold or relist decisions.

Requirements

Guided Angle Alignment
"As a listing agent on-site, I want live guidance to match the original photo’s angle so that my after shots are directly comparable and credible proof that a fix addressed the objection."
Description

Provides a live camera “ghost overlay” of the original objection photo to guide agents in matching angle, distance, and framing for accurate before/after comparisons. Uses on-device computer vision to auto-align, de-skew, and crop, displaying a real-time match score and a haptic cue when alignment meets the threshold. Supports iOS/Android, low-light enhancement, and manual nudge controls for edge cases. Captures EXIF, timestamp, and room/objection IDs for context. Processes each alignment update in under 300 ms while sustaining a 30+ FPS camera preview. Stores both original and aligned variants for auditability and downstream analysis within TourEcho’s Before/After Proof pipeline.

Acceptance Criteria
Live Ghost Overlay with Match Score
Given an original objection photo is selected and Guided Angle Alignment is opened on a supported device When the live camera preview starts Then a semi-transparent ghost overlay of the original photo appears within 100 ms and is anchored to the preview And a numeric match score (0–100) is visible and updates at least 10 times per second in response to camera motion And the overlay and score remain synchronized with the preview with no visible tearing or freezes across iPhone 12 (iOS 15+) and Pixel 6 (Android 12+) reference devices
Auto-Alignment Accuracy (De-skew & Crop)
Given a curated test set of 100 scenes with known original photos When the device is aimed within ±20° rotation and ±30% scale of the original framing Then the on-device CV auto-aligns, de-skews, and crops such that the median reprojection error ≤3 px and P95 ≤6 px at 1080p output And residual rotation error ≤2° at P95 And the resulting crop retains ≥95% of the original region-of-interest content without clipping key features And these results are achieved on iPhone 12 (iOS 15+) and Pixel 6 (Android 12+) reference devices
Haptic Cue on Alignment Threshold
Given the match score threshold is set to 85 When the live match score crosses from below to ≥85 Then a single haptic notification is emitted within 100 ms of crossing And no additional haptic fires again until the session ends or the score drops below 85 and recrosses the threshold And haptic feedback intensity conforms to platform guidelines on iPhone 12 (iOS 15+) and Pixel 6 (Android 12+) reference devices
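The "fire once, re-arm only after the score drops below threshold" behavior above is a hysteresis gate. A minimal sketch, assuming a score stream sampled from the live match loop (the class name is hypothetical):

```python
class HapticGate:
    """Emits a single cue when the score crosses up through the threshold;
    re-arms only after the score falls back below it (hysteresis)."""

    def __init__(self, threshold: int = 85):
        self.threshold = threshold
        self.armed = True  # ready to fire on the next upward crossing

    def update(self, score: int) -> bool:
        if score >= self.threshold and self.armed:
            self.armed = False
            return True   # trigger one haptic notification
        if score < self.threshold:
            self.armed = True  # re-arm after dropping below threshold
        return False
```

Feeding the score sequence 80, 86, 90, 84, 87 fires exactly twice (on the two upward crossings), which is the behavior the criterion specifies.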
Performance: 30+ FPS Preview and Alignment Latency
Given Guided Angle Alignment is running with ghost overlay enabled When tested over a continuous 60-second session in typical indoor lighting Then the camera preview sustains ≥30 FPS at P95 and ≥28 FPS at P99 And touch-to-preview visual response latency is ≤100 ms at P95 And each alignment update (transform estimation and overlay update) completes within ≤300 ms at P95 and ≤400 ms at P99 And no UI thread jank frames (>50 ms) exceed 1% of total frames on iPhone 12 (iOS 15+) and Pixel 6 (Android 12+) reference devices
Low-Light Enhancement Stability
Given ambient illuminance between 3–10 lux When Guided Angle Alignment is active with low-light enhancement engaged Then the overlay remains discernible with preview SNR ≥10 dB And the match score variance is within ±5 points during a 2-second steady hold (gyroscope variance below threshold) And the preview maintains ≥24 FPS at P95 And no crashes, watchdog terminations, or thermal warnings occur during a continuous 2-minute session on iPhone 12 (iOS 15+) and Pixel 6 (Android 12+) reference devices
Manual Nudge Controls for Edge Cases
Given auto-alignment does not reach the threshold (score <85) When the user employs manual nudge controls Then the user can adjust translation (X/Y), rotation, and scale of the overlay via drag/pinch/rotate gestures And minimum adjustment granularity is ≤1% of frame dimension for translation/scale and ≤1° for rotation And a Reset action restores auto-alignment state And in an edge-case test set of 30 scenes, manual nudging enables reaching score ≥85 in ≥80% of cases on iPhone 12 (iOS 15+) and Pixel 6 (Android 12+) reference devices
Data Capture, Storage, and Pipeline Availability
Given the agent captures an aligned photo when the match score ≥85 When the capture is saved Then EXIF includes device model, focal length, exposure time, ISO, and orientation And app metadata includes ISO-8601 timestamp, room ID, objection ID, and match score And both the original and the aligned variant are stored with linked IDs for auditability And the pair is available to the Before/After Proof pipeline within 10 seconds of capture and remains retrievable after app restart on iPhone 12 (iOS 15+) and Pixel 6 (Android 12+) reference devices
Before/After Pairing & Timeline
"As a listing agent, I want to attach new photos to the original objection and keep a timeline of fixes so that I can track progress and compare outcomes over time."
Description

Automatically pairs new photos or short clips to the originating objection record (room-level) and maintains a chronological timeline of attempts, notes, and outcomes. Supports multiple “after” iterations per objection, with tags (e.g., paint, staging, lighting) and responsible party. Enables quick browse of before/after pairs, swipe-to-compare, and zoom-ins. Ensures media and metadata are versioned, deduplicated, and linked to showings captured via QR-coded door hangers for traceability across TourEcho.

Acceptance Criteria
Auto-Pair New Media to Objection (Room-Level)
Given a listing with an existing room-level objection and a newly uploaded after photo or short clip (≤10s) tagged with the same room When the media is uploaded via mobile or web Then the system automatically pairs it to the originating objection in that room And logs a timeline entry with timestamp (UTC), uploader, source (mobile/web), and iteration number And the pairing completes within 5 seconds of upload completion And if no matching objection exists, the item is queued for manual pairing and the uploader is notified
Chronological Objection Timeline Logging
Given an objection with multiple events (media uploads, notes, outcomes) created at different times When the timeline is viewed Then events are displayed in strict reverse-chronological order by event timestamp (newest first) And each event shows type, author, timestamp to the second, tags, responsible party, and related iteration (if any) And pagination/lazy-load preserves order across pages And timestamps are stored in UTC and displayed per the viewer’s timezone preference
Multiple After Iterations with Tags and Responsible Party
Given an objection supports multiple after iterations When a new after iteration is added Then the user must select at least one tag from {paint, staging, lighting, cleaning, repairs, other} And the user must assign a responsible party from {seller, agent, contractor, stager, photographer, other} And the iteration can include 1–10 media assets And the system assigns a sequential iteration number starting at 1 and increments by 1 And the iteration summary (tags, responsible party, media count) appears on the timeline and compare view
Before/After Browse, Swipe-to-Compare, and Zoom
Given an objection has a before and at least one after media paired When the compare view is opened Then the system shows side-by-side comparison by default And supports a swipe-to-compare slider on touch and mouse devices And supports zoom up to 4x with synchronized pan between before and after assets And allows switching between after iterations via next/previous controls And the compare view loads in under 2 seconds for images up to 10 MB each on a 20 Mbps connection
Media and Metadata Versioning
Given an objection iteration exists When media or its metadata (tags, responsible party, captions, notes) is edited Then the system creates a new version with an incremented version number And retains previous versions read-only with the ability to restore any prior version And logs a version change entry on the timeline including who, when, and what fields changed
Media Deduplication on Upload
Given a user attempts to upload a media file whose content and dimensions match an existing asset within the same listing When the upload is processed Then the system detects the duplicate via content hashing (and ignores filename differences) And prevents storing a second binary, instead referencing the existing asset in the target iteration And notifies the user that a duplicate was referenced and not re-uploaded And records a deduplication event on the timeline
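The dedup rule above keys on content plus dimensions and ignores filenames. A minimal in-memory sketch of that check, with hypothetical names (`content_key`, `upload`) standing in for the real upload path:

```python
import hashlib
from typing import Dict, Tuple

def content_key(data: bytes, width: int, height: int) -> str:
    """Dedup key: SHA-256 of the bytes plus dimensions; filename is ignored."""
    return f"{hashlib.sha256(data).hexdigest()}:{width}x{height}"

store: Dict[str, int] = {}  # content key -> existing asset ID

def upload(data: bytes, width: int, height: int, next_id: int) -> Tuple[int, bool]:
    """Returns (asset_id, was_duplicate).

    On a duplicate, no second binary is stored; the existing asset ID is
    referenced instead, and the caller would notify the user and log a
    deduplication event on the timeline.
    """
    key = content_key(data, width, height)
    if key in store:
        return store[key], True
    store[key] = next_id
    return next_id, False
```

Re-uploading the same bytes under a different filename yields the original asset ID, while the same bytes at different dimensions count as a new asset.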
Traceability to QR-Coded Showings
Given an objection is linked to a showing captured via QR-coded door hanger When viewing the objection, any after iteration, or the compare view Then the UI displays the originating showing ID, date/time, and actor (buyer or buyer agent) And provides a link to the showing record And from the showing record, the user can navigate to the objection and all its before/after iterations And the showing–objection association is included in data exports and API responses
Objection Resolution Detection & Score Update
"As a broker-owner, I want the system to automatically reassess sentiment and room impact when fixes are verified so that pricing and relist decisions are backed by timely, objective evidence."
Description

Analyzes before/after pairs to detect visual changes related to the original objection (e.g., scuffs removed, clutter reduced, lighting improved) using CV/AI diffing, segmentation, and heuristics. Produces a resolved/unresolved likelihood and confidence, with human override and short rationale text. Automatically recalculates listing sentiment and room-level impact scores, attributing deltas to specific fixes and updating TourEcho’s analytics and agent readouts. Re-scoring completes within 5 seconds of media upload and preserves prior scores for comparison. Flags ambiguous cases for manual review.

Acceptance Criteria
Visual Change Detection for Original Objection
- Given a before and after photo pair is linked to the same listing, room, and objection ID, When the system completes alignment and diff analysis, Then it detects objection-relevant changes and labels fix types (e.g., scuff removal, decluttering, lighting) with a delta score 0–100. - And non-relevant visual changes do not contribute to the resolution likelihood. - And detection regions/masks are stored with the result for audit and troubleshooting.
Resolution Likelihood, Confidence, and Rationale Output
- Given analysis completes, When the API response is generated, Then it includes fields: resolved_likelihood [0.0–1.0], confidence [0.0–1.0], rationale text 20–300 characters referencing the objection and observed visual changes, and detection_metadata_id. - And numeric values are rounded to two decimals in readouts while preserving full precision in the API payload. - And rationale contains no PII and no model/version identifiers.
Auto Re-Scoring and Attribution within 5 Seconds
- Given a new after-photo for a tracked objection is uploaded, When processing starts, Then listing-level sentiment and room-level impact scores are recalculated within 5 seconds p95 from upload timestamp. - And agent readouts and progress cards show updated scores on the next refresh/push event. - And attribution lists each detected fix type and its percent contribution to the delta, summing to 100% ±1%. - And a processing record stores start/end timestamps and total duration.
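The "contributions summing to 100% ±1%" constraint is naturally met by normalizing per-fix deltas. A sketch under the assumption that the model emits a raw delta per detected fix type (the function name is hypothetical):

```python
from typing import Dict

def attribute_contributions(raw_deltas: Dict[str, float]) -> Dict[str, float]:
    """Normalize raw per-fix deltas into percent contributions.

    Rounding to one decimal keeps the sum within the 100% +/-1% tolerance
    stated in the criterion; a zero total attributes nothing.
    """
    total = sum(raw_deltas.values())
    if total == 0:
        return {k: 0.0 for k in raw_deltas}
    return {k: round(100 * v / total, 1) for k, v in raw_deltas.items()}
```

For example, raw deltas of 30 (paint) and 10 (lighting) attribute 75% and 25% of the score change respectively.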
Preserve Prior Scores and Show Comparison
- Given a re-score occurs, When scores are updated, Then previous listing sentiment and room impact scores are preserved with timestamp and version_id. - And agent readouts display before vs. after values with signed delta for the affected room and overall listing. - And the API supports querying the immediately prior snapshot via version_id.
Human Override Adjusts Resolution and Triggers Re-Score
- Given an authorized agent opens the override panel for an objection, When they set resolved/unresolved or input a custom resolved_likelihood (0.0–1.0) with a rationale (minimum 20 characters), Then the system saves the override with user_id, timestamp, and comment, marks it active, and recalculates scores within 5 seconds p95. - And agent readouts reflect the override status and updated scores with an "Overridden" badge. - And a revert-to-auto action restores model output and recalculates scores, clearing the override badge.
Ambiguity Flagging and Manual Review
- Given the model output is ambiguous (confidence < 0.5 or |resolved_likelihood − 0.5| ≤ 0.1), When the result is persisted, Then the objection is flagged "Needs Review" and added to the manual review queue. - And agent readouts display a "Needs Review" badge and link to the review action. - And after manual decision (approve/reject/adjust), the flag clears and scores update accordingly within 5 seconds p95.
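The ambiguity condition above is a two-part predicate and can be stated directly in code. A one-function sketch (the name `needs_review` is an assumption):

```python
def needs_review(resolved_likelihood: float, confidence: float) -> bool:
    """Flag ambiguous model output for the manual review queue.

    Ambiguous means low confidence, or a resolution likelihood too close
    to the 0.5 decision boundary, per the criterion above.
    """
    return confidence < 0.5 or abs(resolved_likelihood - 0.5) <= 0.1
```

A confident 0.9 likelihood passes cleanly, while 0.55 is flagged even at high confidence because it sits within 0.1 of the boundary.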
Consistent Analytics and Readout Updates
- Given re-scoring completes, When analytics and readouts are updated, Then dashboard analytics, progress cards, and API endpoints expose identical sentiment and room impact values within a single refresh cycle. - And events are emitted once per re-score with idempotent ids (no duplicates). - And all updates succeed or none do for a single listing re-score (atomicity).
Seller Progress Cards
"As a listing agent, I want to send clear progress cards to my seller so that they understand improvements and why our price strategy remains justified."
Description

Generates seller-facing progress cards summarizing each fix with side-by-side visuals, plain-language summaries, date/time, responsible party, and measurable score deltas. Includes agent branding, listing details, and a simple “what changed / why it matters” section. Exports as shareable link and printer-friendly PDF, supports localization, accessibility (alt text, readable contrast), and email/SMS distribution via TourEcho. Tracks opens and engagement without exposing buyer identities or raw internal notes.

Acceptance Criteria
Generate Progress Card with Required Fields
Given a listing with at least one recorded fix and associated artifacts When an agent generates a Seller Progress Card Then the card includes, for each fix: side-by-side visuals (if available), a plain-language summary (<= 280 characters, Flesch-Kincaid grade <= 8), fix date/time (ISO 8601), responsible party, measurable sentiment and room-level score deltas, and a "What changed / Why it matters" section And the card includes agent branding (logo, name, contact), listing details (address, MLS ID), and a generated timestamp And fixes are ordered by most recent fix date descending And fields missing data are labeled "Not provided" without breaking layout
Before/After Visual Alignment and Labels
Given a fix has both before and after photos tagged to the same room/angle When the progress card is generated Then the before/after images are auto-aligned to within ±5° rotation and matched aspect ratio And each image is labeled "Before" and "After" with captions from metadata if present And the alignment status is shown as "Aligned" or "Manual review needed" And total image load time per fix is <= 1.0s on a 10 Mbps connection And if alignment fails, images are displayed unaltered with a notice and card generation proceeds
Score Delta Calculation and Sentiment Update
Given baseline sentiment and room objection scores exist for the listing When an after artifact is approved as resolving the objection Then the system recalculates sentiment and impact scores within 60 seconds And the progress card displays deltas with +/− signs, green for improvement and red for regression, meeting contrast ratio >= 4.5:1 And numeric deltas include previous and current values with units where applicable And updates are idempotent; re-generating the same card shows identical values And an audit entry records user, timestamp, and fields changed
Shareable Link and Printer-Friendly PDF
Given an agent requests a shareable link When the link is generated Then a tokenized, non-guessable URL (>= 128-bit entropy) is created with default expiry of 30 days (configurable) And the link renders a read-only web view identical to the in-app card And requesting a printer-friendly PDF produces an A4/Letter-optimized document with 0.5 in margins, embedded fonts, alt text, and file size <= 5 MB for up to 10 fixes And the PDF and web view content are content-identical (checksum on JSON payload) And revoked links return HTTP 410
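The ≥128-bit entropy requirement for the tokenized URL is easy to satisfy with a cryptographic random source. A minimal sketch using Python's standard `secrets` module (the function name is an assumption):

```python
import secrets

def share_token() -> str:
    """Non-guessable share-link token.

    32 random bytes = 256 bits of entropy, comfortably above the 128-bit
    minimum; encoded as unpadded URL-safe base64 (43 characters).
    """
    return secrets.token_urlsafe(32)
```

The resulting token would be embedded in the read-only share URL; revocation is then a server-side lookup that returns HTTP 410 for revoked tokens, per the criterion above.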
Localization and Accessibility Compliance
Given the recipient's locale is set or inferred When the card is viewed Then all UI strings are localized; dates/times are formatted per locale; numbers use locale-specific separators And a fallback to en-US is applied for missing translations with a telemetry warning And all images include alt text; interactive elements are keyboard navigable; focus order is logical And color contrast meets WCAG 2.1 AA (>= 4.5:1); PDFs are tagged with language and reading order And viewport renders correctly on screens from 320 px to 1920 px without horizontal scrolling
Email/SMS Distribution and Privacy-Safe Engagement Tracking
Given an agent sends the card via TourEcho email/SMS to N recipients When messages are dispatched Then 95% of messages are delivered within 2 minutes; failures are retried up to 3 times with exponential backoff And recipients see the agent branding, listing details, and a preview thumbnail with a secure link And engagement tracking records unique opens, total opens, last open timestamp, and link clicks by channel without storing buyer identities or raw internal notes And analytics use tokenized recipient IDs; PII is not exposed in URLs; unsubscribe/opt-out is honored and logged And the agent can view aggregated engagement stats per send without recipient-level identities
Evidence Locker & Audit Trail
"As a broker-owner, I need an auditable, tamper-evident record of before/after proof so that I can defend price holds or relist decisions with confidence."
Description

Secures all before/after assets and metadata in a tamper-evident repository with content hashing, immutable timestamps, and user/action logs. Applies automatic redaction for faces/license plates to meet privacy and MLS guidelines, and watermarks evidence with listing ID and capture time. Supports RBAC for agents, sellers, and brokers; retention policies; and exportable evidence packets for price holds, negotiations, or disputes. Locks records post-approval while preserving a full change history for compliance.

Acceptance Criteria
Content Hashing & Immutable Timestamps
Given a new evidence asset (image or video) is uploaded When the system stores the asset Then it computes a SHA-256 content hash and persists it with the asset's metadata And the asset receives an immutable server-side UTC timestamp (ISO 8601, millisecond precision) And any subsequent modification creates a new version with a new hash and timestamp while preserving the prior version Given a stored asset is retrieved When its bytes are re-hashed server-side Then the computed hash matches the stored hash; otherwise the asset is flagged as corrupted and access is blocked with a 409 response and audit entry Given client device time is incorrect When an upload occurs Then the server-issued timestamp is used and cannot be overridden by the client
Automatic Redaction for Faces & License Plates
Given an uploaded asset contains human faces or vehicle license plates When processing completes Then the system stores the original and generates a redacted derivative And the redacted derivative is served by default to all roles And only users with Unredacted_View permission can request the original; all such access is logged Given automated redaction runs on a validation set When measuring performance Then face/plate detection achieves ≥90% recall and ≥95% precision, and redaction obscures ≥95% of detected regions (IoU coverage) with Gaussian blur σ ≥ 8 at 1080p equivalent Given an asset is marked Redaction Review Required due to processing failure When a user attempts export Then the asset is excluded from export until redaction succeeds, and the attempt is logged
Watermarking with Listing ID and Capture Time
Given any evidence asset is displayed in-app or exported When it is rendered Then a visible watermark overlays the Listing ID and capture timestamp (UTC) at the bottom-right with 35–50% opacity And watermark text matches the asset metadata exactly And all exports include watermarked assets by default Given an authorized user requests the original asset bytes When access is granted Then the stored original is returned without a burned-in watermark; only derived/previews/exports are watermarked Given a watermarked asset is included in an export When the export is validated against the manifest Then any watermark removal or cropping is detectable via hash mismatch
Role-Based Access Control (RBAC) Enforcement
Given a Seller user accesses a listing's evidence When requesting assets Then only redacted derivatives are viewable/downloadable; unredacted access, approvals, retention changes, and audit exports return 403 and are logged Given an Agent assigned to the listing accesses evidence When performing actions Then they can upload assets, view redacted, request unredacted if policy allows, request exports, and submit for approval; retention configuration changes return 403 and are logged Given a Broker for the office accesses evidence When performing actions Then they can view unredacted, approve locks, configure retention, manage role assignments, and export evidence Given any authorization decision is evaluated When a permission is denied Then the API returns 403 within <200ms at p95 and writes an audit log entry with user, action, resource, and timestamp
Retention Policies and Legal Hold
Given a Broker sets a retention policy for a listing (e.g., 24 months after close or 90 days after withdrawal) When the policy is saved Then the policy is recorded in metadata and visible via API/UI Given a listing reaches the retention end date with no legal hold When the purge window elapses (≤24 hours) Then all assets, derivatives, manifests, and search indexes are irreversibly deleted, and a purge event is added to the audit log Given a legal hold is applied to a listing When the retention end date is reached Then no deletion occurs until the hold is removed; hold add/remove actions are logged Given a deletion job runs When it completes Then a retention report is generated with counts of deleted items and any exceptions, available to Broker via download
Exportable Evidence Packet for Negotiations/Disputes
Given a permitted user requests an export for a listing When the export is generated Then the system produces a ZIP containing: (a) redacted, watermarked assets grouped by before/after sets; (b) manifest.json with filename, SHA-256 hash, capture timestamp, actor, and version ID; (c) audit_log.csv with user/action/timestamp; (d) readme.txt with verification instructions And manifest hashes match file bytes exactly And the export link is a signed URL expiring within 7 days and can be revoked by a Broker Given a permitted user explicitly requests inclusion of unredacted assets When policy allows Then an Unredacted/ subfolder is included; the request and download are logged with user and purpose note
Post-Approval Lock and Full Change History
Given an Agent submits a set of assets for approval and a Broker approves When approval is recorded Then the approved versions are locked read-only, and any subsequent edits create new versions without altering the approved ones And a prominent Locked status is returned by the API/UI for the approved versions Given a user attempts to modify a locked version When the API receives the request Then the API returns 423 Locked and writes an audit entry with user, action, resource, and timestamp Given the audit trail is queried for a listing When results are returned Then each entry includes immutable timestamp, user ID, role, action, resource ID, version IDs (before/after), and request origin (IP/client) And audit entries are append-only and verifiable against a periodic hash chain checkpoint
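The "append-only and verifiable against a periodic hash chain checkpoint" property can be sketched with a simple hash chain: each record hashes its own payload together with the previous record's hash, so editing any earlier entry invalidates everything after it. All names here are illustrative, not a real TourEcho API.

```python
import hashlib
import json
from typing import Dict, List

GENESIS = "0" * 64  # sentinel "previous hash" for the first record

def append_entry(chain: List[Dict], entry: Dict) -> Dict:
    """Append an audit record whose hash covers the payload and prior hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(entry, sort_keys=True)
    record = {
        "entry": entry,
        "prev": prev,
        "hash": hashlib.sha256((prev + payload).encode()).hexdigest(),
    }
    chain.append(record)
    return record

def verify(chain: List[Dict]) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    prev = GENESIS
    for rec in chain:
        payload = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain: List[Dict] = []
append_entry(chain, {"action": "approve_lock", "actor": "broker1"})
append_entry(chain, {"action": "export", "actor": "agent2"})
```

A periodic checkpoint would simply persist the latest `hash` value out-of-band; verifying the chain up to that checkpoint proves no entry was altered or deleted.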
Approvals & Sharing Workflow
"As a listing agent, I want an approval workflow with my seller and broker so that we can align on next steps using the visual proof and documented outcomes."
Description

Provides a lightweight workflow for requesting feedback and approvals from sellers and brokers on each before/after pair or grouped fixes. Includes comment threads, @mentions, due dates, and one-tap Approve/Request Changes actions. Offers permissioned sharing to external stakeholders (e.g., staging vendor) without exposing buyer data. Integrates with TourEcho notifications and activity feeds, and records decisions in the audit trail. Supports exporting approved items into MLS docs or internal price-justification packets.

Acceptance Criteria
Approval Request and One-Tap Actions on Before/After Pair
Given an agent is viewing a saved before/after pair with complete title and photos And at least one seller or broker is linked to the listing When the agent clicks "Request Approval", selects recipients, and sets an optional due date Then the system creates an approval request with status Pending, records requester, recipients, due date, and a content snapshot hash And recipients see a mobile-friendly approval page with one-tap Approve and Request Changes actions When a recipient taps Approve Then the item status changes to Approved, the approver identity and UTC timestamp are recorded, and the content snapshot hash is locked And subsequent duplicate approvals are idempotent and do not create duplicate records When a recipient taps Request Changes without entering a comment Then the action is blocked and a comment is required When Request Changes is submitted with a comment Then the item status changes to Changes Requested, the due date persists unless updated, and the comment is pinned at the top of the thread And if two conflicting actions are submitted within 2 seconds, only the first write succeeds and the second receives a non-destructive Already updated message
Group Approval for Multiple Fixes
Given an agent multi-selects 2–50 before/after items on the listing When the agent initiates Request Group Approval and sets a group due date Then a single approval thread is created linking all items, with per-item statuses remaining independent And recipients can Approve All or open individual items to Approve or Request Changes When Approve All is confirmed Then all items that are eligible (complete media/metadata and no open change requests) are marked Approved and each approval is recorded individually And any ineligible items are listed with reasons and excluded without blocking eligible approvals And bulk status updates complete within 5 seconds for up to 50 items
Comment Threads with @Mentions and Due Dates
Given an approval thread exists for an item or group When a participant posts a comment containing @name or @email Then the system resolves mentions to existing users or invites guests, converts mentions to chips, and adds them as followers And mentioned users receive notifications according to their preferences When the requester or a listing admin sets or edits a due date Then the new due date is saved, a system comment noting the change is added, and previous due dates are retained in history And non-owners attempting to change the due date receive a permission error and no change is persisted
Permissioned External Sharing Without Buyer Data Exposure
Given an agent clicks Share Externally on an approval thread When recipient email(s) are entered, permission is set to Comment-only, and link expiry is set to 7 days Then a magic-link invite is sent and a restricted external view is created And the external view shows only: before/after photos, fix titles, original objection text, resolution notes, current status, approver identity (role + first initial), and due date And the external view omits: buyer names, buyer agent info, showing times, QR scan logs, private agent notes, and raw sentiment scores And access is revoked after expiry or manual revocation, returning a 403 page for the link And external users cannot access items beyond the shared scope or view internal user profiles
Notifications and Activity Feed Integration
Given an approval request is created, a comment is posted, a mention occurs, a decision is made, or a due date is changed When any such event occurs Then an in-app notification is generated within 5 seconds and an activity feed entry is created with a deep link to the item or thread And an email (or push) notification is sent unless the recipient has opted out of that channel And notifications for the same event type are deduplicated within a 10-minute window to prevent spam And notification preference settings (channel and frequency) are respected for each user
Audit Trail and Immutable Decision Log
Given any action occurs in the approvals workflow (request created, decision taken, comment posted, mention added, due date changed, share created/revoked, export generated) Then an audit record is appended capturing: actor id, actor role, action type, target id(s), previous state, new state, UTC timestamp, IP, user agent, and a record checksum And audit records are append-only and not editable or deletable by end users And the audit log is filterable by listing, item, actor, action type, and date range, and can be exported as CSV And content snapshot hashes are stored for Approved states to ensure evidence integrity
Export Approved Items to MLS Docs and Price-Justification Packet
Given at least one item is in Approved state When the agent selects items and chooses Export > MLS or Export > Price Packet Then a PDF is generated containing per item: before photo, after photo, capture dates, original objection text, resolution notes, approval status, approver name and role, approval UTC timestamp, and listing address And the export excludes all buyer-identifying data and showing details And the PDF filename follows the pattern: {ListingAddress}_{ExportType}_{YYYY-MM-DD}.pdf And the export completes within 10 seconds for up to 25 items, is stored with a shareable link, and an audit entry is recorded for the export
Offline Capture & Deferred Sync
"As a field agent with spotty connectivity, I want to capture proof offline and have it sync later so that I never miss documenting a completed fix."
Description

Enables agents to capture guided before/after shots with alignment overlay while offline, caching media and metadata securely on-device. Provides clear sync state, conflict resolution (e.g., duplicate uploads), background upload with retries, and bandwidth-aware compression. Ensures encryption at rest, battery-efficient operation, and graceful handling of partial uploads. Automatically links synced media to the correct objection record and triggers scoring and progress card generation once online.

Acceptance Criteria
Offline guided before/after capture with alignment overlay
Given the device has no internet connectivity And the agent selects an existing objection record to capture updates for When the agent initiates camera capture with alignment overlay enabled Then the last synced reference image is shown as an alignment overlay And the agent can capture at least one “After” photo aligned to the reference And the capture is saved locally with metadata (objectionId, captureType, timestamp, device orientation, and location when permitted) And an offline badge is shown on the item in the queue And the agent can review, retake, or delete the local capture before sync
Encrypted at-rest media and metadata on device
Given the app has cached media and metadata locally When a tester inspects device storage outside the app Then no media is present in shared galleries or external storage And files reside only in app-private storage encrypted with the OS keystore And metadata persisted locally is encrypted at rest When the user logs out Then all cached media and metadata are securely purged within 60 seconds And relaunching the app shows an empty offline queue When the app is uninstalled Then no media or metadata remains on the device
Deferred sync with clear statuses and background retries
Given there are offline-captured items pending upload And connectivity becomes available When background sync starts Then each item displays a status: Pending → Syncing (with progress %) → Synced And a global sync indicator shows remaining items and aggregate progress When connectivity drops or battery is below 20% Then syncing pauses with status Paused and a reason (No Connection or Low Battery) When errors occur (e.g., 4xx/5xx) Then the item status becomes Failed with an actionable message and a Retry control And automatic retries use exponential backoff up to 5 attempts per item And sync resumes automatically when constraints are cleared And the app continues syncing while in background subject to OS limits
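The retry policy above (exponential backoff, capped at 5 attempts per item) can be made concrete as a delay schedule. A sketch with an assumed 2-second base delay, since the criterion does not specify one:

```python
from typing import List

def backoff_schedule(base_seconds: float = 2.0, max_attempts: int = 5) -> List[float]:
    """Delay before each retry attempt, doubling each time.

    With base_seconds=2 this yields 2, 4, 8, 16, 32 seconds; after the
    fifth failure the item is marked Failed with a manual Retry control.
    """
    return [base_seconds * (2 ** i) for i in range(max_attempts)]
```

In practice each scheduled retry would also be gated on the pause conditions above (connectivity restored, battery above 20%), and a manual Retry would reset this schedule from the start.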
Duplicate upload detection and resolution
Given an item being uploaded matches an existing server asset by content hash or capture fingerprint (objectionId + captureType + timestamp window) When the server signals a potential duplicate Then the agent is prompted with Replace, Keep Both, or Skip options And a preview is shown to aid decision When Replace is chosen Then the old asset is archived and the new asset assumes its links When Keep Both is chosen Then both assets are retained with distinct IDs and chronological ordering When Skip is chosen Then no new asset is created and the local item is marked Synced (Skipped) And in all cases the operation is idempotent and does not create duplicate objection records or double-count analytics
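The duplicate check described above (content hash or capture fingerprint of objectionId + captureType + timestamp window) can be sketched as follows. This is an illustrative Python sketch; the field names, the SHA-256 choice, and the 30-second window are assumptions, not TourEcho's actual implementation.

```python
import hashlib
from datetime import datetime, timedelta

DUP_WINDOW = timedelta(seconds=30)  # assumed fingerprint timestamp window

def content_hash(data: bytes) -> str:
    """Stable content hash for exact-duplicate detection."""
    return hashlib.sha256(data).hexdigest()

def is_potential_duplicate(new, existing_assets):
    """new/existing: dicts with 'hash', 'objection_id', 'capture_type', 'ts'.

    Flags a duplicate when either the content hash matches an existing
    asset, or the capture fingerprint collides within the time window.
    """
    for asset in existing_assets:
        if asset["hash"] == new["hash"]:
            return True  # exact content match
        same_fingerprint = (
            asset["objection_id"] == new["objection_id"]
            and asset["capture_type"] == new["capture_type"]
            and abs(asset["ts"] - new["ts"]) <= DUP_WINDOW
        )
        if same_fingerprint:
            return True
    return False
```

A positive result would surface the Replace / Keep Both / Skip prompt; a negative result lets the upload proceed normally.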
Adaptive compression based on network and user settings
Given the setting “Upload originals on Wi‑Fi only” is enabled When on cellular Then items remain Pending and show a note “Waiting for Wi‑Fi” Given compression is required for upload When on cellular Then images are compressed to ≤ 1.5 MB with max dimension 1920px and JPEG quality ≥ 80% And on Wi‑Fi images are compressed to ≤ 5 MB with max dimension 3840px and JPEG quality ≥ 85% And EXIF fields needed for alignment (orientation, focal length) are preserved And a per-item estimated data size is shown before upload And re-compression occurs at most once per item
Graceful handling of partial uploads and resume
Given uploads use chunked transfer with integrity checks When connectivity is interrupted mid-upload Then the next attempt resumes from the last acknowledged chunk without re-sending prior chunks And the server validates final object integrity via checksum And exactly one asset ID is created per item upon completion When the app is force-closed or the device reboots during upload Then on relaunch the item returns to Syncing and resumes within 10 seconds When max retry attempts are exhausted Then the item is marked Failed and provides a Retry control that resets the backoff
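The resume-from-last-acknowledged-chunk behavior above can be sketched as a small Python example. The chunk size, the `send` callback, and the checksum scheme are assumptions for illustration; a real client would track server acknowledgments over the media API.

```python
import hashlib

CHUNK_SIZE = 5 * 1024 * 1024  # assumed 5 MB chunks

def chunk_file(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split a file's bytes into fixed-size chunks (last one may be short)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def resume_upload(chunks, acked: int, send):
    """Send only chunks after the last acknowledged index.

    `acked` is the count of chunks the server has confirmed; prior chunks
    are never re-sent. Returns the new acknowledged count.
    """
    for idx in range(acked, len(chunks)):
        send(idx, chunks[idx])   # network call; raises on failure
        acked = idx + 1          # advance only after acknowledgment
    return acked

def object_checksum(chunks) -> str:
    """Whole-object checksum for final server-side integrity validation."""
    h = hashlib.sha256()
    for c in chunks:
        h.update(c)
    return h.hexdigest()
```

On reconnect, the client calls `resume_upload` with the persisted `acked` count, so an interruption after N chunks costs nothing already transferred.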
Auto-linking to objection records and post-sync automations
Given an offline-captured item has an objectionId and captureType (Before/After) When the item reaches Synced state Then it is attached to the correct objection record and visible in the side-by-side view And alignment is preserved between Before and After images And AI scoring and sentiment updates are triggered automatically And a progress card is generated within 2 minutes of sync completion And the agent receives an in-app notification when the analysis is done When the same media is re-sent due to retry Then scoring is not duplicated and the operation is idempotent When the scoring service is temporarily unavailable Then the item shows Analysis Pending and the system retries for up to 24 hours with backoff

Offline Capture

Capture photos and emoji ratings without reception—perfect for basements or crowded open houses. Data is time-stamped, securely cached on device, and syncs once online, ensuring no feedback is lost and agents keep momentum even in dead zones.

Requirements

Offline Data Cache & Sync Queue
"As a showing visitor, I want my feedback to be safely saved on my device when there’s no signal so that nothing is lost and it can send automatically later."
Description

Implement a robust, transactional on‑device cache that stores feedback events (emoji ratings, notes, photos, timestamps, listing_id/visit_id/room_id, device_id) while offline and enqueues them for delivery. The queue must preserve capture order, support atomic writes, handle partial failures, and mark items with client‑generated UUIDs for idempotent server upserts. Include schema versioning to match TourEcho’s existing feedback and media API models, persistent retry state, and storage usage tracking. The cache must survive app restarts, prevent duplicate submissions, and expose a lightweight service interface to the capture UI and sync engine.
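The enqueue contract described above, with a client-generated UUID for idempotent upserts and FIFO order preservation, can be sketched as a minimal in-memory model. Field names follow the description; the storage engine (a production cache would use a transactional store such as SQLite) and the acknowledgment API are assumptions.

```python
import uuid
from datetime import datetime, timezone

SCHEMA_VERSION = 1  # assumed current schema version

class FeedbackQueue:
    def __init__(self):
        self._items = []  # ordered queue; tail = newest

    def enqueue_feedback(self, listing_id, visit_id, room_id, device_id,
                         emoji_rating=None, note_text=None, photo_local_uri=None):
        """Validate, then append a single record atomically; returns
        (client_uuid, queue_position)."""
        if emoji_rating is None and note_text is None and photo_local_uri is None:
            raise ValueError("at least one of rating, note, or photo is required")
        record = {
            "client_uuid": str(uuid.uuid4()),
            "created_at": datetime.now(timezone.utc).isoformat(),
            "listing_id": listing_id, "visit_id": visit_id,
            "room_id": room_id, "device_id": device_id,
            "emoji_rating": emoji_rating, "note_text": note_text,
            "photo_local_uri": photo_local_uri,
            "schema_version": SCHEMA_VERSION,
            "status": "queued", "attempt_count": 0,
        }
        self._items.append(record)  # single append preserves capture order
        return record["client_uuid"], len(self._items) - 1

    def acknowledge(self, client_uuid):
        """Idempotent: a second ack for the same UUID is a no-op."""
        self._items = [r for r in self._items if r["client_uuid"] != client_uuid]
```

Because the server upserts by `client_uuid`, retrying a record after a timeout cannot create a second server-side object, and re-acknowledging locally is harmless.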

Acceptance Criteria
Offline Capture: Atomic Enqueue and Order Preservation
- Given the device is offline and the capture UI submits an emoji rating, note, and optional photo for a specific listing_id, visit_id, room_id, and current device_id, When the UI calls enqueueFeedback, Then the cache writes a single record atomically containing client_uuid, created_at (client timestamp), listing_id, visit_id, room_id, device_id, emoji_rating, note_text, photo_local_uri (if any), and schema_version.
- And the record is appended to the queue tail preserving capture order by created_at and enqueue sequence.
- And if the app crashes or power is lost immediately after the write, Then on next launch the record is either fully present once or absent (no partial or duplicate record).
- And photo_local_uri (if present) resolves to a readable file in app-private storage while offline.
App Restart: Persisted Cache and Resume Sync
- Given unsent queued records exist, When the app is force-closed and relaunched, Then the cache still contains the same number of records with the same client_uuid values and the same order.
- And per-record retry metadata (attempt_count, next_retry_at, last_error_code) persists across restarts.
- When connectivity becomes available, Then sync resumes automatically from the queue head and continues until all items are acknowledged or waiting on backoff.
- And records acknowledged by the server prior to the crash/restart are not re-sent after restart.
Idempotent Upserts: Duplicate Submission Prevention
- Given each queued record has a client-generated UUID, When the sync engine sends the payload to the server and receives a success (200/201) or idempotent-duplicate response (e.g., 200/409), Then the record is marked complete, removed from the queue, and will not be sent again.
- And if the client retries the same record due to timeout or network error, Then at most one server-side feedback/media object exists for that client_uuid.
- And all outbound requests include client_uuid so that repeated deliveries are idempotent on the server.
Partial Failure Handling: Retry with Backoff and Progress
- Given a sync attempt where one record returns a transient error (HTTP 429 or 5xx), When handling the error, Then the record's attempt_count increments, next_retry_at is scheduled using exponential backoff and persisted, and the record remains at its queue position.
- And until next_retry_at, the engine does not attempt that record again and does not send subsequent records ahead of it (order preserved).
- Given a non-retryable error (HTTP 4xx excluding 409), When received, Then the record is marked failed with last_error_code and moved to a quarantined state that no longer blocks subsequent records; no further attempts are made unless manually retried via the service interface.
Storage Usage Tracking: Thresholds and Safeguards
- Given the cache contains N records and M photo files, When getStorageUsage() is called, Then it returns used_bytes, quota_bytes (if known), and item_counts, where used_bytes reflects the sum of stored metadata and media sizes within ±5%.
- When a new photo of size S is enqueued, Then used_bytes increases by approximately S and is observable via getStorageUsage() without restarting the app.
- Given the device reports insufficient disk space for a new photo, When enqueueFeedback is called with a photo, Then the call fails with an INSUFFICIENT_STORAGE error and no partial record or file is written.
Schema Versioning and Migration Across App Updates
- Given the app is updated from schema_version v1 to v2, When it starts with existing cached data, Then a one-time migration runs and all records and media references remain valid with no data loss or duplication.
- And for each queued record, the outbound payload maps exactly to the current feedback/media API models (field names and types), including client_uuid, timestamps, IDs, and media attachment references.
- And if an API contract/version incompatibility is detected, Then syncing is halted, affected records are not mutated, and an error is surfaced via the service interface.
Service Interface: UI and Sync Engine Contract
- Given the capture UI calls enqueueFeedback(eventPayload), When provided with valid fields (listing_id, visit_id, room_id, device_id, and at least one of emoji_rating, note_text, photo), Then the service returns success with the assigned client_uuid and queue_position; on an invalid payload it returns validation errors without side effects.
- When getQueueState() is called, Then it returns head/tail positions, counts of queued/syncing/failed items, and the ordered list of client_uuid values with per-item status.
- When startSync() is called while online, Then syncing begins immediately; when pauseSync() is called, Then no new network requests are issued but the queue remains intact.
- When onSyncStatusChanged(callback) is registered, Then the callback fires on item_enqueued, item_acknowledged, item_failed, backoff_scheduled, and storage_usage_updated events.
Offline QR Decode & Listing Association
"As a buyer at the door, I want to scan the hanger’s QR and start the feedback flow even without reception so that I can record impressions immediately."
Description

Embed an in‑app QR scanner that works without network, decodes TourEcho QR payloads locally, and immediately initializes a feedback session bound to listing_id/visit_id and room map. If the QR lacks full metadata, create a placeholder session with the decoded identifiers and defer metadata hydration until online. Support cold‑start performance, camera permission prompts, duplicate‑scan protection, and deep‑linking from OS scan intents into the offline capture flow.
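A local decode-and-validate step like the one described above could look like the following sketch. The JSON wire format, field names, and truncated-SHA-256 checksum are hypothetical illustrations; TourEcho's actual QR payload format is not specified here.

```python
import json
import hashlib

def parse_tourecho_qr(raw: str):
    """Return a session dict for a valid payload, or None to reject.

    Runs entirely on-device: no network calls. A missing room_map yields
    a placeholder session with pending metadata, per the requirement.
    """
    try:
        payload = json.loads(raw)
    except (ValueError, TypeError):
        return None  # not our format -> "Unrecognized QR"
    if not isinstance(payload, dict):
        return None
    if "listing_id" not in payload or "visit_id" not in payload:
        return None
    # Verify an optional checksum offline before creating a session
    # (assumed scheme: first 8 hex chars of SHA-256 over "listing:visit").
    if "checksum" in payload:
        body = f'{payload["listing_id"]}:{payload["visit_id"]}'
        if hashlib.sha256(body.encode()).hexdigest()[:8] != payload["checksum"]:
            return None
    return {
        "listing_id": payload["listing_id"],
        "visit_id": payload["visit_id"],
        "room_map": payload.get("room_map"),           # None => placeholder
        "pending_metadata": "room_map" not in payload,
    }
```

A `None` return maps to the non-blocking "Unrecognized QR" path; a session with `pending_metadata` set defers room-map hydration until connectivity returns.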

Acceptance Criteria
Scan Full-Payload QR Offline
Given the device has no internet connectivity And the in-app QR scanner is open When a TourEcho QR containing listing_id, visit_id, and room_map is scanned Then the payload is decoded entirely on-device without issuing any network requests And a feedback session is created and bound to the decoded listing_id, visit_id, and room_map And the session becomes usable within 1.0 seconds from successful decode on reference devices (p90) And an Offline indicator is visible within the session And a local ISO 8601 timestamp is recorded for session initialization
Scan Partial-Payload QR Offline (Placeholder Session)
Given the device is offline and the in-app QR scanner is open When a TourEcho QR with listing_id and visit_id but missing room_map is scanned Then a placeholder feedback session is created bound to listing_id and visit_id And missing fields are marked as Pending metadata And the user can immediately capture photos and emoji ratings without blocking prompts And upon first detection of internet connectivity, the app requests missing metadata and hydrates the session within 5 seconds (p90) And if hydration succeeds, the room_map appears without altering any user-captured inputs And if hydration fails or identifiers are not found, the session remains usable and a non-blocking error notice is shown, with retry intervals no more frequent than every 30 seconds
Duplicate-Scan Protection and Session Resume
Given an active feedback session exists for a specific listing_id and visit_id created within the last 15 minutes on the device When the same TourEcho QR is scanned again via the in-app scanner or a deep link Then no new session is created And the existing session is resumed and surfaced to the user within 500 ms And an idempotency key composed of listing_id + visit_id prevents duplicate local records And a one-time toast “Resumed existing session” is displayed
Camera Permission Handling
Given camera permission status is Undetermined When the user taps Scan QR Then the OS permission prompt is presented exactly once And if the user grants permission, the scanner viewport becomes active within 800 ms (p90) And if the user denies permission, the app displays inline guidance with a one-tap Open Settings action; no crash or dead end occurs And if permission is later granted from Settings, returning to the app activates the scanner without requiring an app restart
Cold-Start Scanner Performance
Given the app is not running (cold start) When the user triggers Scan QR via app icon shortcut, in-app button, or deep link Then the camera preview is visible within 1200 ms (p90) and 1800 ms (p95) on reference devices And the first successful decode occurs within 500 ms (p90) from first preview frame for a standard TourEcho QR at 15–30 cm distance under 200–500 lux lighting
Deep-Link From OS Camera Scan (Offline)
Given the user scans a TourEcho QR using the device OS camera or a third-party scanner that invokes the app with the raw QR payload And the device has no internet connectivity When the app is opened via the deep link Then the payload is parsed locally and the app navigates directly into the offline capture flow And if an active session for the same listing_id and visit_id exists, it is resumed; otherwise a new session is created And no network requests are attempted until connectivity is available
QR Payload Validation & Safety
Given any QR code is presented to the in-app scanner or received via deep link When the payload does not match the TourEcho QR format or expected signature Then the app rejects it, does not create a session, and shows a non-blocking “Unrecognized QR” message And when the payload is valid, checksums or signatures (if present) are verified offline prior to session creation And no PII beyond listing_id, visit_id, and room_map is persisted locally from the QR payload
Offline Ratings & Notes Capture
"As a visitor, I want to quickly tap emoji ratings and jot notes offline so that I can capture accurate room‑level feedback on the spot."
Description

Provide a low‑latency offline UI to capture room‑level emoji ratings and short notes. Validate required fields locally, allow edits and deletions until an item is synced, and timestamp each interaction. Persist draft state per room, support accessibility and fast tap interactions in crowded open houses, and serialize data into the cache in the same structure expected by TourEcho’s summarization pipeline.
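The local validation and serialization rules the acceptance criteria specify (at least one of rating or note, a 280-character note cap, `localId`/`showingId`/`roomId` fields) can be sketched as follows. The emoji set and exact schema are assumptions; the real pipeline schema would govern production serialization.

```python
import json
import uuid
from datetime import datetime, timezone

ALLOWED_RATINGS = {"very_satisfied", "satisfied", "neutral",
                   "dissatisfied", "very_dissatisfied"}  # assumed rating enum
MAX_NOTE_LEN = 280

def validate_draft(showing_id, room_id, rating=None, note=None):
    """Client-side validation; returns a list of error strings (empty = valid)."""
    errors = []
    if not showing_id or not room_id:
        errors.append("showingId and roomId are required")
    if rating is None and not note:
        errors.append("provide at least one of rating or note")
    if rating is not None and rating not in ALLOWED_RATINGS:
        errors.append("rating must be one of the allowed set")
    if note and len(note) > MAX_NOTE_LEN:
        errors.append(f"note exceeds {MAX_NOTE_LEN} characters")
    return errors

def serialize_draft(showing_id, room_id, rating=None, note=None, device_id="dev-1"):
    """Serialize into the cache structure; createdAt == updatedAt on first save."""
    now = datetime.now(timezone.utc).isoformat()
    return json.dumps({
        "localId": str(uuid.uuid4()), "showingId": showing_id, "roomId": room_id,
        "rating": rating, "note": note, "createdAt": now, "updatedAt": now,
        "status": "draft", "deviceId": device_id,
    })
```

All checks run client-side with no network calls, so save acknowledgment stays within the latency budgets even in airplane mode.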

Acceptance Criteria
Airplane Mode: Low‑Latency Offline Room Rating & Note Capture
Given the device has no network connectivity (airplane mode) When the user selects a room and taps an emoji rating Then the UI shows the selected state and haptic feedback within 100ms at P90 and 150ms at P95 on target devices, no network request is attempted, and a draft feedback record is written to the offline cache Given the device has no network connectivity When the user enters a short note and taps Save Then the note is stored with the draft feedback record offline, the acknowledgment renders within 150ms at P95, and no network request is attempted
Local Validation: Required Fields and Note Length
Given a room context is selected When the user attempts to save with neither rating nor note provided Then the Save action is blocked locally, an inline error is shown, and the error is announced via accessibility services Given a room context is selected When the user provides a note longer than 280 characters and taps Save Then the Save action is blocked locally and a remaining-characters indicator guides input until the note length is 280 characters or fewer Given a room context is selected When the user provides a rating without a note Then the draft saves offline without error Given a room context is selected When the user provides a note (≤ 280 chars) without a rating Then the draft saves offline without error Then every saved draft includes non-null showingId, roomId, and localId (UUIDv4), and at least one of {rating, note}; rating, if present, is one of the allowed emoji set; all validation occurs client-side without network calls
Draft Persistence Per Room and App Relaunch
Given an unsynced draft exists for Room A When the user navigates to Room B and returns to Room A Then the draft for Room A is restored exactly as last edited (rating selection and note text) Given an unsynced draft exists When the app is force-closed and relaunched within the same showing Then all unsynced drafts reappear per room without data loss Given drafts exist for multiple rooms When the device remains offline for at least 24 hours Then drafts remain available in the cache and are not purged until synced or explicitly deleted by the user
Editable Until Sync; Lock After Sync
Given a draft is unsynced When the user edits the rating or note and taps Save Then the changes overwrite the local draft, and updatedAt is refreshed Given a draft is unsynced When the user deletes it Then the record is removed from the cache and no longer appears in the room Given a draft has synchronized successfully When the user attempts to edit or delete it Then editing and deletion controls are disabled or an inline message indicates the item is locked because it has been synced Given a draft is queued for sync but not yet confirmed When the user edits it Then the outgoing queue updates to include only the latest version and only that version is sent to the server
Interaction Timestamping
Given a new draft is saved Then createdAt is stored as an ISO-8601 UTC timestamp and updatedAt equals createdAt Given a draft is edited Then updatedAt updates to the current timestamp and is greater than or equal to createdAt Given a rating is tapped Then the tap event time is recorded and associated with the draft entry When the draft syncs Then serverReceivedAt is added to the record metadata while preserving original createdAt and updatedAt
Accessibility and Fast Tap Targets
Given the rating UI is displayed Then each emoji button has an accessible name describing the sentiment (e.g., Very satisfied, Satisfied, Neutral, Dissatisfied, Very dissatisfied), a minimum 44x44 pt touch target, and 4.5:1 contrast for any essential icon or label Given a validation error occurs Then the error is announced via VoiceOver/TalkBack and is programmatically associated with the offending field Given the user navigates the rating controls with a screen reader Then focus order follows a logical sequence and activating via accessibility action applies the rating with acknowledgment within 150ms at P95
Serialization, Secure Cache, and Auto‑Sync on Reconnection
Given a draft is saved offline Then the serialized JSON for the record validates against the TourEcho summarization pipeline schema with zero validation errors and includes fields: localId (UUIDv4), showingId, roomId, rating (enum if present), note (≤ 280 chars if present), createdAt, updatedAt, status (draft|queued|synced), and deviceId Given the draft is stored on device Then its contents are encrypted at rest using the platform’s secure storage and are not readable as plaintext via a file inspection test Given network connectivity is restored When auto-sync triggers Then unsynced items are sent in ascending createdAt order within 5 seconds at P95, using localId for idempotency, and the server receives no duplicate records under intermittent connectivity Given the server acknowledges an item with HTTP 200 Then the client marks the item as synced, locks it from further edits, removes it from the outgoing queue, and on retryable errors the client retries with exponential backoff up to a configurable cap without user intervention
Offline Photo Capture & Compression
"As a visitor, I want to take photos for each room without signal so that my visual notes are saved and uploaded when I’m back online."
Description

Enable camera capture and gallery selection offline with local, secure storage of originals and generation of compressed upload renditions and thumbnails. Handle EXIF orientation, target size/quality profiles per network type, and low‑storage conditions with user prompts and graceful degradation. Associate photos to rooms and sessions, allow delete/replace before sync, and enqueue media for chunked upload aligned with TourEcho’s media API.
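Per-network rendition selection can be sketched like this. The pixel and quality numbers follow the profiles quoted in the acceptance criteria (Wi-Fi 2048px/q85, Cellular 1280px/q75); the function names and structure are illustrative only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RenditionProfile:
    longest_edge_px: int
    jpeg_quality: int

# Profiles as specified in the acceptance criteria.
PROFILES = {
    "wifi": RenditionProfile(longest_edge_px=2048, jpeg_quality=85),
    "cellular": RenditionProfile(longest_edge_px=1280, jpeg_quality=75),
}

def pick_profile(network_type: str) -> RenditionProfile:
    """Select the upload rendition profile for the current network type."""
    return PROFILES[network_type]

def scaled_dimensions(width: int, height: int, profile: RenditionProfile):
    """Downscale so the longest edge fits the profile; never upscale."""
    longest = max(width, height)
    if longest <= profile.longest_edge_px:
        return width, height
    ratio = profile.longest_edge_px / longest
    return round(width * ratio), round(height * ratio)
```

If the network switches mid-item, the item in flight keeps the profile it started with; only subsequent items call `pick_profile` again, matching the queue behavior described in the criteria.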

Acceptance Criteria
Offline Camera Capture Without Connectivity
Given the device has no internet connectivity When the user opens a room in an active session and taps Capture Then the camera opens and a photo can be captured successfully without any network calls And the captured photo is saved as an original file in app-private storage And a client-side timestamp is recorded in ISO 8601 with timezone offset And the photo appears in the room gallery with an Offline badge and status Pending Upload And no sync attempts or error toasts are triggered while the device remains offline
Offline Gallery Selection and Import
Given the device has no internet connectivity And the user is in a room within an active session When the user selects a photo via the OS gallery picker Then a copy of the selected photo is imported into app-private storage as the original And the photo is associated to the current room and session And the item appears in the room gallery with an Offline badge and status Pending Upload And the app does not alter the source image in the device gallery
Secure Local Storage of Originals and Derivatives
Given a photo has been captured or imported while offline When the app writes the original and any derivatives to disk Then the files are stored under app-private storage paths not visible in the device Photos/Gallery And files at rest are encrypted using platform-supported encryption And file reads outside the app sandbox are not permitted And plaintext originals or derivatives are not written to shared/external storage at any time
Rendition and Thumbnail Generation With EXIF Orientation
Given target profiles are configured as: Wi‑Fi rendition longest edge=2048px, JPEG quality=85; Cellular rendition longest edge=1280px, JPEG quality=75; Thumbnail longest edge=256px, max size=50KB And an original photo contains any EXIF orientation value (1, 3, 6, 8, the mirrored variants 2, 4, 5, 7, or the undefined value 0) When the app generates the upload rendition and thumbnail offline Then both outputs are visually upright and match the original scene orientation And longest edge and JPEG quality match the configured profile values And the thumbnail file size does not exceed the configured max And the rendition’s EXIF orientation is normalized so that downstream displays do not double-rotate And photos without EXIF orientation remain correctly oriented (no unintended rotation)
Low-Storage Handling and Graceful Degradation
Given the device free space drops below the configured low-storage threshold (e.g., 150MB) When the user attempts to capture or import a photo Then the app displays a low-storage prompt with options to cancel or proceed And if the user proceeds, the original is saved and derivative generation is deferred with status Awaiting Space And once free space rises above the threshold, deferred derivatives are generated automatically And if free space is below the minimum required to persist an original (e.g., 20MB), capture/import is blocked with a clear error message and no partial files are written
Room and Session Association With Pre-Sync Edit Controls
Given the user captures or imports photos while offline within a specific room and session When the app persists the media locally Then each photo is associated to the correct roomId and sessionId and persists across app restarts And the room gallery orders items by client capture/import timestamp And before sync, the user can delete an item, which removes the original and any derivatives from local storage and the upload queue And before sync, the user can replace an item, which substitutes the original and derivatives and maintains the same room/session association and queue position
Chunked Upload Queue, Resume, and Network Profile Targeting
Given photos are queued while offline And chunking is configured with chunkSize=5MB and maxConcurrency=2 When connectivity is restored on Wi‑Fi Then the app selects or generates the Wi‑Fi profile rendition and enqueues chunks FIFO by client timestamp And each chunk is sent with content hash and ordering index, and server 2xx responses advance the cursor And transient failures (network/5xx) use exponential backoff with jitter and resume from the last acknowledged chunk And client errors (4xx) mark the item Failed with a visible reason and allow manual Retry And if connectivity is restored on Cellular, the app selects or generates the Cellular profile rendition before upload And if network type switches mid-item, the current item completes with its started profile, and subsequent items use the new profile And upon success, the item status becomes Synced, the Offline badge is removed, and server mediaId is stored with room/session association intact
On‑Device Encryption & Secure Storage
"As a privacy‑conscious visitor, I need my offline feedback and photos to be stored securely so that my data is protected if my phone is lost or shared."
Description

Encrypt all cached feedback and media at rest using OS keystore‑protected keys (e.g., AES‑256 with per‑app keys), and secure references to media files. Enforce secure wipes on logout/session expiry, restrict access to other apps, and minimize PII stored offline. Ensure keys can rotate via app updates, and that crash recovery never exposes plaintext. Document data retention, comply with platform policies, and integrate with TourEcho’s security posture.
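The envelope-encryption structure the criteria describe (a unique per-object data key wrapped by a keystore-protected master key, with a unique IV per object) can be sketched as below. To keep the example standard-library-only, the "cipher" is a hash-based keystream placeholder; a real implementation must use AES-256-GCM with an OS keystore-backed master key, never this construction.

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Placeholder symmetric cipher (XOR with a SHA-256 counter keystream).
    Illustrates structure only -- NOT a substitute for AES-256-GCM."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def encrypt_object(master_key: bytes, plaintext: bytes):
    data_key = secrets.token_bytes(32)    # unique per-object data key
    nonce = secrets.token_bytes(12)       # unique random IV per object
    wrap_nonce = secrets.token_bytes(12)
    # The data key is stored only in wrapped form, never in plaintext.
    wrapped_key = _keystream_xor(master_key, wrap_nonce, data_key)
    ciphertext = _keystream_xor(data_key, nonce, plaintext)
    return {"wrapped_key": wrapped_key, "wrap_nonce": wrap_nonce,
            "nonce": nonce, "ciphertext": ciphertext}

def decrypt_object(master_key: bytes, blob: dict) -> bytes:
    data_key = _keystream_xor(master_key, blob["wrap_nonce"], blob["wrapped_key"])
    return _keystream_xor(data_key, blob["nonce"], blob["ciphertext"])
```

The wrap-the-data-key pattern is what makes key rotation cheap: rotating the master key only requires re-wrapping the small data keys, not re-encrypting every media file, and invalidating the keystore alias renders all wrapped keys unusable after a wipe.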

Acceptance Criteria
Encrypt Cached Feedback and Media at Rest
Given the app must cache feedback or media offline When any feedback record or image/video is written to storage Then it is encrypted at rest using AES-256-GCM with a keystore/hardware-backed per-app master key And each object uses a unique random IV and a per-object data key wrapped by the master key And no plaintext bytes are ever written to disk, including temp files And attempts to read cached files using external tools yield no readable plaintext and show high-entropy content
Key Management and Rotation on App Update
Given a new app version enables key rotation When the app launches post-update Then a new keystore-protected master key is generated and marked active And all existing cached items are re-encrypted with the new key within defined background limits (e.g., ≥500 MB/hour) without user impact And old keys are securely destroyed after successful migration And migration progress persists so it resumes safely after interruption without data loss And an audit entry (timestamp, app version, counts) is recorded without PII
Secure Wipe on Logout and Session Expiry
Given a user logs out or the session expires When the logout/expiry action is confirmed Then all encrypted cache files, temp files, thumbnails, indices, and key material references are securely deleted And keystore aliases for data keys are invalidated so recovered files remain unusable And wipe completes within 5 seconds per 1,000 items or continues in blocking mode until finished before allowing re-login And power loss during wipe results in automatic resume to completion on next launch And post-wipe forensic scans cannot recover plaintext or valid keys
Crash Safety and No Plaintext Exposure
Given the app crashes during offline capture or sync When inspecting crash logs, temp directories, and OS reports Then no plaintext feedback, media, or keys are present in files or logs And any persisted temp artifacts are encrypted and have file protection enabled And on next launch, the app detects and securely deletes orphaned temp files before rendering UI
Access Restriction and Scoped Storage Compliance
Given Android devices (API 29+) When caching offline data Then files reside only in app-internal storage with MODE_PRIVATE and permissions 600, never on shared/external storage And no ContentProvider exposes offline media URIs to other apps Given iOS devices When caching offline data Then files are stored only within the app sandbox with NSFileProtectionComplete (or stronger) enabled And the local database stores only opaque content IDs, not absolute file paths or public URIs
Minimize Offline PII
Given offline capture collects photos and emoji ratings When data is cached Then only listingId, a rotating non-trackable deviceId, timestamps, and content hashes are stored offline And names, emails, phone numbers, precise location, and free-text comments are not written to disk Given a user enters free-text while offline When persistence is required Then text is encrypted at rest under the same scheme and redacted of detected PII before storage, or else held in-memory only until online
Data Retention, Post-Sync Purge, and Policy Alignment
Given the device comes online and sync succeeds When the server acknowledges receipt Then corresponding cached files and indices are securely deleted within 60 seconds and verification confirms zero residual plaintext Given items remain unsynced beyond the retention threshold (e.g., 14 days, configurable) When the threshold is reached Then the app prompts the user and purges per policy while preserving explicitly marked critical items And platform policy disclosures (Google Play Data Safety, App Store privacy) accurately reflect offline encryption and retention behavior, with internal checklist passing before release
Connectivity Detector & Background Sync
"As a visitor, I want my feedback to sync automatically when I’m back online so that I don’t have to remember to submit anything."
Description

Detect connectivity changes and battery/state constraints to trigger background sync of queued feedback and media with exponential backoff and resumable uploads. Support OS background task APIs, pause/resume on app lifecycle events, and provide lightweight callbacks to update the capture flow when items successfully sync. Ensure reliability across intermittent networks and preserve order by captured_at timestamps.

Acceptance Criteria
Detect Online/Offline Transitions
Given the device is online and there are zero queued items When connectivity is lost for at least 3 seconds Then the app marks network_state=offline within 1 second and routes all new captures to the local queue without attempting upload Given the device is offline with one or more queued items When connectivity becomes available (Wi‑Fi or cellular) and battery_saver=false or the device is charging Then background sync starts within 2 seconds and processes the queue Given connectivity flaps online↔offline within a 2-second window When the detector receives multiple state changes Then sync start/stop is debounced so no more than 1 transition occurs within any 5-second window
Queue and Timestamp Offline Captures
Given a photo or emoji rating is captured while network_state=offline When the capture is saved Then an item is written to persistent storage with fields [id, captured_at (ISO‑8601 UTC), type, size_bytes, checksum] and marked queued Given multiple captures occur offline at times t1 < t2 < t3 When items are enqueued Then their queue order is sorted by captured_at ascending and remains stable across app restarts Given the device clock changes after capture When syncing later Then the original captured_at is preserved and used for ordering
Exponential Backoff and Retry Policy
Given an upload attempt fails due to transport error or HTTP 5xx When scheduling the next retry Then the delay uses exponential backoff with initial_delay=2s, multiplier=2x, max_delay=5m, and ±20% jitter Given consecutive failures continue beyond the cap When subsequent retries are scheduled Then delays remain at max_delay with jitter until success or app uninstall Given a retry is scheduled When the device goes offline Then the timer pauses and resumes when online without losing backoff state
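The retry schedule above (initial_delay=2s, multiplier=2x, max_delay=5m, ±20% jitter) reduces to a few lines; this sketch injects the RNG only to make the jitter testable.

```python
import random

INITIAL_DELAY_S = 2.0
MULTIPLIER = 2.0
MAX_DELAY_S = 300.0  # 5 minutes
JITTER = 0.20        # +/- 20%

def backoff_delay(attempt, rng=None):
    """Delay in seconds before retry number `attempt` (1-based).

    Exponential growth capped at MAX_DELAY_S, then +/-20% jitter applied,
    so delays stay at the cap (with jitter) once the cap is reached.
    """
    base = min(INITIAL_DELAY_S * MULTIPLIER ** (attempt - 1), MAX_DELAY_S)
    rng = rng or random.Random()
    return base * (1 + rng.uniform(-JITTER, JITTER))
```

Persisting the attempt count (rather than the computed delay) lets the timer pause while offline and resume with the same backoff state, per the criterion above.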
Resumable Media Uploads
Given a media file is uploading in 2 MB chunks and connectivity drops after N chunks are acknowledged When connectivity returns Then the upload resumes from chunk N+1 without re-sending acknowledged chunks Given the server supports range/offset discovery When the client cannot confirm the last acknowledged chunk Then the client performs a range probe and resumes from the returned offset Given the upload completes When the server responds with success Then the client verifies full-byte-count match to the original file and marks the item synced
Background Task Support and Persistence
Given the app is backgrounded with queued items and the OS grants a background execution window When sync runs Then the app uses the OS background task API, continues uploading until expiration, and requests a subsequent window if items remain
Given the OS terminates the app during background sync When the app is relaunched or the next background window is granted Then the queue and partial upload state are restored from disk and syncing resumes within 3 seconds
Given no background window is available When the user reopens the app Then syncing resumes immediately subject to battery and network constraints
Lifecycle Pause/Resume Semantics
Given an active upload in the foreground When the app moves to the background and battery_saver=true and the device is not charging Then the upload pauses within 1 second and the current progress is checkpointed
Given uploads are paused due to lifecycle or power constraints When constraints are removed (app foregrounded, charging, or battery_saver=false) Then uploads resume within 2 seconds from the last checkpoint
Ordered Delivery and Callback Updates
Given three queued items A, B, C with captured_at A < B < C When syncing over intermittent networks Then items are committed server-side in non-decreasing captured_at order by uploading at most one item at a time
Given an item is committed on the server When the client receives the success response Then the client emits onItemSynced(item_id, captured_at, server_id) within 1 second
Given the capture flow is in the foreground When onItemSynced is emitted Then the corresponding item UI transitions to Synced state within 1 second
Sync & Conflict Resolution with Timestamps
"As a listing agent, I want feedback to remain consistent even if multiple devices or sessions submit changes so that summaries and objections are trustworthy."
Description

Implement idempotent client‑to‑server upserts using client UUIDs and server‑assigned versions. Order processing by captured_at and reconcile client vs. server timestamps using server clock on first contact. Define conflict policies (e.g., last‑write‑wins for text/ratings, additive for photos) and deduplication rules. Handle 4xx/5xx errors with clear, retryable states and escalate irrecoverable items for support tooling, ensuring the agent’s aggregate view remains consistent.
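A minimal server-side sketch of the idempotent upsert and last-write-wins policy described above, using an in-memory dict in place of the real datastore. The record shape (`client_uuid`, `note`, `rating`, `effective_timestamp`, `server_version`) is assumed for illustration.

```python
def upsert(store: dict, item: dict) -> dict:
    """Idempotent upsert keyed by client_uuid.

    - A replay of an identical payload returns the stored record unchanged
      (no duplicate, no version bump).
    - Otherwise text/ratings follow last-write-wins by effective_timestamp
      (ISO-8601 strings compare correctly lexically); stale writes lose.
    """
    existing = store.get(item["client_uuid"])
    if existing is None:
        record = {**item, "server_version": 1}
        store[item["client_uuid"]] = record
        return record
    identical = all(item.get(k) == existing.get(k)
                    for k in ("note", "rating", "effective_timestamp"))
    if identical:
        return existing  # duplicate replay: idempotent no-op
    if item["effective_timestamp"] > existing["effective_timestamp"]:
        existing.update(note=item["note"], rating=item["rating"],
                        effective_timestamp=item["effective_timestamp"])
        existing["server_version"] += 1  # exactly one bump per winning write
    return existing
```

Photos would follow the separate additive-merge policy rather than last-write-wins; superseded values would additionally be copied to an audit history, which is omitted here.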

Acceptance Criteria
Idempotent Client Upserts
Given an offline feedback item with client_uuid X and local_version 1 exists on the device When the client submits the item to the server and transient retries cause the same payload to be sent multiple times Then the server performs a single upsert for client_uuid X, returning resource_id and server_version 1 And subsequent identical upserts for client_uuid X do not create duplicates and do not increment server_version And the client stores server_version 1 and marks the item state as synced
Server Clock Reconciliation and Ordering by captured_at
Given the device clock is 5 minutes behind the server clock And two items A and B are captured offline with captured_at T1 and T2 where T1 < T2 When the device first contacts the server and the server computes and applies the clock offset for this client Then the server orders and persists A before B by effective_timestamp (captured_at adjusted by server offset) And list APIs ordered by captured_at return A then B And the first-contact item’s effective_timestamp is aligned to server time within ±100ms
Last-Write-Wins for Text and Ratings
Given two devices edit the same feedback’s note and rating offline producing updates U1 and U2 with effective_timestamp 10:00:00 and 10:01:00 When both updates are synced to the server Then the server persists the note and rating from U2 per last-write-wins (highest effective_timestamp) And the superseded values from U1 are recorded in audit history with their server_version And the final persisted record’s server_version is incremented exactly once for the winning write
Additive Photo Merge with Deduplication
Given a feedback item has photos captured on multiple devices while offline When devices sync photos to the server Then the server merges photos additively and deduplicates by content hash (e.g., SHA-256), preserving captured_at order And re-uploading a photo with an existing hash returns 200 with no new photo created And aggregate photo counts reflect the number of unique hashes
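The additive photo merge above reduces to set membership on content hashes. A sketch, with the per-photo record shape assumed:

```python
import hashlib

def merge_photos(existing: list, incoming: list) -> list:
    """Additively merge (blob, captured_at) uploads into the stored photo list.

    One photo survives per SHA-256 content hash; re-uploading a known hash
    is a no-op, and the result stays ordered by captured_at.
    """
    seen = {p["sha256"] for p in existing}
    merged = list(existing)
    for blob, captured_at in incoming:
        digest = hashlib.sha256(blob).hexdigest()
        if digest not in seen:
            seen.add(digest)
            merged.append({"sha256": digest, "captured_at": captured_at})
    return sorted(merged, key=lambda p: p["captured_at"])
```

Aggregate photo counts then equal the number of unique hashes, exactly as the criteria state.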
Error Handling, Retries, and Escalation
Given the client attempts to sync an item and receives HTTP 503 When the client retries with exponential backoff starting at 2s, doubling up to a 60s max interval, for up to 7 attempts Then the item state transitions queued_offline -> syncing -> retry_scheduled between attempts and remains retryable And upon a successful sync (HTTP 200), the state becomes synced and retry metadata is cleared And for a non-retryable 4xx (e.g., 422 validation error), the item enters needs_support with error_code and payload snapshot available to support tooling
Consistent Agent Aggregate View Post-Sync
Given two devices submit overlapping offline updates and photos for the same feedback When both devices complete sync and pull the latest from the server Then both devices display identical aggregates: deduplicated photo count, latest note and rating per feedback, and a timeline sorted by effective_timestamp And no duplicate feedback items appear for the same client_uuid And aggregate metrics (e.g., average rating) exactly match server-calculated values

MLS Roster Match

Instantly validates a visitor by matching their MLS ID, name, and brokerage against trusted rosters. Auto-applies a Verified Agent badge and populates firm details without accounts. Agents trust the feedback, coordinators cut spam, and admins get clean, licensed attribution on every submission.

Requirements

Multi-MLS Roster Ingestion & Sync
"As a TourEcho admin, I want MLS rosters ingested and kept current across markets so that agent verifications are accurate without manual uploads."
Description

Implement connectors to multiple MLS sources (RESO Web API/RETS, SFTP/CSV uploads) with scheduled and on-demand sync. Normalize schemas (agent, license, brokerage, status), deduplicate agents across MLSes, and maintain a canonical roster with source provenance. Support delta updates, webhook/change data capture where available, retries/backoff, and alerting on failures. Expose a health dashboard and SLA metrics. Persist MLS ID, license number, agent name, brokerage/legal entity, office IDs, and status (active/inactive). Integrate with TourEcho’s data layer for low-latency reads by downstream services.

Acceptance Criteria
Nightly RESO API Delta Sync to Canonical Roster
Given a configured RESO Web API source with last_successful_sync timestamp T When the scheduler triggers a nightly sync at 02:00 local time Then the connector requests only records changed since T using server-side delta filters or timestamps And paginates until completion with HTTP 2xx responses and aggregate throughput ≥ 5k records/min And upserts normalized fields into the canonical roster: source_mls_id, license_number, full_name, brokerage_legal_name, office_ids, status ∈ {active,inactive}, source_provenance (source, source_agent_id, fetched_at) And removed/terminated source records are mapped to status=inactive (not hard-deleted) And the run emits SLA metrics: start/end time, duration, records_read, records_upserted, records_inactivated, error_count And P95 downstream roster-read latency remains ≤ 50 ms within 10 minutes after sync completion
On-Demand Sync Trigger and Progress Visibility
Given an authorized operator triggers an on-demand sync for source X via API or dashboard When the job is accepted Then a job_id is returned within 2 seconds and initial status=queued And progress updates are available at least every 30 seconds with processed_count, total_count (or unknown), rate, ETA, and current_phase And the operator can pause or cancel the job; cancelation stops new writes and leaves partial upserts consistent And upon completion a summary is persisted and viewable (success/fail, metrics, top errors) And roster changes from the job become visible to downstream services within ≤ 60 seconds of job completion
SFTP CSV Ingestion with Schema Validation and Idempotency
Given a CSV file arrives at sftp://…/ingest/mls/Y/YYYYMMDD_agents.csv When the ingestion pipeline detects the file Then the file is checksummed and processed exactly-once; re-uploads with the same checksum are skipped as duplicates And headers are mapped to normalized schema; required columns present: mls_agent_id, license_number, full_name, brokerage_legal_name, status And row-level validation enforces: non-empty license_number and mls_agent_id; status ∈ {Active, Inactive}; office_ids may be empty but valid when present And invalid rows are quarantined with row_number and reason; valid rows continue; job fails if < 98% rows valid for files > 10k rows And a delivery report (processed, upserted, quarantined, errors) is stored and sent to the alert channel
Cross-MLS Deduplication and Conflict Resolution
Given agent records from multiple sources share the same license_number and licensing_jurisdiction When building/refreshing the canonical roster Then a single canonical_agent_id is maintained with links to all contributing sources (source, source_agent_id, office_ids) And conflict resolution applies in order: (1) active beats inactive, (2) most recent source updated_at wins for ties, (3) configured primary_mls overrides for designated markets And full_name and brokerage_legal_name are selected from the winning source; non-conflicting attributes are unioned And every merge decision is audit-logged with rule_applied and source_evidence And on a labeled QA set, false-merge rate ≤ 0.5% and missed-merge rate ≤ 1.0%
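The three-step conflict-resolution order above maps cleanly onto a tuple ranking. A sketch, assuming records carry `status`, `updated_at` (ISO-8601), and `source` fields as described:

```python
def pick_winner(candidates, primary_mls=None):
    """Choose the source record that supplies full_name/brokerage_legal_name.

    Rules applied in order, as in the criteria:
      (1) active beats inactive,
      (2) most recent updated_at wins on ties,
      (3) a configured primary_mls breaks remaining ties.
    Python compares the rank tuples element by element, which encodes
    exactly that precedence.
    """
    def rank(rec):
        return (
            rec["status"] == "active",     # rule 1
            rec["updated_at"],             # rule 2 (ISO-8601 sorts lexically)
            rec["source"] == primary_mls,  # rule 3
        )
    return max(candidates, key=rank)
```

Non-conflicting attributes would still be unioned across sources, and each decision audit-logged with the rule applied; only the winner selection is shown here.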
Webhook/CDC Delta Consumption with Backoff and DLQ
Given a source provides webhook/CDC events for roster changes When events are received Then they are acknowledged, persisted, and processed with at-least-once semantics And transient failures are retried with exponential backoff (initial 1s, factor 2, max 5 attempts); permanent failures are routed to a DLQ with error_reason and payload And DLQ items can be reprocessed manually or on schedule; reprocess success rate ≥ 95% after remediation And P95 end-to-end lag from event receipt to canonical write ≤ 2 minutes under nominal load (≤ 1k events/min)
Health Dashboard and Alerting for Connectors
Given an operator opens the health dashboard When viewing the connectors list Then each source shows: status ∈ {Healthy, Degraded, Failed}, last_successful_sync, backlog_size, error_rate_24h, p95_sync_duration, event_lag_minutes And threshold breaches trigger alerts to Slack and email within 2 minutes with run_id and top error codes And clicking a source reveals the last 20 runs, failure logs, DLQ counts, and a link to reprocess And access is protected by SSO and role-based permissions (ops can reprocess, viewers cannot)
Low-Latency Roster Read for MLS Roster Match
Given the MLS Roster Match service queries by mls_agent_id, license_number, or name+brokerage When serving 500 RPS with a 99% read workload Then P95 latency ≤ 50 ms and P99 ≤ 150 ms at the service boundary And data freshness P95 ≤ 5 minutes from upstream change; updates visible within 60 seconds post-commit via cache invalidation And availability ≥ 99.9% over a rolling 30 days with no single-region dependency
Agent Identity Matching Engine
"As a showing coordinator, I want the system to reliably recognize visiting agents even with slight name variations so that I can trust the Verified Agent badge."
Description

Create a deterministic+probabilistic matching service that prioritizes exact MLS ID matches and falls back to fuzzy matching on name, brokerage/office, and license. Calculate a confidence score with tunable thresholds, handle aliases and common-name collisions, and prevent false positives via disambiguation rules. Maintain a canonical agent profile with cross-MLS linkages and historical IDs. Expose a low-latency lookup API, return match status (verified/unverified), confidence, and matched attributes. Log all decisions for audit, and provide safeguards against duplicate or conflicting matches.
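A toy illustration of the deterministic-first, probabilistic-fallback flow described above. The similarity measure (stdlib `difflib`), the 0.7/0.3 weighting, and the roster field names are assumptions for the sketch; a production matcher would use tuned scorers, alias tables, and disambiguation rules.

```python
from difflib import SequenceMatcher

VERIFIED_THRESHOLD = 0.90  # tunable, per the requirement

def match(request: dict, roster: list) -> dict:
    """Exact MLS ID hit verifies outright; otherwise fuzzy name+brokerage."""
    # Deterministic pass: a unique exact MLS ID match short-circuits.
    if request.get("mls_id"):
        hits = [r for r in roster if r["mls_id"] == request["mls_id"]]
        if len(hits) == 1:
            return {"match_status": "verified", "confidence": 0.99, "agent": hits[0]}
    # Probabilistic fallback: weighted string similarity (illustrative weights).
    best, best_score = None, 0.0
    for r in roster:
        score = (0.7 * SequenceMatcher(None, request.get("name", "").lower(),
                                       r["name"].lower()).ratio()
                 + 0.3 * SequenceMatcher(None, request.get("brokerage", "").lower(),
                                         r["brokerage"].lower()).ratio())
        if score > best_score:
            best, best_score = r, score
    if best_score >= VERIFIED_THRESHOLD:
        return {"match_status": "verified", "confidence": round(best_score, 2), "agent": best}
    return {"match_status": "unverified", "confidence": round(best_score, 2), "agent": None}
```

Note the asymmetry the requirement calls for: below the threshold the service degrades to "unverified" rather than guessing, which is the main safeguard against false positives.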

Acceptance Criteria
Exact MLS ID Auto-Verification
Given a lookup request includes an MLS ID that exists in the trusted roster and no name/brokerage conflicts When the MLS ID exactly matches a single roster record Then the service returns match_status "verified" and confidence >= 0.99 And the response includes matched_attributes: mls_id, canonical_agent_id, legal_name, brokerage_name, office_id, license_number And the decision log records rule "exact_id_match" = true and the roster source version used And the P95 latency for this request type is <= 200 ms over a 5-minute window at 50 RPS
Fuzzy Match Fallback without MLS ID
Given a lookup request omits MLS ID and provides normalized name, brokerage_name, and license_number When the fuzzy matcher yields a best candidate score >= 0.90 and disambiguation rules pass Then the service returns match_status "verified" with confidence equal to the computed score And the response includes matched_attributes for the canonical agent
Given a lookup request omits MLS ID and provides normalized name and brokerage_name but no license_number When the best candidate score is between 0.60 and 0.89 inclusive or disambiguation fails Then the service returns match_status "unverified" with confidence equal to the computed score And matched_attributes is empty And the decision log records decision_reason "LOW_CONFIDENCE" or "AMBIGUOUS"
Given a lookup provides name and license_number that uniquely match but a brokerage mismatch is detected within 90 days of a known brokerage change When the candidate score >= 0.90 Then verification proceeds if license_number corroborates and alias rules permit And the decision log records "brokerage_mismatch_ignored_due_to_license"
Tunable Confidence Thresholds
Given an admin updates verified_threshold to 0.92 and reject_threshold to 0.50 via configuration When a new lookup is processed after the change Then the new thresholds are applied within 2 minutes without service restart And every decision log includes threshold_version and applied values
Given a candidate score of 0.91 When verified_threshold is 0.92 Then match_status is "unverified"
Given a candidate score of 0.49 When reject_threshold is 0.50 Then match_status is "unverified" And the decision log records decision_reason "BELOW_REJECT_THRESHOLD"
Canonical Profile and Cross-MLS Linkage
Given an agent appears in multiple MLS rosters with a consistent license_number and matching normalized names When the linking job runs Then exactly one canonical_agent_id is maintained And linked_mls_ids includes all active and historical MLS IDs And the lookup API returns matched_attributes including canonical_agent_id and active_mls_id for verified matches And duplicate canonical profiles created per 100k ingested records <= 0.01%
Given an MLS changes an agent's MLS ID but license_number remains constant When new rosters are ingested Then the old MLS ID is retained in historical_ids and the new MLS ID becomes active without breaking lookups
Lookup API Contract and Latency
Given the /v1/match lookup endpoint receives a valid request When processed successfully Then the JSON response contains fields match_status ("verified"|"unverified"), confidence (0.00-1.00), and matched_attributes (object) And P95 latency <= 250 ms and P99 <= 500 ms under 50 RPS for mixed traffic over a 5-minute window And identical requests within 5 minutes return the same match_status and confidence (idempotent)
Given the /v1/match endpoint receives a syntactically invalid request When validation fails Then the service returns HTTP 400 with a validation error list And no decision is recorded in the audit log
Decision Logging and Auditability
Given any lookup request is processed When a decision is made Then a write-once audit record is stored within 200 ms of the API response And it contains hashed input payload, normalized attributes used, candidate scores, rules fired, final status, confidence, timestamp, roster source versions, and a globally unique decision_id And sensitive identifiers are stored hashed with a daily-rotated salt And audit records are retained for 13 months and are queryable by time range and MLS ID And 100% of requests in a 10k batch produce a corresponding audit record
Duplicate and Conflicting Match Safeguards
Given a lookup input matches two canonical profiles with scores within 0.02 of each other When neither candidate satisfies disambiguation requirements Then match_status is "unverified" And no canonical profile is merged or altered And the audit log records conflict=true with both candidate IDs
Given two concurrent processes attempt to create canonical profiles for the same license_number When processed under load Then at most one canonical profile is created using uniqueness constraints or optimistic concurrency And the losing transaction is rolled back without partial state
Real-time Verification & Autofill on QR Submission
"As a visiting buyer’s agent, I want my identity auto-verified and my brokerage filled in as I scan the QR so that I can submit feedback quickly without creating an account."
Description

On QR form open and submission, perform a low-latency roster lookup (<300ms p95) using user-entered MLS ID/name and cached context. If matched, auto-apply the Verified Agent badge, autofill brokerage/office and license fields, and hide redundant inputs to streamline the flow. Implement edge caching and graceful degradation: if offline or slow, queue a deferred verification and label the submission as Pending Verification. Propagate verification status and firm details to the submission record and agent-facing readouts. Emit analytics events for verification outcomes and latencies.

Acceptance Criteria
p95 Lookup Latency Under 300ms on Form Open and Submission
Given the QR form is opened on a stable connection When the form triggers a roster lookup on open and again on submit Then the lookup service achieves p95 latency ≤ 300ms over the last 1,000 lookups per environment And timeouts (> 2000ms) occur in < 1% of attempts And the UI remains responsive (no input fields blocked) while the lookup is in flight
Verified Agent Badge and Autofill on Positive Match
Given the entered MLS ID and name uniquely match a trusted roster entry When the lookup returns a successful match Then a Verified Agent badge is displayed within the form immediately And brokerage, office, and license fields are autofilled with roster values And redundant inputs for these values are hidden and not required And the submission payload sets verificationStatus = "Verified" and includes firm details
No Match or Mismatch — Inputs Visible and No Badge
Given the lookup returns no match or a mismatch between MLS ID and name When the result is received Then no Verified Agent badge is shown And brokerage, office, and license inputs remain visible and required for submission And the submission payload sets verificationStatus = "Unverified"
Offline or Slow Lookup — Deferred Verification and Pending Label
Given the device is offline or the lookup has not completed within 1000ms at submission time When the user submits the QR form Then the submission is accepted without blocking the user And the submission is labeled verificationStatus = "Pending Verification" And a deferred verification job is enqueued with MLS ID, name, and cached context And upon deferred completion, the record is updated to Verified with firm details on match or Unverified on no match, without requiring user action
Propagation to Submission Record and Agent Readouts
Given any verification outcome (Verified, Pending Verification, Unverified) When the submission is saved and later viewed in agent-facing readouts Then the verification badge and firm details render consistently with the stored outcome And the submission record contains verificationStatus, verifiedAt (if applicable), rosterSource, and matchId And these fields are exposed in exports and APIs for downstream use
Edge Caching and Graceful Fallback
Given repeated lookups for the same MLS identifiers or roster segments When lookups are executed under load in staging Then ≥ 70% of successful responses are served from the edge cache over 1,000 requests And cache HIT responses have median latency ≤ 80ms And on cache MISS the system fetches from origin and populates the cache And on edge/cache failure the system falls back to origin without user-visible errors
Analytics for Verification Outcomes and Latencies
Given each lookup attempt at form open or submit When the lookup completes or times out Then an analytics event is emitted with correlationId, outcome (verified|unverified|pending|error), latencyMs, source (formOpen|submit), and cacheStatus (HIT|MISS|BYPASS) And ≥ 99% of events are delivered to the analytics pipeline within 60 seconds And dashboards/reporting expose p50/p95 latency and outcome rates by source and cacheStatus
Admin Overrides & Audit Trail
"As a broker-owner admin, I want the ability to review and correct verification outcomes so that attribution remains clean and compliant."
Description

Provide an admin console to search submissions and matches, view confidence and evidence, approve/reject or remap matches, merge/split canonical agent profiles, and whitelist/blacklist MLS IDs or brokerages. Maintain an immutable audit log of actions and automated decisions with timestamps, actors, inputs, and before/after values. Support exports for compliance and BI. Enforce role-based access control and permissions. Surface KPIs such as verification rate by market and false-positive/negative rates.

Acceptance Criteria
Admin searches submissions and views match evidence
Given an admin with Search permission, When they search by MLS ID, name, brokerage, market, listing ID, or date range, Then results return within 2 seconds for up to 10,000 records and include submission_id, created_at_utc, listing_id, entered_name, matched_agent_id, decision_type (auto/manual), confidence (0–100), and evidence_summary. Given results are paginated 50 per page, When the admin changes page or sorts by created_at or confidence, Then the list updates accordingly and the total count remains accurate. Given a result is opened, When Match Details is viewed, Then it shows matched attributes with per-attribute similarity scores, source roster identifiers, and the rules/threshold version used.
Approve, reject, or remap a match
Given a submission in any decision state, When Approve is applied by an admin, Then the submission is marked verified, the Verified Agent badge appears via API/UI within 60 seconds, and an audit event is recorded. Given a submission, When Reject is applied with a required reason, Then the submission is marked unverified, attribution is cleared, downstream consumers reflect the change within 60 seconds, and an audit event is recorded. Given a submission, When Remap is applied to a selected canonical agent profile, Then all references update to the new profile, decision_type is set to ManualOverride, confidence is set to 100, and an audit event captures before/after agent IDs. Given automation re-runs on the same submission, When a manual override exists, Then the manual decision takes precedence and is not altered.
Merge and split canonical agent profiles
Given duplicate canonical agent profiles A and B, When Merge B into A is executed, Then all linked submissions, identifiers, and rules re-point to A, profile B is marked merged, and an audit event records before/after counts and IDs. Given a merged profile is incorrect, When Split is executed and the admin selects items to move, Then a new canonical profile is created, selected links move to it, and search indexes reflect the change within 5 minutes. Given a merge or split completes, Then no orphaned links remain (0 referential integrity errors across stores).
Manage whitelist and blacklist rules
Given an MLS ID or brokerage, When added to Whitelist with scope (market/listing/global) and optional expiry, Then future submissions within scope auto-verify with decision_type=Whitelist, confidence=100, and an audit event is recorded. Given an MLS ID or brokerage, When added to Blacklist with scope and optional expiry, Then future submissions within scope auto-reject and cannot be verified without Override permission, and an audit event is recorded. Given a whitelist/blacklist entry is edited or removed, When saved, Then before/after values are logged and the change applies immediately to new submissions and to past 30 days if Retroactive Apply is selected.
Immutable audit log for actions and automations
Given any automated decision or admin action occurs, Then a write-once audit event is stored with utc_timestamp, actor_id and role (or system), action_type, target_ids, inputs, and before/after values. Given a user attempts to modify or delete an audit event, When the request is made, Then the system returns 403 Forbidden and records a tamper_attempt audit event. Given audit events are queried by date range, actor, or action_type, When results are returned, Then each event includes an immutable_id and checksum, and no two events share the same immutable_id.
Export audit and match data for compliance and BI
Given an admin selects an export type (audit events, submissions, matches) and time range, When Export is started, Then a CSV and NDJSON are generated within 2 minutes for up to 1,000,000 rows with documented fields and UTC timestamps. Given an export completes, Then a signed download link valid for 24 hours is returned and an audit event is recorded. Given role-based field restrictions, When an export is generated by a non-admin, Then restricted fields are redacted or omitted per policy.
Role-based access and permission enforcement
Given roles Admin, Coordinator, Agent, and ReadOnly exist, When users authenticate, Then their role is attached to the session and included in audit actor role. Given a Coordinator, Agent, or ReadOnly user attempts Approve/Reject/Remap, Merge/Split, or manage Whitelist/Blacklist without permission, Then the action is blocked with 403 and an audit event is recorded. Given permissions are updated by a system owner, When changes are saved, Then they take effect within 1 minute across services and are logged.
Anti-Spam & Abuse Protections
"As a showing coordinator, I want spam and bot submissions filtered before they reach the agent so that only legitimate, licensed feedback is processed."
Description

Add layered defenses including rate limiting by IP/device, behavioral throttles after repeated failed matches, CAPTCHA challenges on anomalies, and HMAC-signed QR tokens to validate listing context. Validate and normalize inputs, detect bursts from suspicious ASNs, and integrate a blocklist/allowlist. Ensure protections are adaptive and do not degrade verified user UX. Provide observability dashboards and alerts for spam attempts and protection efficacy.

Acceptance Criteria
IP and Device Rate Limiting Enforcement
Given a source IP has made 10 verification attempts within 10 minutes, When an additional attempt occurs, Then respond with HTTP 429 and a Retry-After header to the next available slot and log reason=ip_rate_limit. Given a device fingerprint has made 5 verification attempts within 10 minutes, When an additional attempt occurs, Then respond with HTTP 429 and a Retry-After header and log reason=device_rate_limit. Given an IP or device is on the allowlist, When limits are evaluated, Then apply elevated thresholds of 50/IP/10min and 25/device/10min and still log evaluations. Given a rate-limit block is in effect, When the window resets, Then the block automatically clears without manual intervention.
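The per-IP and per-device limits above are a classic sliding-window counter. A minimal in-memory sketch; production enforcement would live in a shared store (e.g. Redis) and emit the Retry-After header and structured log shown in the criteria.

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Per-key sliding-window limiter (e.g. 10 attempts / 10 min per IP).

    Blocks clear automatically once old hits age out of the window, matching
    the "block automatically clears without manual intervention" criterion.
    """
    def __init__(self, limit: int, window_s: float):
        self.limit, self.window_s = limit, window_s
        self.hits = defaultdict(deque)

    def allow(self, key: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        while q and now - q[0] >= self.window_s:  # drop hits outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False  # caller responds 429 with Retry-After and logs the reason
        q.append(now)
        return True
```

Allowlisted IPs/devices would simply be constructed with the elevated thresholds (50/IP, 25/device per 10 minutes) while still logging every evaluation.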
Behavioral Throttle After Repeated Failed Matches
Given 3 consecutive roster match failures for the same IP or device within 5 minutes, When a fourth attempt is made, Then enforce exponential backoff (min 30s doubling to max 5m) and return HTTP 429 with a next_allowed_at timestamp. Given a throttle is active, When a valid roster match occurs, Then reset the throttle and clear backoff for that IP/device for 24 hours. Given no further attempts are made for 60 minutes, When throttle expiry is checked, Then automatically clear the throttle. Given 2 or more distinct names or MLS IDs are rotated on the same device/IP within 5 minutes, When detection occurs, Then maintain the throttle and require a CAPTCHA on the next attempt.
Anomaly-Triggered CAPTCHA Challenges
Given the anomaly score is >= 0.8 based on signals (suspicious ASN, Tor exit node, country mismatch, or rapid submissions), When a verification attempt occurs, Then present a CAPTCHA before roster lookup and proceed only on successful solve. Given a verified session (valid HMAC QR + successful roster match), When anomaly score < 0.95, Then do not present a CAPTCHA. Given a CAPTCHA is presented, When the user fails or times out within 120 seconds, Then block the attempt and log reason=captcha_failed. Given CAPTCHAs are issued, When metrics are aggregated hourly, Then record challenge_rate, solve_rate, and false_positive_estimate.
HMAC-Signed QR Token Validation and Binding
Given a QR token with fields listing_id, nonce, exp, and HMAC-SHA256 signature, When the signature is invalid or exp < now, Then reject with HTTP 403 and reason=invalid_token without performing roster lookup. Given a valid QR token, When verification begins, Then prefill listing_id context and restrict submissions to that listing_id. Given the same token is used from more than 5 unique devices within 10 minutes, When the 6th attempt occurs, Then flag anomaly and require a CAPTCHA before proceeding. Given key rotation occurs, When tokens signed by the prior key are presented within 7 days, Then accept them; otherwise reject with reason=stale_key.
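A compact sketch of the HMAC-SHA256 token format above, using only the Python standard library. The `body.sig` wire format is an assumption; the criteria fix only the fields (listing_id, nonce, exp) and the algorithm. Key rotation and the 7-day grace window for the prior key are omitted.

```python
import base64
import hashlib
import hmac
import json
import time

def sign_token(key: bytes, listing_id: str, nonce: str, exp: int) -> str:
    """Encode {listing_id, nonce, exp} and append an HMAC-SHA256 signature."""
    body = json.dumps({"listing_id": listing_id, "nonce": nonce, "exp": exp},
                      sort_keys=True, separators=(",", ":")).encode()
    sig = hmac.new(key, body, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(body).decode() + "."
            + base64.urlsafe_b64encode(sig).decode())

def verify_token(key: bytes, token: str, now=None):
    """Return the payload if signature and expiry check out, else None (-> HTTP 403).

    Verification happens before any roster lookup, as the criteria require.
    """
    now = int(time.time()) if now is None else now
    try:
        body_b64, sig_b64 = token.split(".")
        body = base64.urlsafe_b64decode(body_b64)
        sig = base64.urlsafe_b64decode(sig_b64)
    except ValueError:
        return None
    # Constant-time comparison prevents timing side channels on the signature.
    if not hmac.compare_digest(hmac.new(key, body, hashlib.sha256).digest(), sig):
        return None
    payload = json.loads(body)
    return payload if payload["exp"] >= now else None
```

On success the verified `listing_id` prefills the form context and scopes the submission; any tampering, truncation, or expiry yields `None` without touching the roster service.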
Preserving Verified Agent UX
Given a user successfully matches the MLS roster, When continuing within the same device/session for 24 hours, Then no CAPTCHA is shown and elevated rate limits apply (50/IP/10min, 25/device/10min). Given anti-spam controls are active, When measuring added latency, Then p95 additional overhead is <= 100 ms and p99 <= 200 ms across a rolling 24 hours. Given all verified sessions over a rolling 7 days, When calculating CAPTCHA incidence, Then <= 0.1% of verified sessions see a CAPTCHA; otherwise trigger an alert.
Observability Dashboards and Alerts
Given the system is running, When accessing the Anti-Spam dashboard, Then display time series for total requests, blocks by reason, CAPTCHA challenges/solves, invalid_token rate, rate-limit events, throttle events, and top ASNs/IPs, filterable by listing_id and time ranges (1h/24h/7d). Given anomaly spikes, When invalid HMAC tokens exceed 1% of attempts for 5 consecutive minutes, Then send an alert to Slack #ops-antiabuse and email admins within 2 minutes including links and top sources. Given verified sessions see CAPTCHA > 0.1% for 10 minutes or total block rate > 10% for 5 minutes, When alerting rules evaluate, Then fire alerts and annotate the dashboard. Given a blocked event occurs, When logging, Then emit a structured log with trace_id, reason, ip, device_id, asn, listing_id, and action taken.
ASN Burst Detection and List Management
Given more than 200 attempts from over 50 unique IPs within the same ASN occur in 10 minutes with less than 5% successful matches, When evaluated, Then automatically block that ASN for 30 minutes and log reason=asn_burst_block. Given an ASN or IP is on the allowlist, When conflicts occur with block rules, Then allowlist takes precedence and no block is applied, while still logging and counting metrics. Given an admin updates the blocklist/allowlist via UI or API, When saved, Then changes propagate to enforcement within 60 seconds and are auditable with who/when/what details. Given an ASN block is auto-applied, When the 30-minute window elapses, Then remove the block automatically unless re-triggered.
Privacy, Consent & Data Retention
"As a compliance-conscious admin, I want verification to respect privacy regulations and retention policies so that our brokerage remains compliant while benefiting from verified feedback."
Description

Implement privacy-by-design for roster lookups: display concise consent language on the QR form, minimize PII collection, and encrypt roster and submission data in transit and at rest. Respect MLS data usage terms and support GDPR/CCPA requests (access, deletion). Apply retention schedules for roster snapshots and submissions, redact PII in logs, and restrict access via RBAC. Document data flows and complete a privacy impact assessment.

Acceptance Criteria
QR Consent Capture on Visitor Form
- Given a visitor opens the QR form, When the form loads, Then a consent notice (<=300 chars) describing roster lookup purpose, data usage, and links to Privacy Policy and MLS terms is displayed above the primary action.
- Given the consent checkbox is unchecked, When the visitor taps Submit, Then submission is blocked and an inline error is shown without collecting any data.
- Given the visitor checks the consent box and submits, When the submission is processed, Then the system stores consentAccepted=true, consentNoticeVersion, privacyPolicyVersion, timestamp, and locale with the submission.
- Given assistive technology is used, When navigating the consent control, Then the consent text and checkbox are accessible (WCAG 2.1 AA) and keyboard operable.
PII Minimization & Log Redaction
- Given the QR form renders, When fields are displayed, Then only MLS ID (required), Last Name (optional for disambiguation), and feedback inputs are present; no email, phone, address, or free-text contact fields are rendered.
- Given a valid MLS ID is entered, When a roster match is found, Then Name and Brokerage auto-populate as read-only from the roster and are stored with the submission; no additional PII is stored.
- Given application, access, and analytics logs are emitted, When events include identifiers, Then MLS ID and names are redacted or replaced with a salted hash; raw PII does not appear in logs.
- Given an error occurs during roster lookup, When the error is logged, Then the message excludes raw PII and includes only a request correlation ID.
- Given CI runs, When static checks scan the codebase, Then any logging call containing disallowed keys (mls_id, name, email, phone) fails the build.
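The salted-hash redaction called for above can be sketched as a small helper that runs before any record reaches the log sink. This is a minimal illustration, assuming an environment-scoped salt (the `LOG_SALT` value and function names are hypothetical):

```python
import hashlib
import hmac

LOG_SALT = b"rotate-me-per-environment"  # hypothetical deployment secret

def redact(value: str) -> str:
    """Replace an identifier with a salted, non-reversible hash for logging."""
    digest = hmac.new(LOG_SALT, value.encode(), hashlib.sha256).hexdigest()
    return f"redacted:{digest[:16]}"

# Keys the CI static check would also flag if logged raw.
DISALLOWED_KEYS = {"mls_id", "name", "email", "phone"}

def safe_log_fields(fields: dict) -> dict:
    """Redact any disallowed keys before the record is emitted."""
    return {k: (redact(str(v)) if k in DISALLOWED_KEYS else v)
            for k, v in fields.items()}
```

Using an HMAC keyed by the salt keeps the output deterministic (the same MLS ID correlates across log lines) while remaining non-reversible without the salt.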
Encryption In Transit and At Rest
- Given any client connects to TourEcho, When making requests, Then TLS 1.2+ is enforced with HSTS (max-age >= 31536000; includeSubDomains; preload) and no HTTP endpoints are reachable.
- Given external security scans run, When testing TLS, Then the primary domain scores "A" or better on SSL Labs and disallows weak ciphers and TLS <= 1.1.
- Given data is stored, When persisted to databases, object storage, search indexes, and backups, Then AES-256 encryption at rest using KMS-managed keys is applied; keys rotate at least every 365 days.
- Given data exports are generated, When files are created, Then they are encrypted at rest and delivered via time-limited, signed URLs.
Data Subject Access & Deletion Requests
- Given a verified requestor proves control of the MLS roster email for an MLS ID, When an access request is submitted via admin UI or API, Then the system produces a complete, machine-readable export (JSON and CSV) of all associated roster matches and submissions within 15 minutes and notifies the requestor.
- Given a verified deletion request is approved, When processed, Then personally identifying fields (name, brokerage, MLS ID) are irreversibly deleted from primary stores and search within 24 hours and queued for purge from backups within 35 days; aggregate analytics remain.
- Given a deletion has completed, When attempting to retrieve the subject's data via internal tools, Then no records containing their PII are returned.
- Given access or deletion actions occur, When executed, Then an immutable audit log records actor, action, timestamp, and hashed MLS ID.
Retention & Deletion Schedules
- Given retention policies are configured, When no MLS-specific overrides exist, Then roster snapshots are retained for 180 days and submissions for 24 months by default.
- Given retention windows elapse, When the nightly purge job runs, Then expired roster snapshots and submissions are deleted or PII is anonymized, and a purge report is stored for auditing.
- Given an MLS requires a shorter retention, When an override is set, Then the system applies the MLS-specific schedule to all data originating from that MLS.
- Given a DSAR deletion exists, When the purge job runs, Then records for that subject are prioritized for immediate purge irrespective of default schedules.
RBAC Access Controls on Roster & Submissions
- Given role definitions exist (Admin, Coordinator, Listing Agent, Engineer-ReadOnly), When users access roster matches and submissions, Then row-level access restricts data to listings within the user's scope; non-scope requests return 403 with no data leakage.
- Given a Listing Agent or Coordinator views feedback for their listing, When a submission is verified, Then the visitor's name, brokerage, and Verified badge are visible; MLS ID is partially masked by default; raw PII download requires Admin.
- Given an Engineer accesses production, When querying application data, Then PII fields are masked or denied unless a break-glass approval is recorded; all access is audited.
- Given API tokens with insufficient scopes are used, When calling PII endpoints, Then the call is rejected with 403 and an audit event is written.
Compliance Documentation & MLS Terms
- Given documentation is maintained, When reviewing compliance artifacts, Then a current data flow diagram and records of processing (with systems, data elements, and transfers) are stored in the repo and linked from the runbook.
- Given privacy review is required, When changes introduce new data elements or destinations, Then a Privacy Impact Assessment is completed and approved by Legal/Privacy before release; artifacts are versioned and dated within the last 12 months.
- Given MLS roster data is ingested, When usage checks run, Then the system records MLS terms acceptance and expiry per source and blocks ingestion and use if terms are missing, expired, or scope prohibits the intended use.
- Given public UIs display verification outcomes, When attribution is shown, Then required MLS attribution and fair-use notices are displayed.

SMS Trust Link

Frictionless one-tap verification via secure SMS link or fallback PIN—no app or login required. Works across domestic and international numbers so visitors confirm identity in seconds. Boosts scan-to-submit completion while ensuring every response is signed and attributable.

Requirements

One-Tap Magic Link Verification
"As a showing visitor, I want to verify with a single tap from an SMS link so that I can submit feedback quickly without creating an account."
Description

Send a short-lived, signed verification URL via SMS immediately after QR scan or phone capture. A single tap on the link verifies the visitor and deep-links them back into the browser-based feedback flow—no app or login required. Bind the verification to the specific listing, session, and device context; record timestamp and attach a non-reversible phone hash to subsequent submissions for attribution. Support default browsers on iOS/Android, branded short links, and graceful error handling when links are blocked or expired.
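The token inside the magic link can be sketched as a signed payload bound to listing, session, and device with a 10-minute expiry. This minimal version uses a symmetric HMAC for brevity (a production build might use an asymmetric signature and a KMS-managed key); all names here are illustrative:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-signing-key"  # hypothetical; would come from a KMS in practice
TTL = 600                            # 10-minute link lifetime

def issue_link_token(listing_id, session_id, device_id, now=None):
    """Create a signed token bound to the listing/session/device context."""
    now = int(time.time() if now is None else now)
    payload = {"lid": listing_id, "sid": session_id, "did": device_id,
               "iat": now, "exp": now + TTL}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=")
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()[:32]
    return f"{body.decode()}.{sig}"

def verify_link_token(token, listing_id, session_id, device_id, now=None):
    """Reject tampered, expired, or wrong-context tokens."""
    now = int(time.time() if now is None else now)
    try:
        body, sig = token.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()[:32]
    if not hmac.compare_digest(sig, expected):
        return False  # payload or signature was modified
    payload = json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
    if payload["exp"] < now:
        return False  # expired link
    return (payload["lid"], payload["sid"], payload["did"]) == (
        listing_id, session_id, device_id)
```

Because the context is inside the signed payload, a link forwarded to another device or replayed against a different listing fails verification without any database lookup.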

Acceptance Criteria
One-Tap Verification SMS Sent After QR Scan
Given a valid listing QR scan or phone capture with a confirmed E.164 phone number When the visitor submits their number Then the system generates a signed, unique magic link bound to listingId, sessionId, and device context And an SMS send request is issued within 2 seconds of submission And the SMS body contains the listing name/address and a branded short link And the send attempt is audit-logged with correlationId and timestamp
Tap Link Deep-Links to Feedback Flow (No App/Login)
Given an unexpired magic link received via SMS on a smartphone using the default browser (Safari on iOS, Chrome on Android) When the visitor taps the link Then the browser opens the feedback flow URL without requiring app install or login And the visitor is marked Verified in the active session And any in-progress feedback state is restored And the page loads to interactive in ≤3 seconds on a typical LTE connection
Short-Lived, Single-Use Links with Graceful Expiry Handling
Given a magic link with a TTL of 10 minutes and single-use semantics When the link is opened after 10 minutes or after a prior successful use Then verification is denied and no session is marked Verified And the visitor sees a clear message: "Link expired or already used" And options to Resend Link and Use PIN are provided And the invalid attempt is audit-logged with reason=expired_or_used
Verification Bound to Listing, Session, and Device Context
Given a magic link issued for listing=L, session=S, device=D When the link is opened on a device/browser that does not match D or session S is invalid Then verification is rejected with guidance to request a new link And if listingId in the link does not map to an active listing, a 404-style error is shown And when the link is opened on device D with valid session S Then verification succeeds and is recorded with listingId, sessionId, device fingerprint, and UTC timestamp
Non-Reversible Phone Hash Attached to Submissions
Given a visitor has been verified via magic link When they submit any feedback event (overall sentiment, room-level objection, comment) Then each event record includes a phoneHash value that is consistent for that phone within the listing And the raw phone number is not stored in the event payload or database record And each event includes verificationTimestamp referencing the verification event And analytics/export surfaces show phoneHash to attribute responses to a verified visitor
International Numbers and Branded Short Link
Given a phone number from US (+1), CA (+1), UK (+44), AU (+61), or IN (+91) that can be normalized to E.164 When SMS verification is requested Then the number is normalized to E.164 and validated And the SMS gateway returns success for each test country route And the magic link uses the configured branded short domain over HTTPS (e.g., go.tourecho.com) And the short URL length is ≤30 characters total And the link resolves to the feedback flow successfully
Fallback PIN and Resend with Abuse Protection
Given the visitor cannot open the link or reports non-receipt When they select Use PIN or Resend Link Then a 6-digit one-time PIN valid for 10 minutes is generated and sent via SMS And entering the correct PIN verifies the visitor equivalently to the magic link And resend is rate-limited to 3 per 5 minutes per phone and 10 per day And PIN entry is limited to 5 failed attempts per session before lockout with clear messaging
PIN Code Fallback
"As a showing visitor, I want a backup PIN option so that I can still verify when the magic link doesn’t open."
Description

Provide a 6-digit PIN verification path when the SMS link cannot be opened or deep links are disabled. Allow code entry on the verification screen, with resend capability, cooldown timers, and lockouts after repeated failures. Seamlessly switch users between magic-link and PIN flows based on device capabilities and user choice while preserving listing/session context.
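The PIN lifecycle described above (10-minute TTL, reissue invalidates prior codes, lockout after repeated failures) can be sketched as a small state machine. This is an in-memory illustration with hypothetical names; a real deployment would persist state in a shared store:

```python
import secrets
import time

PIN_TTL = 600       # 10-minute validity window
MAX_FAILURES = 5    # lockout after 5 failed attempts per session

class PinVerifier:
    def __init__(self):
        self.pins = {}      # session_id -> (pin, expires_at)
        self.failures = {}  # session_id -> failed attempt count

    def issue(self, session_id, now=None):
        """Issue a fresh 6-digit PIN; any prior PIN for the session is replaced."""
        now = time.time() if now is None else now
        pin = f"{secrets.randbelow(10**6):06d}"
        self.pins[session_id] = (pin, now + PIN_TTL)
        return pin

    def verify(self, session_id, attempt, now=None):
        now = time.time() if now is None else now
        if self.failures.get(session_id, 0) >= MAX_FAILURES:
            return "locked_out"
        stored = self.pins.get(session_id)
        if stored is None or stored[1] < now:
            return "expired_or_used"
        if secrets.compare_digest(stored[0], attempt):
            self.pins.pop(session_id)   # single successful use
            return "verified"
        self.failures[session_id] = self.failures.get(session_id, 0) + 1
        return "invalid"
```

Storing one PIN per session means a resend implicitly invalidates the previous code, matching the "all prior PINs are invalidated" criterion; `secrets.compare_digest` avoids timing side channels on the comparison.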

Acceptance Criteria
Deep link blocked; user verifies via 6‑digit PIN
Given the device cannot open the SMS trust link and a phone number is on file for the session When the user selects "Verify with PIN" Then the system generates a unique 6-digit numeric PIN for the session, sends it via SMS, and displays a 6-slot PIN entry UI with Submit disabled until 6 digits are entered And when the correct PIN is entered within 10 minutes of issuance Then the user is verified, the session is marked trusted, and the user is advanced to the feedback flow without login
Resend with cooldown and max resend limit
- Given the user is on the PIN entry screen with a code sent, Then the "Resend code" control is disabled for 30 seconds with a visible countdown.
- When the cooldown elapses and the user requests a resend, Then a new 6-digit PIN is generated, all prior PINs are invalidated, an SMS is sent, and the cooldown resets to 30 seconds.
- And given the user has requested 3 resends within 10 minutes, When the user attempts a 4th resend within the same 10-minute window, Then the request is blocked and the UI shows the next eligible resend time.
Lockout after repeated invalid PIN attempts
Given the user is on the PIN entry screen When the user submits an incorrect PIN 5 times within 15 minutes Then verification is locked for 10 minutes for that phone number and session, further attempts are blocked, and a lockout message with countdown is displayed And when the lockout expires Then the user can request a new PIN and attempt verification again
Seamless switch between magic link and PIN with context preserved
Given the user opened the SMS trust link but chooses "Use PIN instead" or the deep link fails When the user completes PIN verification Then the listing ID and showing/session ID from the original link are preserved and associated with the verified phone identity And the user continues the feedback flow with correct listing/session context and attribution intact
International number support for PIN delivery
Given a visitor provides an E.164-formatted international phone number (non-US/CA) When a PIN is requested Then an SMS containing the 6-digit PIN is delivered within 60 seconds to that number, and the code can be used to verify within 10 minutes And if delivery fails due to carrier issues Then the UI surfaces a retry/resend option and a generic failure message without exposing PII
PIN expiration and invalidation behavior
Given a 6-digit PIN was issued for a session When the user enters the PIN after 10 minutes have elapsed Then verification fails with an "Expired code" message and the UI offers a "Resend code" action And when a new PIN is issued for the session Then all prior PINs for that session are invalid and cannot be used
International Number Support
"As an international buyer, I want my phone number to work seamlessly so that I can verify identity regardless of my carrier or country."
Description

Accept and validate domestic and international phone numbers in E.164 format with auto-country detection and manual override. Ensure Unicode-safe templates, regionalized sender IDs (long/short/alphanumeric where permitted), and multi-carrier routing to maximize deliverability across countries. Provide clear formatting hints, error messaging, and cost-aware routing for global coverage.
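E.164 normalization can be sketched with stdlib-only regex handling. Note this is deliberately simplified: it checks only the overall E.164 shape (a `+`, then up to 15 digits), whereas a production build would use a full metadata library such as `phonenumbers` for per-country length and prefix rules. All names below are illustrative:

```python
import re

# E.164 shape: "+", a non-zero first digit, 8-15 digits total.
E164 = re.compile(r"^\+[1-9]\d{7,14}$")

def normalize_e164(raw, default_country_code=None):
    """Strip formatting characters and validate against the E.164 shape.
    Returns the normalized number, or None if the input is invalid."""
    digits = re.sub(r"[\s\-().]", "", raw.strip())
    if digits.startswith("00"):                    # international dialing prefix
        digits = "+" + digits[2:]
    if not digits.startswith("+") and default_country_code:
        # Manual country override: prepend the selected country code and
        # drop any national trunk prefix (leading zeros).
        digits = f"+{default_country_code}{digits.lstrip('0')}"
    return digits if E164.fullmatch(digits) else None
```

Inputs containing letters, extensions, or doubled plus signs fall through the final pattern check and return None, matching the "Invalid characters" rejection criterion.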

Acceptance Criteria
E.164 Input Validation & Normalization
- Given a user enters a phone number with spaces, dashes, or parentheses, When the number is submitted, Then it is normalized to E.164 with a leading + and correct country code. - Given an input that does not meet the valid length or pattern for the detected/selected country, When validation runs, Then submission is blocked and an error is displayed stating "Enter a valid phone number in E.164 (e.g., +44 7123 456789)." - Given a number entered without a + prefix but a country has been selected via manual override, When validation runs, Then the number is normalized to E.164 using the selected country code. - Given an input containing letters, extensions, or multiple plus signs, When validation runs, Then the input is rejected with a specific "Invalid characters" message. - Given a valid number for any supported country, When validation runs, Then the number passes and the normalized E.164 value is stored.
Auto Country Detection with Manual Override
- Given the user has not selected a country, When they focus the phone field, Then the default country is set by precedence: explicit query param > SIM/locale > IP geolocation (configurable fallback). - Given the user starts typing a number with an international prefix (e.g., +33), When validation runs, Then the country auto-updates to match the prefix and the formatting hint updates live. - Given the user manually selects a different country, When they submit, Then validation uses the selected country and auto-detection is suppressed for that input. - Given auto-detection and manual selection disagree, When the user changes the selection, Then no previously entered digits are lost and the number is re-formatted non-destructively. - Given detection fails, When the field renders, Then the country defaults to a configurable global default without blocking input.
Unicode-Safe SMS Templates & Links
- Given a recipient name or listing address contains non-Latin characters or emoji, When the SMS body is generated, Then all characters are preserved and the message is encoded (GSM-7/UCS-2) without mojibake. - Given the message encodes as UCS-2, When cost/segments are calculated, Then the segment size is 70 characters and the estimate reflects this. - Given the SMS body contains a Trust Link URL, When sent, Then the URL renders correctly and remains clickable on iOS and Android default messengers. - Given right-to-left scripts are present, When rendered in SMS, Then punctuation and URL order remain intact (use Unicode bidi marks as needed). - Given variable substitution occurs, When rendered, Then no template exceeds provider max length before segmentation and no placeholders leak.
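The encoding and segment arithmetic above (GSM-7 at 160/153 characters per segment, UCS-2 at 70/67) can be sketched as follows. The alphabet here is the GSM 03.38 basic set only; extension characters (€, [, ], etc.) cost two septets and emoji outside the BMP occupy two UTF-16 units, both of which this simplified sketch ignores:

```python
# GSM 03.38 basic character set (extension table omitted for brevity).
GSM7 = set(
    "@\u00a3$\u00a5\u00e8\u00e9\u00f9\u00ec\u00f2\u00c7\n\u00d8\u00f8\r"
    "\u00c5\u00e5\u0394_\u03a6\u0393\u039b\u03a9\u03a0\u03a8\u03a3\u0398\u039e"
    "\u00c6\u00e6\u00df\u00c9 !\"#\u00a4%&'()*+,-./0123456789:;<=>?"
    "\u00a1ABCDEFGHIJKLMNOPQRSTUVWXYZ\u00c4\u00d6\u00d1\u00dc\u00a7"
    "\u00bfabcdefghijklmnopqrstuvwxyz\u00e4\u00f6\u00f1\u00fc\u00e0"
)

def sms_segments(body):
    """Return (encoding, segment_count) for an SMS body."""
    if all(ch in GSM7 for ch in body):
        per = 160 if len(body) <= 160 else 153   # concat header shrinks each part
        return "GSM-7", -(-len(body) // per)     # ceiling division
    per = 70 if len(body) <= 70 else 67          # UCS-2 limits
    return "UCS-2", -(-len(body) // per)
```

A single non-GSM character (a name with emoji, a non-Latin street address) flips the whole message to UCS-2, which is exactly why the cost estimate in the criteria must recompute segments per message.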
Regionalized Sender ID Selection & Compliance
- Given a target number in a country that permits alphanumeric sender IDs, When the message is queued, Then the configured alphanumeric sender ID is used. - Given a target number in a country that restricts alphanumeric sender IDs, When the message is queued, Then a compliant registered numeric long code or short code is used. - Given a country-specific registration prerequisite is missing for the chosen route, When sending, Then the send is blocked and a descriptive error is logged and surfaced to admins. - Given per-country sender ID rules are updated, When configuration is deployed, Then messages use the new rules without code changes (rules are data-driven). - Given a message is sent, When delivery reports arrive, Then the chosen sender ID is recorded in message metadata for auditability.
Multi-Carrier Routing, Failover, and DLR Capture
- Given at least two carrier vendors are configured per region, When a message is sent, Then the system selects the primary route based on historical DLR success rate and cost weightings. - Given the primary route returns a hard error or no DLR within 30 seconds, When retry policy runs, Then the message is rerouted to a secondary provider once, preserving the same internal message correlation ID. - Given a message completes, When viewing logs, Then final status is one of {Queued, Sent, Delivered, Failed, Unknown} with provider code and timestamp. - Given a carrier returns a temporary error, When retrying, Then exponential backoff with a maximum of 2 retries is applied before marking Failed. - Given N international messages sent in the last 24 hours, When calculating deliverability, Then the system reports DLR rate per country and per route with ≥95% of events processed within 10 minutes.
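The retry policy above (temporary errors rerouted with exponential backoff, at most 2 retries, hard failures never retried) can be sketched as a small driver. The function and status names are illustrative; `send_fn` stands in for whatever provider adapter is in use, and `sleep_fn` is injectable so the cadence can be tested without waiting:

```python
import time

MAX_RETRIES = 2  # per the criteria: at most 2 retries before marking Failed

def send_with_failover(message, routes, send_fn, sleep_fn=time.sleep, base_delay=1.0):
    """Try the primary route, then fail over with exponential backoff.
    send_fn(route, message) returns 'delivered', 'temp_error', or 'hard_error'.
    Returns (final_status, per-attempt log)."""
    attempts = []
    for attempt in range(MAX_RETRIES + 1):
        route = routes[min(attempt, len(routes) - 1)]  # reroute after first failure
        status = send_fn(route, message)
        attempts.append((route, status))
        if status == "delivered":
            return "Delivered", attempts
        if status == "hard_error":
            return "Failed", attempts        # e.g. invalid number: never retried
        if attempt < MAX_RETRIES:
            sleep_fn(base_delay * (2 ** attempt))   # 1s, then 2s
    return "Failed", attempts
```

Keeping the attempt log in one return value mirrors the criterion that all retries share the same internal correlation ID and that the final outcome is logged per attempt with the route used.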
Formatting Hints, Live Masking, and Error Messaging
- Given a selected or detected country, When the phone field is focused, Then a country-specific placeholder example (E.164) is displayed, e.g., "+81 70 1234 5678". - Given the user types, When characters are entered, Then live formatting/masking guides input without altering underlying raw digits and does not insert a leading + automatically. - Given validation fails, When the error is shown, Then the message is specific (too short/too long/invalid prefix) and accessible (aria-describedby) and clears once corrected. - Given a screen reader user, When the field is invalid, Then the error is announced and focus remains in the field until corrected or dismissed. - Given rate limiting or backend timeouts occur on validation, When the user retries, Then a non-blocking banner explains the issue and allows resubmission.
Cost-Aware Routing and Billing Transparency
- Given a destination country and message content, When calculating pre-send estimate, Then the system computes expected segments (GSM-7/UCS-2), per-segment cost, and records the estimated total cost. - Given multiple routes are available, When selecting a route, Then the router respects configured cost ceilings and prefers the lowest-cost route within a ±5% deliverability tolerance. - Given final provider billing is returned, When reconciling, Then the system records actual cost per message and flags variances >10% from the estimate. - Given messages exceed a configurable budget threshold for a listing, When attempting to send, Then the send is blocked with a clear reason and an admin override option. - Given a free trial tenant, When sending internationally, Then messages are restricted to a configurable allowlist of countries and the UI communicates the restriction.
Compliance and Consent Management
"As a broker-owner, I want verification to be compliant and permission-based so that my firm avoids legal risk."
Description

Display concise consent language and links to Terms and Privacy at phone capture; store explicit consent metadata (timestamp, phone, IP, listing) for auditability. Honor STOP/HELP keywords, regional quiet hours, and template/route registration requirements. Maintain a suppression list, configurable data retention, and minimal PII storage by hashing phone numbers once verified.

Acceptance Criteria
Consent Disclosure at Phone Capture
Given a visitor enters a phone number on the listing’s SMS verification screen When the phone input and primary CTA are rendered Then the UI displays concise consent text including: purpose of SMS, expected frequency (e.g., one-time verification + optional follow-ups), “Msg & data rates may apply,” and instructions for STOP and HELP And Terms of Service and Privacy Policy links are visible, tappable, and open in a new tab And the CTA remains disabled until the visitor provides explicit consent via either a required, unchecked-by-default checkbox or a CTA labeled with explicit consent (e.g., “Agree & Send SMS”) And the consent text and link targets reflect the current policy versions configured in admin And the consent block passes accessibility checks (links are focusable; color contrast ≥ 4.5:1)
Consent Metadata Capture & Auditability
Given a visitor submits their phone with explicit consent When the submission is accepted Then the system records an immutable audit event containing: ISO-8601 UTC timestamp, E.164-normalized phone (transient until hashing), public IP, listing ID, campaign/template ID (if applicable), UI surface, locale, and policy version IDs (Terms/Privacy/Consent) And the event is write-once and cannot be edited; only appended with redaction markers where applicable And the audit record is retrievable through an admin endpoint within 300 ms p95 for the last 30 days And the admin export includes a verifiable checksum for each record to ensure integrity And failed writes are retried with exponential backoff up to 5 times and surfaced to ops alerting if not persisted
Keyword Opt-Out and Help Handling
Given the system receives an inbound SMS from any verified number When the message text matches STOP, STOPALL, UNSUBSCRIBE, CANCEL, END, or QUIT (case- and punctuation-insensitive) Then the number is added to the global suppression list within 2 seconds and future sends to that number are blocked immediately And a single non-marketing confirmation reply is sent within 5 seconds including brand, confirmation of opt-out, and HELP instructions When the message text matches HELP (case- and punctuation-insensitive) Then an informational reply is sent within 5 seconds including brand, purpose, message frequency disclosure, support contact/URL, and STOP instructions And all keyword events are logged to the audit trail with timestamp, route, and message ID
Regional Quiet Hours Enforcement
Given an outbound SMS is queued to a recipient When the recipient’s local time is resolved using country code + area code (where available) with fallback to listing timezone Then the message is not sent outside the configured quiet hours window (default 08:00–21:00 local) unless environment is explicitly set to test/sandbox And messages blocked by quiet hours are queued with an earliest-send timestamp at the start of the next allowed window And the audit log records the quiet-hours decision, computed local time, window configuration, and next-send time And organization-level configuration supports per-country overrides and holiday blackout dates
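The deferral logic above (send now if inside the 08:00–21:00 local window, otherwise queue for the start of the next allowed window) can be sketched directly in terms of local time. Timezone resolution from country/area code is out of scope here; the function assumes the caller already computed the recipient's local clock:

```python
from datetime import datetime, time as dtime, timedelta

SEND_WINDOW_START = dtime(8, 0)   # default allowed window 08:00-21:00 local
SEND_WINDOW_END = dtime(21, 0)

def next_send_time(local_now):
    """Return local_now if inside the allowed window, otherwise the earliest
    timestamp at which sending becomes allowed again."""
    if SEND_WINDOW_START <= local_now.time() < SEND_WINDOW_END:
        return local_now
    if local_now.time() < SEND_WINDOW_START:
        # Too early: wait for 08:00 the same day.
        return local_now.replace(hour=8, minute=0, second=0, microsecond=0)
    # Too late: wait for 08:00 the next day.
    nxt = local_now + timedelta(days=1)
    return nxt.replace(hour=8, minute=0, second=0, microsecond=0)
```

The computed timestamp is exactly the "earliest-send timestamp" the criteria say should be attached to quiet-hours-blocked messages and recorded in the audit log.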
Suppression List Management
Given a number is on the suppression list via opt-out, hard bounce, manual admin action, or compliance rule When any workflow attempts to send an SMS to that number Then the send is blocked deterministically using the stored hash of the phone number and the attempt is logged with reason and source And suppression can be scoped globally and per-listing, with the global scope taking precedence And admins can export suppression entries with created-at, source, and scope (phone values exported as hashes only) And re-enablement requires explicit new consent captured after the opt-out event; re-enablement actions are fully audited
Template and Route Registration Compliance
Given an outbound SMS is prepared for delivery When the destination is in a regulated market (e.g., US 10DLC, CA, IN DLT, UK, AU) Then the selected route and sender profile are validated for that destination And only pre-approved templates are used where required, with template/campaign IDs attached to the message And if no compliant route/template exists, the send is blocked before provider submission with a surfaced, actionable error code and no partial delivery And the audit record includes destination country, route ID, sender ID, template ID, and registration status And periodic health checks verify registration status daily and alert on expirations or rejections
Data Retention and PII Minimization
Given a phone number has been verified for SMS When persistence occurs post-verification Then the raw phone is replaced with a salted+peppered one-way hash in primary data stores, retaining the minimal necessary tokens for delivery provider reconciliation outside of our DB And suppression lists and audit logs store only the hash, never the raw phone And admin-configurable retention policies purge consent/audit records after the configured duration while preserving aggregated, non-identifying metrics And a daily purge job runs at 02:00 UTC to enforce retention across primary storage and backups; completion is logged with counts And a right-to-erasure job can be triggered per number (by raw input at request time, hashed internally) and completes across systems within 30 days with a signed deletion report
Secure Tokenization and Replay Protection
"As a listing agent, I want each response to be verifiably tied to a real person so that feedback is trustworthy and attributable."
Description

Issue cryptographically signed, single-use tokens bound to phone number, listing, and session, with short expiration (e.g., 10 minutes). Invalidate tokens on first use, detect and block replays, and throttle send/attempt rates per number/device/IP. Maintain immutable audit logs and attach verified identity markers to each feedback submission to ensure attribution without exposing raw phone numbers.
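The single-use requirement above comes down to an atomic check-and-set on the token's `jti`. A minimal sketch, using an in-process set guarded by a lock (a production system would use a shared store with an atomic primitive such as Redis `SETNX` so that replays are blocked across instances); the class name and return strings are illustrative:

```python
import threading

class JtiRegistry:
    """Tracks redeemed token IDs so any second redemption is rejected as a replay."""

    def __init__(self):
        self._used = set()
        self._lock = threading.Lock()

    def redeem(self, jti):
        # The lock makes check-and-mark atomic: when parallel requests race
        # on the same jti, exactly one wins and the rest see a replay.
        with self._lock:
            if jti in self._used:
                return "409 TokenReplayed"
            self._used.add(jti)
            return "200 Verified"
```

This directly satisfies the concurrency criterion below: however many requests present the same token simultaneously, at most one redemption succeeds.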

Acceptance Criteria
Signed Token Issuance and Verification (Phone+Listing+Session)
- Given a domestic or international phone number normalized to E.164 and a specific listing and session, When the system issues a verification token, Then the token payload is bound to phone_hash, listing_id, session_id and includes iat, exp (<=10 minutes), and a unique jti. And the token is cryptographically signed and verifiable with the platform public key. And any modification of payload or signature causes verification to fail.
- Given a presented token and matching phone, listing, and session prior to exp, When the verification endpoint is called, Then the response is 200 Verified and returns an opaque identity_id tied to the phone+listing.
- Given a presented token where phone, listing, or session do not match the bound values, When the verification endpoint is called, Then the response is 401 InvalidToken and the event is logged.
Single-Use Invalidation and Replay Blocking
- Given a valid unused token, When it is successfully redeemed once, Then the token jti is immediately marked as used and cannot be redeemed again.
- Given the same token jti is presented again from any IP/device/user agent, When the verification endpoint is called, Then the response is 409 TokenReplayed and no identity is issued. And a replay_detected event is logged with jti, source_ip, device_id.
- Given parallel requests attempt to redeem the same token, When processed concurrently, Then at most one request succeeds and all others return 409 TokenReplayed.
Expiration Enforcement with Clock Skew Tolerance
- Given a token with exp set to 10 minutes after iat, When it is redeemed at server time t <= exp + 30 seconds (the server-side clock-skew allowance), Then the token is accepted; redemption after that window is rejected.
- Given a token redeemed after its expiration window, When the verification endpoint is called, Then the response is 401 TokenExpired with error code token_expired and includes Retry-After guidance. And the attempt is logged as expired_token.
- Given a token redeemed exactly at the exp boundary, When verified on servers with up to ±30s clock skew, Then behavior is deterministic: the token is accepted if and only if the verifying server's time is within the skew-tolerant window.
Send Rate Throttling by Number, Device, and IP
- Given repeated requests to send SMS trust links, When counts exceed any threshold (3 per phone number per 15 minutes, 5 per device fingerprint per hour, or 20 per IP per hour), Then the send endpoint returns 429 TooManyRequests with error codes indicating the tripped limit and a Retry-After header. And throttle events are logged with phone_hash, device_id, ip, window, and threshold.
- Given requests stay within thresholds, When issuing links across domestic and international numbers, Then all sends succeed with 2xx and are logged without throttle.
Verification Attempt Throttling and Lockout
- Given failed token verification attempts for the same phone/listing/device, When failures reach 5 attempts within 10 minutes, Then further attempts are blocked with 429 TooManyRequests for 15 minutes (lockout window) and a Retry-After header is returned.
- Given a successful verification occurs before reaching the threshold, When the attempt is verified, Then failure counters for that phone/listing/device are reset.
- Given distributed sources (different IPs) attempt verifications for the same phone/listing, When failures aggregate to the threshold, Then throttling still applies based on phone_hash and device_id.
Immutable Audit Logging for Token Lifecycle
- Given any token lifecycle event (issued, verified_success, verified_fail, token_expired, token_replayed, throttled), When the event occurs, Then an append-only audit record is written containing jti, phone_hash, listing_id, session_id, event_type, timestamp, source_ip, device_id, and outcome.
- Given an attempt to modify or delete an existing audit record via admin or service APIs, When the operation is executed, Then it is rejected with 403 Forbidden and no changes are made.
- Given daily operation, When the daily audit digest is produced, Then a cryptographic hash of the day's ordered records is generated and stored; recalculating the digest from raw records matches the stored hash.
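The daily digest check can be sketched as a hash over the day's records in order. Because the digest covers both content and ordering, any edit, deletion, or reorder of the underlying records changes the recomputed value and fails the comparison against the stored digest (function name is illustrative):

```python
import hashlib
import json

def daily_digest(records):
    """Hash the day's ordered audit records into a single verifiable digest.
    Recomputing from the raw records must reproduce the stored value."""
    h = hashlib.sha256()
    for rec in records:  # order matters: reordering changes the digest
        # sort_keys makes the serialization canonical per record
        h.update(json.dumps(rec, sort_keys=True).encode())
    return h.hexdigest()
```

A verifier job would recompute `daily_digest` over the raw records each day and alert on any mismatch with the stored digest.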
Verified Identity Marker Attached Without Exposing Phone Numbers
- Given a feedback submission completed after successful token verification (SMS link or fallback PIN), When the submission is stored and retrieved via API/UI/exports, Then it includes an opaque identity_id and phone_hash and omits raw phone numbers; any displayed phone is masked (e.g., +1••• ••34).
- Given a database snapshot of submissions and analytics tables, When searched for E.164 patterns, Then no raw phone numbers are present outside a segregated, encrypted PII store not joined to submissions.
- Given agents view submission attribution, When viewing feedback details, Then the UI shows a Verified badge and identity marker without revealing the raw phone number.
Deliverability and Retry Orchestration
"As an operations manager, I want high SMS delivery rates and automatic retries so that visitors consistently complete verification."
Description

Track delivery receipts and automatically retry via alternate routes or sender types when failures occur. Optimize content length for segment limits, apply branded short links, and adapt retry cadence based on carrier feedback. Provide alerts for systemic delivery issues and enforce per-number rate limits to prevent blocking while maximizing verification completion.

Acceptance Criteria
Delivery receipts are tracked and normalized
- Given an outbound verification SMS is queued, When the provider acknowledges the send, Then the message status is recorded as "sent" within 2 seconds.
- Given a delivery receipt is received, When the receipt code maps to delivered/failed/unknown, Then the normalized status is updated accordingly within 5 seconds and audit-logged with provider code, timestamp, and attempt ID.
- Given no receipt is received within 5 minutes, When the message TTL expires, Then the status is set to "timeout" and the message is eligible for retry per policy.
- Given a message reaches a final state (delivered, failed, timeout), Then it is visible in the verification detail with final state, attempt count, and route used.
Automatic retry via alternate sender and route
- Given an initial attempt fails with temporary carrier/provider codes (e.g., 30003, 30005, 30007, 30008), When retry policy triggers, Then retry up to 2 additional times within 10 minutes using a different sender type (long code → toll-free/short code) and/or alternate aggregator route.
- Given an attempt fails with hard-fail codes (e.g., invalid number/unknown subscriber), Then no retries are performed and the flow is marked final-failed.
- Given a retry is scheduled, When per-number rate limits would be exceeded, Then the retry is deferred to the earliest safe window.
- Given the final retry completes, Then the flow is closed and outcome logged with per-attempt status and sender/route used.
Message length and segmentation optimization
- Given a verification SMS for GSM-7 content, When dynamic text and compliance footer are included, Then total payload including link does not exceed 160 chars; otherwise dynamic copy is compacted and a branded short link is used to meet the limit.
- Given UCS-2 encoding is required, When the assembled payload exceeds 70 chars, Then compact text and use a branded short link to target ≤70; if still >70, send no more than 2 segments and record segmentation count.
- Given a message is sent, Then the billed segment count equals the calculated segments and is stored for analytics.
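The 160/70-character limits above come from standard SMS encoding rules: GSM-7 allows 160 characters in a single message (153 per segment when concatenated), UCS-2 allows 70 (67 per segment). A simplified calculator is sketched below; note it treats every GSM-7 character as one septet, whereas the real alphabet's extension characters (€, brackets, braces) count double, so a production implementation needs the full character tables.

```python
import math

# Subset of the GSM 03.38 basic character set (extension chars omitted).
GSM7_BASIC = set(
    "@£$¥èéùìòÇ\nØø\rÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
    "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑܧ¿abcdefghijklmnopqrstuvwxyzäöñüà"
)

def segments(text: str) -> tuple:
    """Return (encoding, segment_count) using the standard SMS limits:
    GSM-7: 160 chars single / 153 per concatenated segment;
    UCS-2: 70 single / 67 per segment."""
    if all(c in GSM7_BASIC for c in text):
        enc, single, per_part = "GSM-7", 160, 153
    else:
        enc, single, per_part = "UCS-2", 70, 67
    n = len(text)
    return enc, 1 if n <= single else math.ceil(n / per_part)
```

Storing the result of this calculation per message is what lets the billed segment count be reconciled against analytics.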
Branded short links are applied and attributable
- Given a verification URL is generated, When shortened, Then it uses the configured branded domain (e.g., go.tourecho.com) over HTTPS, with path length ≤ 15 chars.
- Given a recipient taps the link, When the verification page loads, Then the click is attributed to the originating phone number and attempt ID, and the attempt is marked "clicked" within 2 seconds.
- Given a short link is older than the configured TTL (e.g., 24 hours), When clicked, Then it resolves to an expired page and offers a fresh PIN fallback.
- Given UTM/route attribution parameters are appended, Then they are preserved end-to-end and visible in analytics.
Adaptive retry cadence based on carrier feedback
- Given a failure is due to rate limiting or carrier filtering, When detected via provider error codes or delivery feedback, Then the next retry uses exponential backoff (e.g., 1m, 4m) and switches sender type when available.
- Given failures persist and the backoff reaches the maximum window (e.g., 15 minutes) without delivery, Then the system sends a fallback PIN via voice or email if available and stops SMS retries.
- Given a retry delay is scheduled, Then the send respects quiet-hour rules for the recipient's local timezone.
Proactive alerts for systemic delivery issues
- Given the rolling 5-minute failure rate for a carrier–country–sender tuple exceeds 15% with ≥50 attempts, When the threshold is met, Then an incident alert is sent to Slack and email within 2 minutes and an in-app banner is shown.
- Given an incident is opened, When a safe alternate route is available, Then the system auto-reroutes subsequent attempts for the affected tuple until the failure rate stays below 5% for 15 consecutive minutes.
- Given an incident is resolved, Then a summary is logged including timeframe, affected tuples, reroute counts, and impact on verification completion rate.
Per-number rate limits and abuse prevention
- Given a phone number requests verification, When SMS sends would exceed 3 within 15 minutes or 6 within 24 hours, Then further sends are blocked, a cooldown message is returned, and a PIN fallback is offered.
- Given a successful verification occurs, When additional attempts are pending, Then they are canceled and counters reset.
- Given rate limiting is enforced, Then no more than 1 message per second is sent to the same carrier–country–sender tuple, and bursts are spread across 10-second windows to avoid carrier blocking.
Verification Analytics and Admin Controls
"As a broker-owner, I want analytics and controls so that I can optimize completion rates and enforce policy across my team."
Description

Offer dashboards for scan-to-verify and verify-to-submit conversion, time-to-verify, carrier breakdown, and international distribution, with filters by listing, agent, team, and date. Enable CSV export and role-based access. Provide admin controls for link TTL, PIN length, max attempts, locale-specific message templates, and per-listing enablement to tune performance and policy.

Acceptance Criteria
Dashboard Conversion Metrics Visibility and Accuracy
- Given a selected date range and scope, When the verification analytics dashboard loads, Then it displays Unique Scans, Verifications, and Submissions counts that exactly match backend aggregates for that scope.
- Given counts N (Unique Scans), M (Verifications), and S (Submissions), When metrics are calculated, Then Scan-to-Verify % = round((M/N)*100, 1) and Verify-to-Submit % = round((S/M)*100, 1); division-by-zero yields 0.0%.
- Given there is no data for the selected scope, When the dashboard loads, Then all metric cards show 0 with an empty state and no errors.
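The conversion formulas above reduce to one guarded helper (names are illustrative):

```python
def pct(numer: int, denom: int) -> float:
    """Conversion percentage rounded to one decimal; a zero denominator
    yields 0.0 per the acceptance criteria rather than raising."""
    return round(numer / denom * 100, 1) if denom else 0.0

# Example: N=120 unique scans, M=42 verifications, S=30 submissions
scan_to_verify = pct(42, 120)    # M/N
verify_to_submit = pct(30, 42)   # S/M
```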
Time-to-Verify Distribution and Percentiles
- Given verified events exist in the selected scope, When time-to-verify is computed as seconds between first scan and verification event per visitor, Then median, p90, and p95 are displayed and match backend calculations within ±1 second per metric.
- Given filters change, When the dashboard updates, Then median/p90/p95 recompute for the filtered dataset and the chart/table reflect the updated values.
- Given no verified events in scope, When the dashboard loads, Then percentile metrics display "N/A" with an empty state and no errors.
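The ±1-second tolerance exists because percentile definitions differ between implementations; the sketch below uses the nearest-rank method, one of several common choices, so dashboard and backend must agree on the same definition. Function names are illustrative.

```python
import math

def percentile(sorted_vals, p):
    """Nearest-rank percentile on an already-sorted list.

    Returns None on empty input so the dashboard can render "N/A".
    """
    if not sorted_vals:
        return None
    k = max(0, math.ceil(p / 100 * len(sorted_vals)) - 1)
    return sorted_vals[k]

def ttv_stats(seconds):
    """Median/p90/p95 of time-to-verify values (seconds per visitor)."""
    vals = sorted(seconds)
    return {"median": percentile(vals, 50),
            "p90": percentile(vals, 90),
            "p95": percentile(vals, 95)}
```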
Carrier and Country Breakdown Accuracy
- Given verifications with carrier metadata, When the Carrier Breakdown is displayed, Then the sum of percentages across carriers (including "Unknown") equals 100% ±0.1.
- Given international numbers, When the Country Distribution is displayed, Then counts are grouped by E.164 country code and match backend aggregates for the selected scope.
- Given numbers without carrier metadata, When rendering breakdown, Then they are categorized as "Unknown" and included in totals and percentages.
Analytics Filters by Listing, Agent, Team, and Date
- Given multi-select filters for Listing, Agent, Team, and a Date Range, When the user applies any combination, Then all widgets and tables reflect the intersection of selected filters.
- Given a user with scoped permissions, When opening filter pickers, Then only listings/agents/teams within the user's scope are selectable and searchable.
- Given default settings, When the page loads initially, Then the Date Range defaults to Last 30 Days and no entity filters are applied.
CSV Export of Filtered Analytics
- Given any filter combination, When the user clicks Export CSV, Then a UTF-8 CSV downloads within 5 seconds containing exactly the dataset represented by the current view.
- Then the CSV includes headers: date_start, date_end, listing_id, listing_name, agent_id, team_id, unique_scans, verifications, submissions, scan_to_verify_pct, verify_to_submit_pct, median_ttv_sec, p90_ttv_sec, p95_ttv_sec, top_carrier, top_country.
- Then numeric values match the visible metrics within rounding rules and the row count equals the number of rows shown in the on-screen table for the current filters.
- Given the user lacks export permission, When clicking Export CSV, Then the action is disabled in UI or returns HTTP 403 and no file is downloaded.
Role-Based Access Controls for Analytics and Admin
- Given roles Admin, Team Lead, Agent, and Auditor, When accessing Analytics, Then Admin sees all organization data; Team Lead sees only their team's; Agent sees only their own listings; Auditor sees all analytics read-only.
- Given a user attempts to access data outside their scope, When making UI or API requests, Then a 403 is returned and restricted controls are hidden/disabled in the UI.
- Given Admin role, When accessing Admin Controls, Then Admin can edit org defaults and per-listing overrides; Team Lead can edit per-listing overrides for their team; Agent and Auditor cannot edit settings.
Admin Controls: Policy Parameters, Templates, and Per-Listing Overrides
- Given Admin permissions, When setting Link TTL, Then only values between 5 and 30 minutes are accepted; out-of-range values are blocked with inline validation.
- Given Admin permissions, When setting PIN length, Then only 4–8 digits are accepted and applied to new verification attempts; existing sessions are unaffected.
- Given Admin permissions, When setting Max Attempts, Then only 1–5 attempts are allowed; upon exceeding the limit, the user is blocked and the event is logged to analytics.
- Given locale-specific SMS templates, When saving a template per locale, Then required placeholders {verify_link or pin_code}, {listing_address}, and {agent_name} must be present; preview renders correctly; missing placeholders prevent save.
- Given a per-listing enablement toggle, When disabled for a listing, Then SMS link sending is suppressed, QR flow falls back to PIN-only with a localized message, and analytics tag attempts as link_disabled.
- Given a policy change is saved, When the backend confirms, Then the new policy takes effect for new sessions within 2 minutes and appears in an audit log with actor, timestamp, old_value, and new_value.

Proof Seal

Cryptographically signs each QR submission with time, listing, and verified identity, making edits or impersonation obvious. Produces audit-friendly receipts and watermarked exports, giving brokerages a defensible record for compliance reviews and dispute resolution.

Requirements

Tamper-Proof Submission Signing
"As a broker-owner, I want each QR submission to be cryptographically signed with its listing and timestamp so that any edits or impersonation are detectable and provably invalid."
Description

Implement asymmetric cryptographic signing for every QR submission using platform-managed private keys (KMS/HSM). The signed payload must include: submission_id, listing_id, ISO-8601 UTC timestamp (NTP-synchronized), identity_fingerprint, and client metadata (app version, device type). Use a canonical JSON serialization and JWS envelope with kid for key discovery. Persist the signature alongside the submission and expose a verification routine returning validity, payload digest, and failure reasons. Enforce nonces/idempotency to prevent replay, and tolerate bounded clock skew. All signatures must be verifiable offline with published public keys.

Acceptance Criteria
End-to-end JWS signing via KMS/HSM
- Given a new QR submission with submission_id, listing_id, timestamp_utc, identity_fingerprint, and client_metadata, When the platform persists the submission, Then the platform MUST compute a JCS (RFC 8785) canonical JSON payload and sign it using a non-exportable asymmetric private key stored in KMS/HSM.
- And the signature MUST be encoded as a JWS (compact serialization) with protected header containing alg and kid.
- And the kid MUST resolve to a currently published public key in the platform JWKS.
- And the generated JWS MUST verify successfully using the corresponding public key.
- And the signature (JWS string) MUST be stored alongside the submission record and retrievable by submission_id.
Canonical payload with required fields
- Given a QR submission is prepared for signing, When the payload is serialized, Then the payload MUST include exactly these top-level fields: submission_id (UUID), listing_id (UUID), timestamp_utc (ISO-8601 UTC with Z, millisecond precision), identity_fingerprint (64-hex SHA-256), client_metadata (object with app_version and device_type).
- And client_metadata.app_version MUST be a valid SemVer string (e.g., 1.2.3).
- And client_metadata.device_type MUST be one of [ios, android, web].
- And the JSON serialization MUST follow RFC 8785 (JCS): UTF-8, lexicographically sorted keys, no insignificant whitespace, normalized numbers/strings.
- And the SHA-256 digest of the canonical JSON MUST be stable and identical across at least two independent implementations (e.g., server and a reference verifier) for the same logical payload.
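The canonicalization-then-digest step can be sketched with the standard library. Note this is an approximation of RFC 8785 sufficient for payloads of strings, integers, and nested objects (sorted keys, compact separators, UTF-8); full JCS also mandates specific number and string normalization rules that a compliant implementation must apply.

```python
import hashlib
import json

def canonical_json(payload: dict) -> bytes:
    # Approximates JCS for simple payloads: lexicographically sorted keys,
    # no insignificant whitespace, UTF-8 bytes, non-ASCII left unescaped.
    return json.dumps(payload, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def payload_digest(payload: dict) -> str:
    """Hex SHA-256 of the canonical JSON; stable across key orderings."""
    return hashlib.sha256(canonical_json(payload)).hexdigest()
```

The stability requirement in the last criterion is exactly what this buys: two implementations serializing the same logical payload reach the same bytes, hence the same digest and a verifiable signature.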
Verification routine API contract and failure reasons
- Given a stored submission and its JWS signature, When a caller invokes the verification routine with the JWS (and optionally the raw payload), Then the routine MUST return an object containing: validity (boolean), digest_sha256 (hex), failure_reasons (array of enums), header.alg, header.kid.
- And on success, validity MUST be true and failure_reasons MUST be empty, and digest_sha256 MUST equal the SHA-256 of the canonical payload.
- And on failure, validity MUST be false and failure_reasons MUST include one or more of [BAD_SIGNATURE, UNKNOWN_KEY, PAYLOAD_MISMATCH, MALFORMED_JWS, CLOCK_SKEW_EXCEEDED, REPLAY_DETECTED].
- And if a raw payload is provided and differs from the canonicalized signed payload, failure_reasons MUST include PAYLOAD_MISMATCH.
- And if the kid is not found in published keys, failure_reasons MUST include UNKNOWN_KEY.
- And the routine MUST respond with HTTP 200 and a structured body for both success and failure cases.
Nonce-based idempotency and replay prevention
- Given the client supplies a cryptographically random nonce (min 128 bits) with each submission, When two identical submissions (same listing_id, identity_fingerprint, and nonce) are received within a 24-hour TTL window, Then the platform MUST treat the request idempotently and return the original submission_id and signature without creating a duplicate record.
- And when a previously accepted JWS (same signature) is presented again for verification, Then the verification routine MUST return validity=false with failure_reasons including REPLAY_DETECTED.
- And nonces MUST be unique per (listing_id, identity_fingerprint) within the TTL and rejected with HTTP 409 on conflict.
- And nonce TTL MUST be configurable with a default of 24 hours and enforced server-side.
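The idempotent-accept path above can be sketched with a TTL-keyed store on (listing_id, identity_fingerprint, nonce). This illustration covers only the idempotent-replay case; the 409-on-conflict path (same nonce, different payload) and durable storage are omitted, and all names are hypothetical.

```python
import time
import uuid

NONCE_TTL_SEC = 24 * 3600   # configurable, default 24h per the criteria

# (listing_id, identity_fingerprint, nonce) -> (submission_id, expires_at)
_seen = {}

def accept_submission(listing_id, fingerprint, nonce, now=None):
    """Return (submission_id, created). An identical submission replayed
    within the TTL gets the original submission_id, not a duplicate."""
    now = time.time() if now is None else now
    key = (listing_id, fingerprint, nonce)
    hit = _seen.get(key)
    if hit and hit[1] > now:
        return hit[0], False          # idempotent replay: reuse the record
    sub_id = str(uuid.uuid4())
    _seen[key] = (sub_id, now + NONCE_TTL_SEC)
    return sub_id, True
```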
Timestamp integrity and clock skew tolerance
- Given the platform time is synchronized via NTP to a trusted time source, When a submission is signed, Then timestamp_utc MUST be generated server-side in ISO-8601 UTC format with a trailing Z and millisecond precision (e.g., 2025-08-29T14:05:23.123Z).
- And during verification, signatures MUST be accepted if the timestamp_utc is within ±120 seconds of verifier time.
- And if the timestamp_utc deviates beyond the allowed window, validity MUST be false and failure_reasons MUST include CLOCK_SKEW_EXCEEDED.
- And the allowed skew window (in seconds) MUST be configurable with a default of 120 seconds.
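The skew check reduces to parsing the millisecond-precision timestamp and comparing it against verifier time; a minimal sketch (function name illustrative, window hard-coded where the spec makes it configurable):

```python
from datetime import datetime, timezone

SKEW_SEC = 120  # configurable; 120-second default per the criteria

def within_skew(timestamp_utc: str, verifier_now: datetime) -> bool:
    """True if the signed timestamp is within ±SKEW_SEC of verifier time.

    Expects ISO-8601 UTC with trailing Z and millisecond precision,
    e.g. 2025-08-29T14:05:23.123Z; a False result maps to the
    CLOCK_SKEW_EXCEEDED failure reason.
    """
    ts = datetime.strptime(timestamp_utc, "%Y-%m-%dT%H:%M:%S.%fZ")
    ts = ts.replace(tzinfo=timezone.utc)
    return abs((verifier_now - ts).total_seconds()) <= SKEW_SEC
```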
Offline verification with published public keys (JWKS)
- Given the platform publishes public keys at a stable HTTPS JWKS endpoint and a downloadable static JWKS file with Cache-Control max-age ≥ 24h, When a verifier has previously downloaded the JWKS and network access is unavailable, Then the verifier MUST be able to validate the JWS offline using the cached public keys matched by kid.
- And key rotation MUST preserve retired public keys for at least 90 days after last use so existing signatures remain verifiable.
- And each JWKS key entry MUST include kid, kty, crv or alg as applicable, and use algorithms consistent with the JWS header.
- And a signature created before rotation MUST continue to verify offline using the corresponding cached kid.
Persistence and retrieval of signature alongside submission
- Given a submission has been stored, When the submission is retrieved via API by submission_id, Then the response MUST include the original signed canonical payload, the JWS signature string, and the JWS protected header (alg, kid).
- And the stored JWS string MUST exactly match the one returned at creation time.
- And deleting or updating non-cryptographic submission fields MUST NOT alter the original signed payload or signature record.
- And exports that include the submission MUST embed the JWS and digest so third parties can verify without accessing the platform.
Verified Identity Binding
"As a listing agent, I want submitter identities verified and bound to their feedback so that I can trust the source and resolve disputes confidently."
Description

Provide multi-path identity verification and bind the verified identity to the signed payload. Supported paths: (a) buyer agents via OAuth to supported MLS/REALTOR identity providers or license lookup, (b) consumers via SMS OTP with device binding and optional eIDV. Generate an identity token with an assurance level, hash PII to create an identity_fingerprint (salted) stored in the payload, and capture consent. Surface verification status in UI and receipts. Prevent anonymous submissions where brokerage policy requires verification and handle fallback flows when identity providers are unavailable.

Acceptance Criteria
Buyer Agent Verification via MLS OAuth
- Given a buyer agent selects "Agent" after scanning the QR for a listing, When the agent authenticates via a supported MLS/REALTOR OAuth provider and grants profile and license scopes, Then the system issues an identity_token with assurance_level "high" and method "mls_oauth" mapped to the provider and user.
- And the identity_token_id, listing_id, timestamp, and identity_fingerprint are bound into the signed submission payload.
- And the identity_fingerprint is generated by hashing salted PII (name, license_number, state) and stored only as a hash in the payload.
- And consent text is displayed and explicit consent is captured with timestamp before submission is enabled.
- And the UI and receipt display "Verified: High (MLS OAuth - {provider_name})" for the submission.
Consumer Verification via SMS OTP with Device Binding and Optional eIDV
- Given a consumer selects "Consumer" after scanning the QR for a listing, When the consumer enters a mobile number and receives a one-time code, And the consumer submits the correct OTP within 5 minutes and within 5 attempts, Then the device is bound via device fingerprint; assurance_level is set to "medium" with method "sms_otp_device".
- And if eIDV is successfully completed in-session, the assurance_level is upgraded to "high" with method "sms_otp_eidv".
- And if device binding is unavailable but OTP succeeds without eIDV, assurance_level is "low" with method "sms_otp".
- And the identity_fingerprint is generated by hashing salted PII (phone) and stored only as a hash in the payload.
- And consent is captured before submission, and the signed payload binds identity_token_id, listing_id, timestamp, and identity_fingerprint.
- And the UI and receipt display the resulting verification status and method.
License Lookup Verification for Agents
- Given an agent selects "License lookup" because MLS OAuth is unavailable or not supported for their market, When the agent submits full name, state, and license number, And the system matches an active license to the agent via an authoritative registry within 10 seconds, Then the system issues an identity_token with assurance_level "medium" and method "license_lookup".
- And the identity_fingerprint is generated by hashing salted PII (name, license_number, state) and stored only as a hash in the payload.
- And the submission is blocked with an error if no active match is found or data is inconsistent.
- And the UI and receipt display "Verified: Medium (License Lookup)" for successful matches.
Identity Provider Outage Fallback and Policy Enforcement
- Given a brokerage has a policy "verification_required=true" for the listing, When all configured identity providers are unavailable or verification attempts fail, Then submission is prevented, the UI displays a blocking message describing the issue, and retry options are provided.
- And no anonymous submission is accepted while the policy is in effect.
- And when "verification_required=false", the user may submit as "Unverified" and the UI and receipt clearly show "Unverified" status.
- And when a primary provider (e.g., MLS OAuth) is unavailable, the system offers supported alternatives (license lookup for agents; SMS OTP/eIDV for consumers) without losing entered data.
Consent Capture and Audit Trail
- Given a user is on the verification step prior to submitting feedback, When the consent text describing data use, retention, and identity binding is displayed, Then the submit action remains disabled until the user affirmatively accepts the consent.
- And the consent acceptance event (consent_text_hash, timestamp, locale) is embedded in the signed payload and retained for audit in the receipt.
- And if the user declines consent, the submission is canceled and no PII is persisted beyond rate-limit telemetry.
Verification Status Surfacing in UI, Receipts, and Exports
- Given a submission with a verification outcome, When viewing the submission in the app, receipt, or watermarked export, Then the verification badge shows "Verified" or "Unverified" with assurance_level and method.
- And receipts include provider or method, identity_token_id (truncated), consent_text_hash, and signature verification status.
- And watermarked exports include a visible verification watermark and a machine-readable verification_status field.
Signature Binding Integrity for Identity Fields
- Given a stored submission with a valid signature over listing_id, timestamp, identity_token_id, and identity_fingerprint, When any of these bound fields are altered or the identity_token is replaced, Then signature verification fails and the UI and receipts indicate "Signature invalid" with no ability to edit to a valid state without re-verification.
- And new submissions always generate a fresh signature over the current identity fields; reuse of a previous signature for a different identity is rejected.
Audit Receipt Generation
"As a compliance officer, I want audit-ready receipts for every submission so that I can satisfy reviews without manual reconstruction."
Description

Generate human-readable and machine-verifiable receipts for each submission. Receipts include canonical payload, signature, digest, verification result, identity assurance level, timestamp, listing details, and a verification QR linking to the verifier. Support PDF and JSON formats, localized timezones, and secure delivery (download link, email with expiring URL). Store immutable copies and allow batch receipt export per listing or date range.

Acceptance Criteria
Single Submission Receipt Completeness and Verifiability
- Given a verified QR submission exists for a listing, When a receipt is generated for that submission, Then the receipt includes fields: receipt_id, listing_id, listing_address, timestamp, identity_reference, identity_assurance_level, canonical_payload, digest, signature, verification_result, verification_qr_url.
- And the digest equals the SHA-256 hash of canonical_payload (hex-encoded).
- And the signature verifies the digest against TourEcho's published public key.
- And verification_result accurately reflects the signature validation outcome (Pass or Fail).
- And verification_qr_url is an HTTPS link to the public verifier endpoint for this receipt.
PDF Receipt Rendering with Watermark and Verification QR
- Given an agent downloads the receipt as PDF, When the PDF is opened, Then all mandatory receipt fields are presented with human-readable labels.
- And each page contains a Proof Seal watermark displaying the verification_result.
- And a scannable QR code resolves to the public verifier URL for this receipt.
- And the QR code scan displays the same verification_result and receipt_id.
- And the PDF text is selectable/searchable (not image-only).
JSON Receipt Machine Verification via Public Verifier
- Given the JSON receipt is obtained, When the JSON is submitted to the public verifier endpoint or SDK, Then the verifier returns status=valid when the signature and digest match the canonical_payload.
- And the JSON conforms to the published schema (includes receipt_id and schema_version).
- And altering any field inside canonical_payload causes the verifier to return status=invalid and verification_result=Fail.
Timezone Localization for Timestamps
- Given the organization or viewer has a timezone preference and the listing has a timezone, When generating human-readable views (PDF and email previews), Then timestamps are displayed in the selected timezone with zone abbreviation and UTC offset.
- And if no user preference exists, the listing's timezone is used.
- And the JSON receipt includes timestamps in ISO 8601 UTC (Z) and a local_timezone field with the IANA zone used for display.
Secure Delivery: Download Link and Expiring Email URL
- Given a receipt is available for delivery, When the agent requests a direct download, Then the system issues a signed, time-bound HTTPS URL that expires per configured TTL and returns 410/expired after expiry.
- And access via the URL is logged with actor (if known), IP, timestamp, and format.
- When the agent sends the receipt via email, Then the recipient receives an email containing a signed, expiring URL that delivers the requested format before expiry and denies access after expiry.
- And all delivery URLs are unguessable (≥128-bit entropy) and can be manually revoked.
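A signed, time-bound URL of the kind described above is typically an HMAC over the resource ID, expiry, and a random token. A minimal stdlib sketch follows; the domain, parameter names, and in-process key are all illustrative (a real deployment would use a managed key and a revocation list).

```python
import hashlib
import hmac
import secrets
import time

SECRET = secrets.token_bytes(32)  # illustrative server-side signing key

def sign(receipt_id: str, exp: int, token: str) -> str:
    # HMAC binds the receipt, expiry, and random token together.
    msg = f"{receipt_id}:{exp}:{token}".encode("utf-8")
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def signed_url(receipt_id: str, ttl_sec: int, now=None) -> str:
    now = int(time.time() if now is None else now)
    exp = now + ttl_sec
    token = secrets.token_urlsafe(16)  # ≥128-bit unguessable component
    sig = sign(receipt_id, exp, token)
    return (f"https://example.invalid/receipts/{receipt_id}"
            f"?exp={exp}&t={token}&sig={sig}")

def check_url(receipt_id: str, exp: int, token: str, sig: str, now=None) -> int:
    """HTTP-style outcome: 200 ok, 410 expired, 403 bad signature."""
    if not hmac.compare_digest(sign(receipt_id, exp, token), sig):
        return 403
    if (time.time() if now is None else now) > exp:
        return 410
    return 200
```

Checking the signature before the expiry means a tampered `exp` fails as a bad signature rather than extending access.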
Immutable Storage and Audit Logging
- Given a receipt has been stored in the repository, When any user attempts to edit or overwrite the stored receipt payload, digest, or signature, Then the system prevents modification and records the attempt in the audit log.
- And retrieving the stored receipt returns the identical digest and signature as originally issued.
- And creating a corrected receipt creates a new receipt_id linked via a supersedes relationship, leaving the original immutable.
- And the audit log records create, read, and attempted change events with actor, timestamp, and outcome.
Batch Receipt Export by Listing or Date Range
- Given a broker selects a listing or date range and desired formats, When a batch export is requested, Then the system produces a ZIP archive containing all matching receipts in the selected formats (PDF and/or JSON).
- And the archive includes a manifest (CSV or JSON) listing receipt_id, listing_id, timestamp, digest, and verification_result for each entry.
- And the archive is delivered via a signed, expiring download URL.
- And if no receipts match, the archive contains an empty manifest and the request completes successfully.
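The archive structure above can be sketched with the standard zipfile module; this minimal version emits only JSON bodies and a JSON manifest (the spec also allows PDF entries and a CSV manifest), and the input record shape is an assumption.

```python
import io
import json
import zipfile

def batch_export(receipts) -> bytes:
    """Build a ZIP of receipt JSON bodies plus a manifest.

    `receipts` is assumed to be a list of dicts with receipt_id,
    listing_id, timestamp, digest, verification_result, and a rendered
    json_body. An empty match still yields a valid archive whose
    manifest is an empty list.
    """
    manifest = [{k: r[k] for k in ("receipt_id", "listing_id", "timestamp",
                                   "digest", "verification_result")}
                for r in receipts]
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
        for r in receipts:
            zf.writestr(f"receipts/{r['receipt_id']}.json", r["json_body"])
    return buf.getvalue()
```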
Public Verification Portal
"As a third-party auditor, I want an independent portal to verify receipts so that I can validate evidence without relying on internal claims."
Description

Provide a public, rate-limited web portal and API where users can upload a receipt or paste a signature to validate authenticity. The portal fetches current and historical public keys via JWKS, verifies inclusion of listing_id and timestamp, and displays pass/fail with detailed diagnostics without exposing PII. Ensure high availability, TLS-only access, and exportable verification reports. Publish a trust page with current key fingerprints and status history.

Acceptance Criteria
Valid receipt upload verification passes without PII exposure
- Given a valid Proof Seal receipt file containing a signed payload with listing_id and timestamp, And the corresponding public key (kid) is present in the JWKS (current or historical), When the user uploads the receipt via the HTTPS web portal, Then the system fetches the JWKS over HTTPS, selects the key by kid, and validates the signature.
- And displays Pass with HTTP 200 within 2 seconds.
- And shows listing_id and timestamp (ISO 8601 UTC) and diagnostic fields (kid, alg, key_fingerprint, verification_time).
- And does not display or return any PII fields (e.g., name, phone, email); such fields are omitted or redacted.
Pasted signature and payload verification yields detailed diagnostics
- Given a user pastes a detached signature and the associated payload JSON containing listing_id and timestamp, When the user submits verification via the portal or POST /api/verify, Then the response is JSON with pass (boolean), reason_codes (array), diagnostics (object), and request_id (string).
- And on failure, reason_codes includes one or more of: unknown_kid, invalid_signature, missing_listing_id, missing_timestamp, timestamp_unparseable, timestamp_out_of_range.
- And no PII fields are present in the response body.
Verification using rotated and historical keys via JWKS
- Given a receipt signed with an older key whose kid is no longer active, And the JWKS endpoint exposes current and historical keys, When verification is performed, Then the system validates against the historical key matching the kid and returns Pass if the signature is valid.
- And on kid miss, the system refreshes JWKS (respecting Cache-Control) before failing with unknown_kid.
- And all JWKS requests are over HTTPS with certificate validation.
Rate limiting on public portal and API
- Given a single client IP makes more than 30 web verifications within 60 seconds, When additional verification requests arrive via the portal, Then the portal responds 429 Too Many Requests with Retry-After and X-RateLimit-Limit/X-RateLimit-Remaining headers.
- And a single client IP making up to 10 web verifications per minute is not rate-limited.
- And for API requests, more than 60 POST /api/verify requests per minute per IP returns 429 with the same headers.
Secure transport and high availability
- Given client access to the public portal and API, When requests are made over HTTP or HTTPS, Then all endpoints require HTTPS; HTTP requests receive 301 to HTTPS, and HTTPS responses include HSTS max-age ≥ 15552000 with includeSubDomains and preload.
- And only TLS v1.2 or higher is accepted; weaker protocols/ciphers are rejected.
- And the service achieves ≥ 99.9% monthly availability as measured by external checks to GET /healthz every minute.
- And GET /healthz returns 200 with body {"status":"ok"} within 300 ms at p95 during steady state.
Exportable verification reports from results
- Given a completed verification (Pass or Fail), When the user selects Export as PDF or Export as JSON, Then the downloaded file includes: pass/fail, listing_id, timestamp (ISO 8601 UTC), kid, key_fingerprint (SHA-256), JWKS URL, verification_time, reason_codes (if any), and a unique report_id.
- And the PDF contains a visible watermark 'Verified via TourEcho Proof Seal' and an embedded document hash.
- And the export is generated within 2 seconds and contains no PII fields.
Public trust page with key fingerprints and status history
- Given a user visits /trust or fetches /trust.json over HTTPS, When the page or feed is loaded, Then it lists each key with kid, SHA-256 fingerprint, status (active, retired, compromised), and activation/retirement timestamps.
- And includes a rotation history of the last 24 months.
- And updates within 5 minutes of any key status change.
- And contains no PII and is publicly cacheable with Cache-Control: public, max-age=300.
Watermarked Exports & Redaction
"As a brokerage admin, I want watermarked, redacted exports to share with stakeholders so that sensitive data is controlled while maintaining evidentiary value."
Description

Enable watermarked exports of submission data and AI summaries with recipient-specific identifiers, case IDs, and timestamps. Support configurable redaction of PII while preserving identity_fingerprint and verification status. Embed a tamper-evident hash and verification QR in each export. Provide PDF/CSV/JSON formats, bulk export for listings, and traceable access logs for each generated file.

Acceptance Criteria
Export PDF with Recipient Watermark and Verification Elements
Given an authenticated brokerage user with export permissions and a selected listing, case ID, and recipient identifier When the user exports submission data and AI summary as a PDF with watermarking enabled Then every page renders a visible watermark containing recipient identifier, case ID, listing ID, exporter user ID, and UTC ISO-8601 export timestamp And the first page and footer of each page include a verification QR that resolves to the verify endpoint with the document hash And the PDF embeds a SHA-256 checksum in XMP metadata and prints its short form adjacent to the QR And the checksum in metadata matches the checksum computed by the export service and returned in the API response And the API response includes file_id, storage URI, MIME type application/pdf, byte size, checksum, and export_timestamp
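The checksum contract above can be sketched in a few lines. The 12-character "short form" printed next to the QR is an assumption for illustration; the criteria do not fix its length.

```python
import hashlib

def file_checksum(data: bytes) -> tuple:
    """Return (full_sha256_hex, short_form) for an export file.

    The full digest goes into the PDF's XMP metadata and the API
    response; the short form (assumed here: first 12 hex chars) is
    what gets printed adjacent to the verification QR.
    """
    digest = hashlib.sha256(data).hexdigest()
    return digest, digest[:12]
```

The export service computes this once at generation time; the API response echoes the same digest so a client can independently confirm the embedded metadata matches.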
PII Redaction Preserves Identity Fingerprint and Verification Status
Given a redaction policy set to PII-Strict that masks name, email, phone, and free-text PII patterns When the user exports submission data in any supported format Then personally identifiable fields (full_name, email, phone) are replaced with deterministic masks (e.g., ****1234) or [REDACTED] per policy And detected PII within free-text comments is masked per policy while non-PII content remains intact And fields identity_fingerprint and verification_status are preserved unaltered And export metadata includes redaction_policy_id, redaction_version, and redaction_summary.counts And a policy-application audit entry is written linking file_id and policy_id
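A minimal sketch of the masking behaviors named above, under stated assumptions: the ****1234 style keeps the last four characters, and the free-text patterns shown (email, phone) stand in for a real, configurable policy engine.

```python
import re

def mask_value(value: str, keep_last: int = 4) -> str:
    """Deterministic mask in the ****1234 style: the same input always
    yields the same mask, so redacted records stay joinable without
    exposing the raw value. Values too short to mask are fully redacted."""
    if len(value) <= keep_last:
        return "[REDACTED]"
    return "****" + value[-keep_last:]

# Illustrative PII patterns for free text; a real PII-Strict policy
# would be configurable and considerably broader.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_free_text(text: str) -> str:
    """Mask detected PII in comments while leaving other content intact."""
    text = EMAIL_RE.sub("[REDACTED]", text)
    return PHONE_RE.sub("[REDACTED]", text)
```

Note that `identity_fingerprint` and `verification_status` are never routed through these functions; per the criteria they pass into the export unaltered.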
Multi-Format Exports Maintain Field Parity and Proof Data
Given a fixed set of submissions for a listing and a chosen redaction policy When the user exports the same dataset to PDF, CSV, and JSON Then CSV and JSON contain identical record counts and matching field sets (post-redaction) And both CSV and JSON include proof fields: _proof.hash (SHA-256 of file), _proof.qr_url, _watermark.recipient, _watermark.case_id, listing_id, export_timestamp And the PDF contains equivalent proof via embedded metadata and visible watermark/QR And computed checksums for each file match values recorded in the API response and, for PDFs, printed short checksum And a cross-format consistency check passes with zero discrepancies
Bulk Listing Export with Manifest and Per-File Proof
Given a listing with N submissions and the user selects Bulk Export with formats PDF, CSV, and JSON When the export is generated Then the result is a ZIP archive containing: submissions.csv (all records), submissions.ndjson (one JSON per line), a /pdf/ directory with one PDF per submission, and manifest.json And manifest.json lists listing_id, case_id, export_timestamp, redaction_policy_id, and an array of all files with byte size, SHA-256, and path And every PDF includes watermark + QR; CSV/NDJSON include proof fields; all file hashes match those in manifest.json And the ZIP itself has a top-level SHA-256 returned by the API and stored in manifest.json as archive_hash And generation completes within 2 minutes for up to 5,000 submissions and streams progress without error
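The manifest-plus-hashes structure can be sketched as below. One note: a ZIP cannot contain its own hash, so this sketch returns `archive_hash` alongside the archive (matching "returned by the API"); where the criteria say it is also stored in manifest.json, that record would presumably be kept server-side. Function and field shapes beyond the fields named in the criteria are assumptions.

```python
import hashlib
import io
import json
import zipfile

def build_archive(files: dict, listing_id: str, case_id: str) -> tuple:
    """Build a bulk-export ZIP with per-file SHA-256s in manifest.json.

    Returns (zip_bytes, archive_sha256_hex). `files` maps archive paths
    (e.g., 'pdf/123.pdf', 'submissions.csv') to raw bytes.
    """
    manifest = {
        "listing_id": listing_id,
        "case_id": case_id,
        "files": [
            {"path": path,
             "bytes": len(data),
             "sha256": hashlib.sha256(data).hexdigest()}
            for path, data in sorted(files.items())
        ],
    }
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, data in files.items():
            zf.writestr(path, data)
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
    archive = buf.getvalue()
    return archive, hashlib.sha256(archive).hexdigest()
```

A verifier can unzip the archive, recompute each file's SHA-256, and compare against manifest.json; any mismatch flags tampering.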
Verification QR Resolves and Confirms Tamper Status
Given a generated export file with embedded hash and a QR pointing to /verify?doc=<file_id>&hash=<sha256> When the QR is scanned or the URL is opened by an external reviewer with read access Then the verify endpoint responds within 2 seconds with status Valid when the stored checksum equals the presented checksum And the response displays listing_id, case_id, recipient identifier, export_timestamp, verification_status, and an identity_fingerprint prefix And if the file's computed checksum differs from the stored checksum, the endpoint returns Tamper Detected with HTTP 409 and logs the event And the page includes a downloadable audit receipt (PDF) containing the verification result and evidence
Access Logging for Export Generation and Downloads
Given any export generation or download action When the action occurs Then an immutable access log entry is recorded with actor_id (or service account), actor_role, action (generate|download), file_id, listing_id, case_id, redaction_policy_id, timestamp (UTC ISO-8601 with ms), IP, user_agent, outcome (success|failure), and checksum And logs are queryable by date range, listing_id, actor_id, and file_id and return results within 1 second for up to 100k entries And a tamper-evident chain (hash of previous entry) is maintained per file_id stream And exporting access logs produces a signed CSV/JSON with its own checksum and verification QR
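The tamper-evident chain described above (each entry carrying the hash of the previous one) can be sketched as follows. Field names beyond those in the criteria are illustrative; a real store would persist the chain per file_id stream.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel prev_hash for the first entry in a stream

def append_log_entry(chain: list, entry: dict) -> dict:
    """Append an access-log entry whose hash covers the previous entry's
    hash, so editing any past entry invalidates every later one."""
    prev_hash = chain[-1]["entry_hash"] if chain else GENESIS
    record = {**entry, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["entry_hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

def verify_chain(chain: list) -> bool:
    """Recompute every link; any edit to a past entry breaks the chain."""
    prev = GENESIS
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "entry_hash"}
        if body["prev_hash"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != rec["entry_hash"]:
            return False
        prev = rec["entry_hash"]
    return True
```

Because each `entry_hash` is computed over the previous one, an auditor only needs the latest hash (or a signed export of it) to detect rewriting anywhere earlier in the stream.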
Key Management & Rotation
"As a security lead, I want robust key rotation and revocation so that signatures remain trustworthy over time and compromised keys can be retired safely."
Description

Manage asymmetric keys in a dedicated KMS/HSM with least-privilege access, audit logging, and automated rotation. Support key versioning, scheduled and emergency rotation, and revocation with grace periods. Expose a JWKS endpoint with active and retired public keys, and ensure all signatures include kid and alg. Provide staging keys for non-production and documented disaster recovery and backup procedures compliant with industry guidance.

Acceptance Criteria
Automated Scheduled Key Rotation in KMS/HSM
Given an active asymmetric signing key in a dedicated KMS/HSM and a 90-day rotation policy is configured When the rotation date is reached Then a new key version is created in the same KMS/HSM with non-exportable key material And the new version is assigned a globally unique kid. Given the new key version exists When the signing service polls for configuration Then it begins using the new key for all new signatures within 5 minutes And ceases signing with the previous version. Given rotation has occurred When verifying submissions signed before and after rotation Then both verify successfully: pre-rotation signatures validate with the previous public key and post-rotation signatures validate with the new public key. Given rotation completes When audit logs are queried Then a log entry exists for key creation and signer switchover including actor/principal, timestamp (UTC), old_kid, new_kid, and result.
Emergency Key Rotation and Propagation
Given a security incident requiring immediate rotation When an authorized key admin triggers emergency rotation Then a new key version is created within 1 minute And all signing services switch to the new kid within 5 minutes And the previously active key is disabled for signing but remains enabled for verification for 24 hours unless explicitly revoked. Given emergency rotation completes When JWKS is requested Then it publishes the new public key within 60 seconds and continues to include the previous key for verification. Given emergency rotation completes When audit logs are reviewed Then entries exist for rotation trigger, key creation, signer switch, and JWKS publish with timestamps and success statuses.
JWKS Endpoint with Active and Retired Keys and Caching
Given a client requests GET /.well-known/jwks.json When the response is returned Then it is HTTP 200 with application/json and contains a keys array where each item includes kid, kty, alg, use="sig", and appropriate parameters (n,e for RSA or crv,x,y for EC). Given keys are rotated or marked retired but still within their verification window When JWKS is requested Then both active and retired-but-valid public keys are present And each kid is unique and never reused across versions. Given the JWKS is served When inspecting headers Then Cache-Control: max-age=300 and an ETag are present. Given a client provides If-None-Match with the current ETag When JWKS is requested Then the service responds 304 Not Modified. Given a key passes out of its verification window or is fully revoked When JWKS is requested Then that key is no longer present in the keys array.
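The caching contract above (ETag derived from the current key set, 304 on a matching If-None-Match) can be sketched framework-free. The `(status, headers, body)` tuple shape and the ETag derivation are illustrative assumptions.

```python
import hashlib
import json

def jwks_response(keys, if_none_match=None):
    """Serve a JWKS document with Cache-Control, ETag, and 304 support.

    `keys` is the list of active and retired-but-valid public keys,
    each carrying kid, kty, alg, use="sig", and key parameters.
    """
    body = json.dumps({"keys": keys}, sort_keys=True)
    # Derive the ETag from the key set so it changes on any rotation.
    etag = '"%s"' % hashlib.sha256(body.encode()).hexdigest()[:16]
    headers = {
        "Cache-Control": "max-age=300",
        "ETag": etag,
        "Content-Type": "application/json",
    }
    if if_none_match == etag:
        return 304, headers, ""  # client's cached copy is still current
    return 200, headers, body
```

When a key rotates or leaves its verification window, the serialized key set changes, the ETag changes, and cached 304 paths naturally invalidate within the 300-second max-age.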
Signature Metadata: kid and alg
Given a QR submission is signed When inspecting the JWS header Then kid and alg are present And alg is one of the allowed algorithms (RS256 or ES256) And kid matches a key id present in the current JWKS. Given the kid does not exist in JWKS When verifying the signature Then verification fails and the submission is rejected with a 4xx error and an audit record is created. Given the alg does not match the referenced key type When verifying the signature Then verification fails and the submission is rejected with a 4xx error and an audit record is created.
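The pre-verification header checks above can be sketched as a small gate that runs before any cryptographic verification. The reason strings and function shape are illustrative; only the checks themselves come from the criteria.

```python
# Allowed algorithms mapped to the key type each one requires.
ALLOWED_ALGS = {"RS256": "RSA", "ES256": "EC"}

def check_header(header: dict, jwks: dict) -> tuple:
    """Validate a JWS header against the current JWKS before verifying.

    Returns (ok, reason). A False result corresponds to rejecting the
    submission with a 4xx and writing an audit record.
    """
    kid, alg = header.get("kid"), header.get("alg")
    if not kid or not alg:
        return False, "missing kid or alg"
    if alg not in ALLOWED_ALGS:
        return False, "alg not allowed"
    key = next((k for k in jwks["keys"] if k.get("kid") == kid), None)
    if key is None:
        return False, "unknown kid"
    if key.get("kty") != ALLOWED_ALGS[alg]:
        return False, "alg does not match key type"
    return True, "ok"
```

Rejecting on an alg/key-type mismatch before verification also closes off algorithm-confusion attacks, where an attacker presents a signature under a weaker or mismatched algorithm against a known public key.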
Graceful Revocation and Verification Window
Given a key version is marked for revocation with a 48-hour grace period When verifying submissions during that window Then signatures created before the revocation cutoff continue to verify And the signing service does not produce any new signatures with that key. Given the grace period has elapsed When verifying a submission signed with the revoked kid Then verification fails and the key is absent from JWKS. Given a compromise requires immediate revocation (zero grace) When revocation is executed Then signing with that key is blocked within 1 minute And JWKS removes the key within 60 seconds And verification fails for all submissions referencing that kid after revocation time. Given verification during a grace window When evaluating a signature Then the signed submission timestamp (embedded in the payload) is used to determine eligibility relative to the revocation cutoff.
Access Control and Audit Logging in KMS/HSM
Given KMS/HSM policies are configured When IAM permissions are evaluated Then only the signing service role can perform sign operations; only the key admin role can create/rotate/revoke keys; and key material export is disabled for all principals. Given any key operation (create, rotate, revoke, sign) occurs When audit logs are queried Then an immutable log entry exists within 60 seconds including principal, action, key id, kid, result, and timestamp (UTC). Given an unauthorized principal attempts a restricted key operation When logs and alerts are reviewed Then the attempt is denied, a corresponding audit entry exists, and an alert is delivered to the security channel within 5 minutes.
Non-Production Staging Keys Isolation and Disaster Recovery
Given staging and production environments When inspecting KMS/HSM and JWKS configuration Then keys and endpoints are segregated by environment And submissions signed with non-production keys are rejected in production and vice versa with 4xx responses and audit entries. Given backups are configured When compliance evidence is requested Then documentation shows backup schedule, encryption at rest, and the most recent successful restore test completed within the last 90 days. Given a disaster recovery drill is initiated When executing the documented runbook Then keys and configuration are restored without exposing plaintext keys And service resumes within RTO ≤ 4 hours with data loss within RPO ≤ 15 minutes And verification of previously issued signatures succeeds post-restore.
Immutable Transparency Log
"As a dispute resolution manager, I want an immutable log of all events so that I can prove chronology and integrity during challenges."
Description

Record every signed submission in an append-only transparency log backed by a Merkle tree. Provide inclusion and consistency proofs via API, periodic checkpoints anchored to an external timestamping service, and retention policies aligned with brokerage compliance. Expose query tools to retrieve event history by listing or time window and export proof bundles for dispute cases.

Acceptance Criteria
Append-Only Log Ingestion for Signed Submission
Given a valid, uniquely identified signed QR submission, when it is posted to the transparency log API, then a new leaf is appended, a monotonically increasing leaf index is returned, and the API responds 201 with entryId, leafIndex, treeSize, and currentRoot. Given the same submissionId is posted again, when idempotency is enforced, then the API responds 200 with the original entryId and leafIndex and no new leaf is created. Given any request to update or delete an existing log entry, when the API receives it, then the API responds 405 and no change occurs to the log. Given a submission with an invalid signature or missing verifier identity, when it is posted, then the API responds 400 and no leaf is appended. Given concurrent submissions, when 100 requests are posted within 1 second, then all successful entries have strictly increasing leaf indices with no gaps and median append latency is ≤ 300 ms.
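The Merkle tree backing the log can be sketched with RFC 6962's hashing scheme: a 0x00 domain-separation prefix for leaves, 0x01 for interior nodes, and a split at the largest power of two strictly less than the subtree size. This is a reference sketch, not the production implementation, which would compute roots incrementally as leaves append.

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    """RFC 6962 leaf hash: SHA-256 over 0x00 || leaf data."""
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    """RFC 6962 interior-node hash: SHA-256 over 0x01 || left || right."""
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(leaves: list) -> bytes:
    """Root over the given leaves; the left subtree takes the largest
    power of two strictly less than the number of leaves."""
    n = len(leaves)
    if n == 1:
        return leaf_hash(leaves[0])
    k = 1
    while k * 2 < n:
        k *= 2
    return node_hash(merkle_root(leaves[:k]), merkle_root(leaves[k:]))
```

The `currentRoot` returned on each 201 is the root over all leaves appended so far; the domain-separation prefixes prevent a leaf from being forged as an interior node (and vice versa).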
Inclusion Proof Retrieval via API
Given a valid entryId, when a client requests an inclusion proof for the current tree head, then the API responds 200 with leafHash, auditPath, treeSize, rootHash, hashAlgorithm, and proof verifies locally against rootHash. Given a valid entryId and a specified checkpointId, when a client requests an inclusion proof, then the API responds 200 with a proof that verifies against the checkpoint root associated with checkpointId. Given a non-existent entryId, when an inclusion proof is requested, then the API responds 404. Given rate limits of 60 requests per minute per API key, when a client exceeds the limit, then the API responds 429 with retry-after.
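Local verification of the returned proof can be sketched with the standard RFC 6962-style audit-path fold: starting from the leaf hash, each sibling in the path is combined on the correct side until the computed value either matches the signed root or does not.

```python
import hashlib

def verify_inclusion(leaf: bytes, index: int, audit_path: list,
                     tree_size: int, root: bytes) -> bool:
    """Check an inclusion proof: fold the audit path into the leaf
    hash (RFC 6962 prefixes: 0x00 leaf, 0x01 node) and compare the
    result to the tree head's root hash."""
    fn, sn = index, tree_size - 1
    h = hashlib.sha256(b"\x00" + leaf).digest()
    for sibling in audit_path:
        if sn == 0:
            return False  # path longer than the tree allows
        if fn & 1 or fn == sn:
            h = hashlib.sha256(b"\x01" + sibling + h).digest()
            if not fn & 1:
                # Skip levels where this node is a lone right-edge child.
                while fn and not fn & 1:
                    fn >>= 1
                    sn >>= 1
        else:
            h = hashlib.sha256(b"\x01" + h + sibling).digest()
        fn >>= 1
        sn >>= 1
    return sn == 0 and h == root
```

A client that pins the checkpoint root (or verifies its anchor receipt) can thus confirm inclusion without trusting the log server's answer.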
Consistency Proof Across Checkpoints
Given two checkpoints with tree sizes s1 < s2, when a client requests a consistency proof between them, then the API responds 200 with a proof that validates per RFC 6962 consistency verification and local verification returns true. Given the checkpoints are provided out of order or unknown, when a client requests a consistency proof, then the API responds 400 for out-of-order or 404 for unknown checkpoints. Given a newly created checkpoint, when a consistency proof is requested from the previous checkpoint, then the proof length is O(log s2) and verification completes in ≤ 200 ms for s2 ≤ 10^6.
Periodic External Timestamp Anchoring
Given new log entries exist since the last anchor, when the anchoring interval of 60 minutes elapses, then the system creates a new checkpoint, submits the root to the external timestamping service, and stores a signed anchor receipt with service identity and timestamp. Given the anchoring service is temporarily unavailable, when anchoring is attempted, then the system retries with exponential backoff for up to 24 hours and emits an operational alert without losing entries. Given an anchor receipt, when a client requests verification via the API, then the API returns the receipt and chain data and local verification of the signature and timestamp succeeds. Given no new entries since the last anchor, when the interval elapses, then no new anchor is created and the last anchor remains discoverable via the listing endpoint.
Compliance Retention and Redaction Policy
Given a brokerage retention policy of 7 years configured at the organization level, when an entry reaches its retention expiry, then the payload is redacted and tombstoned while the leaf hash and Merkle position remain queryable, and fetching the payload returns 410 Gone with redaction metadata. Given a legal hold is applied to a listing, when entries under hold reach their retention date, then no redaction occurs until the hold is released and audit logs record the exemption. Given an admin requests a retention audit report for a date range, when the report is generated, then it lists counts of redacted, held, and active entries and includes policy version, timestamps, and signatures verifiable against the transparency log. Given an attempt to purge or rewrite historical leaves, when the purge job runs, then the job is rejected and logged; only redaction metadata is appended as a new event.
Event History Query by Listing and Time Window
Given a valid listingId and a time window, when a client queries the event history, then the API returns 200 with a chronologically ordered, paginated list of entries limited to pageSize with stable cursor-based pagination. Given the query response includes entries, when the client inspects each entry, then each item includes entryId, submissionTimestamp, submitterId, leafIndex, leafHash, and a URL to fetch an inclusion proof. Given an unauthorized user queries a listing they do not have access to, when the API receives the request, then it responds 403 and no data is leaked. Given up to 10,000 entries match the query, when the client requests the first page with pageSize=500, then the API responds within ≤ 2 seconds and returns exactly 500 entries unless fewer are available.
Dispute Case Export with Proof Bundle
Given a listingId and a specified time window, when a dispute export is requested, then the system produces a bundle containing canonicalized submissions, submitter identities, signatures, inclusion proofs for each entry, a consistency proof to the latest checkpoint, and the latest anchor receipt, and returns a bundle hash. Given the bundle is downloaded twice for the same parameters, when the files are hashed, then the bundle hash is identical across downloads. Given any file within the bundle is modified, when the verification API or CLI validates the bundle, then verification fails and identifies the tampered component. Given a PDF export is generated, when the file is opened, then each page includes a visible watermark with bundle hash, generation timestamp, and listingId.

Badge Everywhere

Surfaces Verified Agent status across schedules, notifications, seller views, and exports. Filters and sorting elevate high‑trust feedback first, helping agents prioritize follow-up and explain confidence levels to sellers with clear, visible markers.

Requirements

Universal Verified Badge Rendering
"As a listing agent, I want a clear Verified Agent badge shown anywhere I view feedback or agent identities so that I can immediately trust and prioritize high‑credibility inputs without extra clicks."
Description

Implement a consistent, reusable Verified Agent badge component that appears anywhere an agent identity or feedback is displayed, including schedules, appointment details, feedback lists, seller portal views, notifications, mobile screens, and admin dashboards. The component must support size variants, light/dark themes, and contextual placements (list, card, modal, notification). Provide accessible labels and tooltips that explain “Verified Agent,” with localized copy. Handle states for verified, pending, expired, and revoked verification with distinct visuals. Expose a click/tap action to open a status explainer modal that outlines what “Verified” means and how it’s determined. Ensure performant rendering for large lists (virtualized lists, icon sprite usage) and consistent styling across web and mobile via shared design tokens. Provide QA hooks and analytics events for impressions and interactions.

Acceptance Criteria
Badge renders across all supported surfaces
Given an authenticated user views any of: Schedules list, Appointment Detail, Feedback List, Seller Portal, Notification Center, Mobile list/card views, or Admin Dashboard When an agent identity or their feedback is displayed Then the Verified badge component renders next to the agent name/avatar in each surface using the surface’s defined placement And when the viewing entity is anonymous, no badge is rendered And the component visually aligns with adjacent elements without overlap, truncation, or layout shift (CLS ≤ 0.01 on badge load)
Distinct visuals for verification states
Given agent verification states: verified, pending, expired, revoked When the badge renders for each state Then each state uses a distinct icon and color token: verified=check+color.badge.verified, pending=clock+color.badge.pending, expired=alert+color.badge.expired, revoked=block+color.badge.revoked And each state exposes an aria-label and tooltip that includes the localized state name And the badge never displays a verified icon when the state is pending, expired, or revoked
Accessibility and localization coverage
Given the badge is focusable by keyboard When the badge receives keyboard focus or hover Then an accessible tooltip appears and is screen-reader readable with text “Verified Agent — [state]” And the badge and tooltip meet WCAG 2.1 AA contrast (text ≥ 4.5:1; non-text graphics ≥ 3:1) And screen readers announce the element as a button with role=button and aria-label in the active locale And localized strings exist for at least en and es; when a translation key is missing, English fallback is used without placeholder keys
Explainer modal opens and is accessible
Given the badge is rendered When the user clicks/taps the badge or presses Enter/Space while focused Then a modal opens with title “Verified Agent”, a state description, an explanation of verification criteria, the agent’s last verification timestamp, and a “Learn more” link And the modal traps focus, is dismissible via ESC, close button, and overlay click, and returns focus to the originating badge on close And the modal content is localized to the active locale
Performance and virtualization in large lists
Given a virtualized list with 1,000 feedback rows mixing all verification states When the list loads and the user scrolls from top to bottom Then only badges within the viewport ±1 buffer render (≤ 100 concurrently) And the icon sprite is requested once (≤ 1 network request for badge icons per session) And 95th percentile per-item badge render work ≤ 16 ms, with average scroll FPS ≥ 55 on a mid-tier device
Theme and size variants
Given light and dark themes and size variants xs, sm, md, lg When the badge renders in list, card, modal, and notification contexts Then the component uses size mapping: list=sm, card=md, modal=md, notification=xs And colors, spacing, and typography switch with theme via shared design tokens without hard-coded values And there are no illegible states in dark theme (contrast rules still pass)
Analytics events and QA hooks
Given the badge component renders When the badge first appears on a surface per session Then an analytics event badge_impression is sent with properties: agent_id_hash, verification_state, surface, placement, theme, variant, locale, timestamp And when the badge is activated to open the modal, event badge_click is sent with the same properties plus modal_version And the component exposes stable QA hooks: data-testid/testID formatted as badge-[state]-[surface]-[variant]
Verification Status Data Model & Sync
"As a broker-owner, I want agent verification to be accurately sourced and kept in sync so that my team and sellers can rely on the badge and trust indicators being up to date."
Description

Create a canonical verification domain model with fields for agent_id, verification_status (verified|pending|unverified|expired|revoked), verified_at, expires_at, verifier_source, trust_score, and audit metadata. Integrate with external verification sources (e.g., brokerage SSO/roster, MLS credentials, provider webhooks) via scheduled jobs and real-time callbacks. Implement graceful fallbacks when sources are unreachable, with TTL-based caching and retry policies. Support broker-owner manual override with reason codes and audit logs. Expose verification data via internal APIs with tenant scoping and row‑level security. Enforce data minimization for PII, and store source-of-truth references rather than sensitive documents. Emit events for changes in status to update UI, re-rank feedback, and trigger notifications.

Acceptance Criteria
Canonical Verification Model Persistence
Given a new verification record is created When fields include agent_id, verification_status in {verified,pending,unverified,expired,revoked}, verified_at (nullable), expires_at (nullable), verifier_source, trust_score (0–100), and audit metadata Then the record persists and can be retrieved with exactly these fields and without any stored documents or PII beyond agent_id and provider reference IDs. Given an agent_id already exists When another record is inserted for the same agent_id Then the insert is rejected with a unique constraint error. Given expires_at is in the past When the record is read or a maintenance job runs Then verification_status is set to expired if not revoked and verified_at is unchanged, and a status-change event is emitted. Given verification_status is updated When the new status is outside the allowed set Then the update is rejected. Given trust_score is updated outside 0–100 When the update is attempted Then the update is rejected and no partial changes are saved.
External Source Sync via Scheduled Jobs
Given brokerage SSO roster and MLS endpoints are reachable When the sync job runs every 15 minutes Then it fetches deltas, maps provider statuses to canonical values, and upserts matching agents within the tenant. Given the job processes records When the same batch is reprocessed Then results are idempotent with no duplicate audit entries and last_synced_at updated. Given a provider returns mixed successes/failures When the job completes Then successful updates commit; failed records are retried with exponential backoff (initial 1m, max 30m, max 5 attempts) and are tracked with metrics. Given a single run processes 10,000 records When the job executes Then it completes within 5 minutes p95 and emits operational metrics (processed, updated, failed, retried).
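The retry policy above (initial 1 minute, 30-minute cap, 5 attempts) can be sketched as exponential backoff with full jitter. "Full jitter" (a uniform draw in [0, bound]) is one common interpretation; the criteria only say backoff is exponential, so treat the jitter strategy as an assumption.

```python
import random

def backoff_delays(attempts=5, base=60.0, cap=1800.0, rng=None):
    """Return the delay (seconds) before each retry attempt.

    Defaults mirror the sync-job criteria: 1 min initial, 30 min max,
    5 attempts. The bound doubles each attempt and is capped; the
    actual delay is drawn uniformly below the bound (full jitter) so
    failed records don't retry in lockstep.
    """
    rng = rng or random.Random()
    delays = []
    for attempt in range(attempts):
        bound = min(cap, base * (2 ** attempt))
        delays.append(rng.uniform(0, bound))
    return delays
```

Seeding the generator (as the test below does) makes the schedule reproducible for debugging; production runs would use an unseeded source.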
Real-time Callback/Webhook Handling
Given a provider webhook is received When the HMAC-SHA256 signature validates within a 2-minute skew window Then the event is accepted and 200 is returned within 2 seconds p95; otherwise 401 is returned and no state change occurs. Given a duplicate event_id within 24 hours When it is received again Then it is deduplicated and no second update or audit entry is created. Given events arrive out of order When an older event attempts to overwrite a newer status based on provider timestamp/version Then the older event is ignored and logged, preserving the most recent provider update.
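The signature check above can be sketched as follows: HMAC-SHA256 over the raw request body, constant-time comparison, and the 2-minute skew window. How the provider binds the timestamp into the signed payload varies by provider and is simplified here to a separate parameter.

```python
import hashlib
import hmac
import time

def verify_webhook(secret: bytes, body: bytes, signature_hex: str,
                   sent_at: float, now=None, max_skew: float = 120.0) -> bool:
    """Accept a webhook only if its HMAC-SHA256 signature matches and
    its timestamp is within the allowed clock-skew window.

    A False result maps to the 401 path in the criteria: reject with
    no state change.
    """
    now = time.time() if now is None else now
    if abs(now - sent_at) > max_skew:
        return False  # replayed or badly skewed event
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking match length via timing.
    return hmac.compare_digest(expected, signature_hex)
```

Deduplication by `event_id` and out-of-order protection (the provider timestamp/version check) happen after this gate, once the event is known to be authentic.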
Graceful Fallbacks with TTL Caching and Retries
Given an upstream source is unreachable or times out (>3s) When verification data is requested Then the system serves the last known value not older than 60 minutes and marks freshness as stale=true in the response. Given the source remains unreachable When retries are scheduled Then exponential backoff with jitter is used up to 5 attempts; after that, a circuit breaker opens for 10 minutes and alerts are emitted. Given cached data exceeds TTL and no fresh data is obtainable When verification_status was verified with expires_at in the future Then the status remains verified but flagged stale; if expires_at has passed, then status becomes expired and an event is emitted.
Broker-Owner Manual Override with Audit Trail
Given a user with broker_owner role When they apply a manual override with a reason_code in {agent_provided_proof, known_fraud, dispute_pending, admin_correction, other} and optional note Then the override updates verification_status/trust_score immediately, supersedes external data, and emits an override event. Given any manual override When stored Then audit metadata records actor_type=broker_owner, actor_id, timestamp, prior_value, new_value, reason_code, source=manual_override, and correlation_id. Given a non-broker_owner user attempts an override When the request is made Then the request is denied with 403 and no state change occurs. Given an override is rescinded by a broker_owner When the rescind action is confirmed Then the system restores the last provider-derived state and logs the reversal in the audit trail.
Internal API Exposure with Tenant Scoping & RLS
Given a service calls GET /internal/verification?agent_id=... When the caller’s tenant matches the agent’s tenant Then the API returns agent_id, verification_status, verified_at, expires_at, verifier_source, trust_score, and freshness flag with p95 latency <= 300 ms and no PII beyond allowed fields. Given a batch query for up to 500 agents When called Then the API returns results with pagination, p95 latency <= 800 ms, and enforces field-level minimization. Given a cross-tenant access attempt When RLS policies apply Then the query is denied (HTTP 403) and access is logged; no data leakage occurs. Given traffic spikes to 50 RPS sustained When serving the endpoint Then error rate remains <1% and per-tenant rate limits are enforced.
Event Emission on Status/Score Changes
Given verification_status or trust_score changes (including auto-expire and overrides) When the transaction commits Then a VerificationStatusChanged v1 event is published within 2 seconds containing tenant_id, agent_id, old_value, new_value, change_source, occurred_at, and correlation_id. Given at-least-once delivery semantics When consumers receive duplicate events Then events include a deterministic event_id for consumer deduplication. Given multiple updates for the same agent When published Then per-agent ordering is preserved (e.g., by partition key agent_id) and publish success rate >= 99.9% over 5-minute windows is observed in monitoring.
Trust-First Sorting & Filtering
"As a busy listing agent, I want verified feedback surfaced to the top and easy filters to narrow to verified-only so that I can prioritize follow-up that is most likely to move the listing."
Description

Introduce default sorting that elevates feedback from Verified Agents ahead of unverified entries, with deterministic tie-breakers (recency, completeness). Provide filters and quick toggles such as “Verified only,” “Show all,” and “Hide unverified.” Persist user preferences per user and per listing across sessions. Extend API endpoints with query params for sort_by=trust and filter=verified to ensure parity between UI and exports. Clearly label elevated items with an inline ‘Verified prioritized’ indicator and an info tooltip explaining the ranking. Ensure performance via proper indexing and pagination, and provide safeguards to respect explicit user sort overrides.

Acceptance Criteria
Default Trust-First Sort with Deterministic Tie-Breakers
Given a listing feedback list with a mix of verified and unverified agent entries and no saved user sort When the page loads Then the list is sorted with Verified=true entries first And within the same verification status items are ordered by created_at descending And when created_at is identical items are ordered by completeness_percent descending And when still identical items are ordered by feedback_id ascending to ensure stable ordering And each verified entry displays an inline "Verified prioritized" indicator next to the agent identity And when there are zero verified entries the list falls back to created_at descending with completeness and id tie-breakers and no "Verified prioritized" indicators are shown
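The deterministic ordering above reduces to a single composite sort key. Field names match the criteria; the record shape and `created_at` as epoch seconds are assumptions for illustration.

```python
def trust_sort(entries: list) -> list:
    """Order feedback: verified first, then created_at desc,
    completeness_percent desc, feedback_id asc (stable tie-breaker)."""
    return sorted(
        entries,
        key=lambda e: (
            not e["agent_verified"],      # False (verified) sorts first
            -e["created_at"],             # newest first (epoch seconds)
            -e["completeness_percent"],   # more complete first
            e["feedback_id"],             # final deterministic tie-breaker
        ),
    )
```

Because the final key is a unique id, the ordering is total: pagination stays stable across pages, and the same dataset sorts identically in the UI, the API, and exports.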
Filter Toggles: Verified Only, Show All, Hide Unverified
Given the feedback toolbar is visible When the user opens filter options Then three mutually exclusive options are present: "Verified only", "Show all", and "Hide unverified" And selecting "Verified only" shows only entries where agent_verified=true and updates the results count accordingly And selecting "Hide unverified" hides entries where agent_verified=false but retains trust-first ordering among remaining items And selecting "Show all" restores both verified and unverified items with trust-first ordering And filter selection is reflected in the URL/query state and is reversible via Back/Forward navigation And all filter controls are keyboard accessible (tab focusable, Enter/Space toggles) and have ARIA labels
Persisted Sort/Filter Preferences Per User and Listing
Given a user selects "Hide unverified" and changes sort to "Recency" When the user navigates away and returns to the same listing within 30 days on the same account (web or mobile) Then the previously selected filter and sort are automatically reapplied for that listing only And switching to a different listing uses that listing’s own last-saved preferences or defaults if none exist And using "Reset to default" clears the listing-level preferences and re-enables trust-first sort And preferences persist across sessions and devices for the same authenticated user
API Parity: sort_by=trust and filter=verified
Given an API client requests GET /feedback?listing_id={id}&sort_by=trust&filter=verified When the response is returned Then status is 200 and all items have agent_verified=true And items are ordered by created_at desc, then completeness_percent desc, then feedback_id asc And the response metadata includes sort_by=trust and filter=verified And omitting sort_by applies trust-first by default while omitting filter applies all items And requesting sort_by=recency returns items ordered by created_at desc regardless of verification and does not label trust prioritization in metadata And the exports endpoint (e.g., GET /exports/csv?listing_id={id}&sort_by=trust&filter=verified) returns rows in the same order and includes a Verified column with true/false values
Performance, Indexing, and Pagination under Trust Sort
Given a listing with 5,000 feedback records and pagination page_size=50 When requesting the first page with sort_by=trust Then p95 API latency is <= 300ms and p99 <= 500ms in staging with warmed caches And database query uses an index to avoid full table scans (verified/created_at composite or equivalent) And total DB execution time per request is <= 150ms And pagination remains stable across pages due to deterministic tie-breakers (no item appears on two pages or is skipped) And changing sort or filter resets pagination to page 1
Respect User Sort Overrides and Indicator/Tooltip Behavior
Given the user explicitly changes the sort from Trust (default) to Recency When the list refreshes Then no item displays the "Verified prioritized" indicator and the trust info tooltip icon is hidden And this override persists for the listing until the user selects Trust (default) or clicks Reset to default And when Trust (default) is re-selected the indicator returns and a tooltip is available with text: "Verified agents are prioritized to help you act on high-trust feedback first." And the tooltip is accessible (focusable target, aria-describedby, dismissible with Esc, meets AA contrast) And changing filters alone (e.g., Show all to Verified only) does not auto-switch the chosen non-trust sort
Seller Confidence Indicators
"As a home seller, I want to see which feedback comes from verified professionals and how that affects confidence so that I can better interpret what to change or negotiate."
Description

Add a seller-facing confidence module that summarizes feedback quality with counts and percentages of responses from Verified Agents, a simple confidence meter, and an explainer legend for the badge. Provide drill-down to view which comments are verified, with non-sensitive agent identifiers and privacy-preserving display. Include contextual copy that explains why verified feedback is emphasized and any limitations or disclaimers. Ensure the module is available in web seller views and shareable links, inherits the account’s branding, and respects permission scopes.

Acceptance Criteria
Seller Web View and Shareable Link Rendering
Given a seller authenticated in the web seller portal with access to Listing A, when the listing overview loads, then the Seller Confidence module renders above the feedback list with the heading "Feedback confidence". Given a valid public shareable link for Listing A, when an unauthenticated user opens the link, then the Seller Confidence module renders with identical counts/percentages and meter as the seller view. Given a listing with zero feedback responses, when the module renders, then it displays counts=0, percentage=0.0%, a disabled meter state, and a non-intrusive message "No feedback yet". Given the feature flag SellerConfidence is disabled or the viewer lacks scope seller.confidence.read, when the page loads, then the module does not render and no PII is exposed.
Verified Counts, Percentages, and Rounding Rules
Given N total feedback responses and V marked Verified Agent, when the module renders, then it displays Verified=V, Unverified=N−V, and Verified %= round((V/N)*100 to 1 decimal, half-up). Given N=0, when the module renders, then it displays Verified=0, Unverified=0, and Verified %=0.0%. Given feedback data changes (new response added or verification state toggled), when the view refreshes or polling interval elapses, then counts and percentage update within 2 seconds to match the API state. Given the percentage is displayed, then it is formatted with one decimal place and a percent sign (e.g., 66.7%).
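The half-up rounding rule is worth implementing with decimal arithmetic, since Python's built-in `round` uses banker's rounding and floats can misround boundary values. A sketch, with `verified_percent` as a hypothetical helper name:

```python
from decimal import Decimal, ROUND_HALF_UP

def verified_percent(verified: int, total: int) -> Decimal:
    """Verified % rounded to one decimal, half-up; 0.0 when no feedback."""
    if total == 0:
        return Decimal("0.0")
    pct = Decimal(verified) / Decimal(total) * 100
    return pct.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)

verified_percent(2, 3)   # 66.666... -> Decimal('66.7')
verified_percent(1, 16)  # exactly 6.25 -> Decimal('6.3') under half-up
verified_percent(0, 0)   # Decimal('0.0')
```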
Confidence Meter Logic and Accessibility
Given Verified % computed to one decimal, when determining the meter level, then thresholds are: <=33.3 Low, >33.3 and <=66.6 Medium, >66.6 High. When the meter renders, then it displays a textual label (Low/Medium/High), a corresponding color (red/amber/green), and an ARIA label "Feedback confidence: {Level} ({Verified%} verified)". When the meter renders, then color contrast between text/icons and background is >= 4.5:1 and the meter is readable in dark mode and high-contrast mode. When the meter is hovered or focused, then a tooltip appears describing the calculation method and thresholds.
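The banding logic above reduces to two comparisons once the percentage has been rounded to one decimal. A minimal sketch:

```python
def meter_level(verified_pct: float) -> str:
    """Map a one-decimal verified percentage to the meter level.
    Thresholds per the criteria: <=33.3 Low, <=66.6 Medium, else High."""
    if verified_pct <= 33.3:
        return "Low"
    if verified_pct <= 66.6:
        return "Medium"
    return "High"
```

Since the input is already quantized to one decimal, values can never fall between bands (e.g. 66.65 is unrepresentable), so the three ranges are exhaustive.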
Drill-Down Verified Comments with Privacy-Preserving Identifiers
Given the module is visible, when the user activates "View details" or a verified badge, then a drill-down opens listing comments with a Verified/Unverified marker on each comment. For Verified comments, then each item shows only: approved display name per org policy ("Verified Agent" or first-initial last-name), brokerage name, state, and verification timestamp; no email, phone, license number, or photo is shown. Given an agent has opted out of name display, when the drill-down renders, then their identifier reads "Verified Agent" with brokerage and state only. When sorting/filter toggles are used, then Verified comments are shown first by default; enabling "Show unverified" includes others with a clear "Unverified" marker.
Explainer Legend and Disclaimers
Given the info icon labeled "What does Verified mean?", when activated, then the legend opens and explains: what "Verified Agent" is, how verification works, why verified feedback is emphasized, and limitations/disclaimers (e.g., not all showings are verified; feedback is subjective). When the legend is open, then it displays a last-refreshed timestamp and a link to the privacy policy. When evaluated for readability, then the legend copy scores at or below U.S. 9th-grade reading level. When the legend is dismissed, then focus returns to the triggering control and the open/closed state is preserved for the session.
Branding Inheritance and Contrast Compliance
Given an account with brand settings (logo, primary color, typography), when the module renders, then it applies those tokens consistently to header, meter, and badges; otherwise, system defaults are used. When brand colors are applied, then all text and icons in the module meet WCAG AA contrast (>=4.5:1), auto-adjusting UI shades if necessary without altering stored brand settings. Given a shareable link, when the module renders, then it uses the listing account’s branding and does not expose internal admin themes or unrelated organization branding.
Permission Scopes and Link Access Controls
Given a user without seller.confidence.read or without access to Listing A, when they load the seller portal URL, then the module is not rendered and the page returns 403 or redirects to login per policy. Given a public shareable link token with scope public.seller.read, when opened, then the module renders but suppresses internal-only identifiers per policy while retaining Verified/Unverified markers. Given the organization setting "Hide brokerage on public links" is enabled, when a public link is opened, then brokerage names are omitted while still indicating "Verified Agent". Given a shareable link has been revoked or expired, when it is accessed, then the module does not render and an expired link message (HTTP 410) is shown.
Export Badging & Legends
"As an agent preparing a seller report, I want exports that clearly mark verified feedback and keep my trust-first sort so that I can justify recommendations with visible credibility cues."
Description

Embed Verified Agent indicators into PDF, CSV, and XLSX exports as iconography and/or a boolean column (e.g., Verified=true) with a legend explaining the marker. Preserve trust-first ordering in exports when selected in the UI, and offer an export option to include/exclude unverified feedback. Use vector assets for PDF for print clarity, include alt text for accessible PDFs, and maintain consistent branding. Ensure parity between exported data and on-screen filters/sorts via a shared query layer. Validate performance on large exports, using streaming where applicable.

Acceptance Criteria
PDF Export: Verified Badge & Legend
Given a listing has at least three verified and three unverified feedback entries visible in the UI When I export the feedback as PDF with default options Then every verified entry in the PDF displays the Verified Agent icon adjacent to the agent identifier And the PDF contains a single legend that explains the icon as "Verified Agent" and is visible at 100% zoom And no unverified entry displays the Verified Agent icon And the icon is embedded as a vector (no rasterization) and prints crisply at 300 dpi or higher And the icon and legend use brand-approved colors and typography per the design system
CSV/XLSX: Verified Boolean Column & Legend
Given I export the same feedback list to CSV and to XLSX When the files are generated Then both files include a column named "Verified" with boolean values true/false matching the on-screen status for each row And the CSV uses unquoted lowercase true/false values And the XLSX stores the "Verified" cells as Boolean type (not text) And the CSV first line begins with "# Legend: Verified=true indicates Verified Agent" And the XLSX includes a worksheet named "Legend" describing the "Verified" column semantics And both files contain the same number of rows in the same order as the UI
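The CSV shape described above (a leading legend comment line, then headers, then unquoted lowercase booleans) can be sketched with the standard library. Column names and the row tuple layout are illustrative:

```python
import csv
import io

def export_csv(rows):
    """rows: (agent, comment, verified) tuples, already in UI order."""
    buf = io.StringIO()
    # Legend comment line precedes the header, per the CSV criteria.
    buf.write("# Legend: Verified=true indicates Verified Agent\n")
    writer = csv.writer(buf)
    writer.writerow(["Agent", "Comment", "Verified"])
    for agent, comment, verified in rows:
        # csv.writer's minimal quoting leaves plain true/false unquoted.
        writer.writerow([agent, comment, "true" if verified else "false"])
    return buf.getvalue()
```

For the streaming requirement further down, the same row loop would yield chunks from a generator instead of accumulating into a buffer.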
Trust‑First Ordering Preserved in Exports
Given the on-screen list is sorted by Trust First (Verified agents first) When I export to PDF, CSV, and XLSX Then the exported rows appear in exactly the same order as displayed on-screen And all leading rows that are verified in the UI appear first in the exports And when two rows have equal trust ranking, the export preserves the UI’s secondary sort and is stable across repeated exports
Export Option to Include/Exclude Unverified
Given the export dialog is open for feedback When I toggle off "Include unverified feedback" Then only rows with Verified=true are exported to all formats And the exports still include headers and legend sections And the row count in each export equals the Verified count shown in the UI And if no verified rows exist, the export completes successfully with zero data rows and without errors
Accessible PDFs: Alt Text for Badges
Given screen reader software is used to read an exported PDF containing verified entries When focus moves to a verified entry that displays the badge Then the screen reader announces "Verified Agent" for the badge And every instance of the Verified Agent icon in the document has alternate text "Verified Agent" And the PDF passes automated accessibility checks for tagged images with alt text and logical reading order
Parity via Shared Query Layer (Filters & Counts)
Given filter conditions (e.g., date range, listing, source, rating) are applied in the UI When I export using those same filters Then the exported dataset row count matches the UI count exactly And the set of exported records corresponds to the same filter results (no missing or extra rows) And any UI sort (including Trust First) is reflected identically in the export And changing a filter in the UI and re-exporting updates the exported dataset accordingly
Large Export Performance & Streaming
Given a dataset with at least 100,000 feedback rows When I export to CSV Then the download stream begins within 3 seconds of request And the export completes within 180 seconds at p95 without server timeouts And data is transferred using streaming (no full in-memory buffering), keeping server memory usage within 512 MB over baseline When I export the same dataset to XLSX Then the file generation completes within 300 seconds at p95 and memory usage stays within 1 GB over baseline When I export 2,000 rows to PDF Then the file generation completes within 60 seconds and the resulting PDF size is under 25 MB
Role-based Visibility & Controls
"As an account admin, I want granular controls over who sees verification details and default behaviors so that our organization can balance transparency, privacy, and workflow efficiency."
Description

Define permissions determining who can see verification details, trust scores, and override controls: sellers see badges and simple explanations without sensitive metadata; agents and broker-owners can access source and status details; only broker-owners/admins can set policy defaults (e.g., “Verified first” as the organizational default). Provide account-level settings to enable/disable Badge Everywhere, configure default sort, and control export inclusion. Log all policy changes and manual overrides with user, timestamp, and reason for compliance. Gate rollout behind a feature flag for safe deployment and staged enablement.

Acceptance Criteria
Seller-Facing Views Hide Sensitive Metadata
Given a user with role Seller and Badge Everywhere is enabled for the organization When viewing schedules, notifications, listing dashboard, or feedback summaries Then a "Verified Agent" badge is displayed next to verified agent names And the explanation text is plain-language and does not reveal verification source, document type, identifiers, status codes, trust scores, or timestamps And no badge metadata is embedded in seller-facing emails, PDFs, or CSV exports
Authorized Roles See Verification Details
Given a user with role Agent or Broker-Owner When viewing internal schedules or feedback and requesting badge details (click/hover/tap) Then the UI reveals verification provider/source, last verified timestamp, current verification status (verified/pending/expired), and trust score And Given a user with role Seller When attempting to access badge details via UI or API Then the details control is not shown and API requests respond 403 Forbidden
Manual Override Controls Scope and Propagation
Given a user with role Broker-Owner or Admin When accessing a listing’s Badge Everywhere settings Then controls to set sort priority (Verified first/by time/custom) and to apply per-listing overrides are visible and enabled And Given a user with role Agent or Seller When accessing the same settings Then override controls are hidden and any API attempt returns 403 Forbidden And When a manual override is applied or removed by an authorized user Then the resulting order is reflected across schedules, notifications, seller views, and exports within 60 seconds
Organization Default Policy Management
Given a user with role Broker-Owner or Admin When setting the organization default to "Verified first" Then all listings without an explicit per-listing override adopt the default across schedules, notifications, seller views, and exports within 5 minutes And Given a user without these roles When attempting to change the organization default policy Then the UI is read-only and API requests return 403 Forbidden And When the default policy is changed Then the new value is persisted and applied consistently on subsequent loads and exports
Feature Flag and Org-Level Toggle Behavior
Given the global "Badge Everywhere" feature flag is OFF When any user accesses related UI or API Then badge UI is not rendered and endpoints return a FeatureDisabled error without exposing badge data Given the feature flag is ON and the organization setting "Badge Everywhere" is Disabled When users access schedules, notifications, seller views, or exports Then badges and verification-based sorting are suppressed and sorting ignores verification status Given the feature flag is ON and the organization setting is Enabled When users access supported surfaces Then badges and sorting behave per role visibility and policy defaults And When enabling the feature for staged rollout Then the flag supports targeting by environment and by organization allowing enablement for a specified cohort only
Export Inclusion and Metadata Sanitization
Given the organization setting "Include badges in exports" is ON When generating internal exports (agent/broker) Then a badge column and verification-aware sorting are included per policy and role And Given the same setting is ON When generating seller-facing exports Then only badge presence and a plain-language explanation are included without verification source, identifiers, status codes, trust scores, or timestamps And Given the setting is OFF When generating any export Then no badge-related fields or sorting are applied
Audit Logging for Policy Changes and Overrides
Given a policy change or manual override is saved Then an audit entry is created capturing user ID, role, organization ID, affected resource (organization or listing), action type, previous value, new value, UTC timestamp (ISO 8601), and a non-empty reason And When the reason is omitted or blank Then the change is rejected with a validation error and no audit entry is created And Given a user with role Broker-Owner or Admin When accessing the audit log Then they can view and filter entries by date range, user, resource, and action; other roles cannot access the log

Risk Guard

Adaptive risk scoring flags suspicious activity—duplicate numbers, rapid‑fire scans, mismatched roster data—and triggers step‑up verification automatically. Keeps spam out without blocking legitimate buyer agents, preserving data quality and user flow.

Requirements

Real-time Risk Scoring Engine
"As a brokerage admin, I want each scan to receive an instant, explainable risk score so that suspicious activity is intercepted without slowing legitimate showings."
Description

Compute an on-the-fly risk score (0–100) with human-readable reason codes for every QR scan and showing request. Combine multiple signals—contact reuse frequency, scan velocity, device fingerprint, IP reputation, and roster/MLS congruence—using a weighted model with configurable thresholds per brokerage. Enforce a latency budget under 150 ms at P95 so decisions do not slow the showing flow. Publish results to the event bus and enforce policies (allow, step-up verify, deny) instantly. Provide explainability via reason codes and feature values to support auditing and rapid tuning, preserving user flow while filtering spam.

Acceptance Criteria
Real-time Score on QR Scan
Given a valid QR scan event containing contact_phone, device_fingerprint, ip_address, mls_id, listing_id, and brokerage_id When the scoring engine processes the event Then it returns a numeric risk_score between 0 and 100 And it returns a policy_decision in {allow, step_up_verify, deny} And it returns at least one human_readable reason_code And it attaches evaluated feature_values for all signals used in the decision And the synchronous response is produced in the same request context
Latency Budget Compliance
Given a controlled load test of 10,000 QR scan events at 100 requests/second with mixed signal profiles When measuring end-to-end decision latency from ingress to policy_decision response at the service boundary Then the P95 latency is <= 150 ms And the P99 latency is <= 250 ms And the error rate (5xx + timeouts) is <= 0.1%
Brokerage-Configurable Thresholds
Given brokerage A configured with thresholds allow<40, step_up_verify 40-69, deny>=70 And brokerage B configured with thresholds allow<70, step_up_verify 70-84, deny>=85 When both brokerages submit events that resolve to the same risk_score of 65 Then brokerage A receives policy_decision step_up_verify And brokerage B receives policy_decision allow When a threshold change is saved for a brokerage Then new decisions reflect the change within 60 seconds without service restart
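Per-brokerage banding is just two configured cut points applied to the same score. A sketch with hypothetical config values chosen so the same score of 65 diverges across tenants (one steps up at 40 and denies at 70, the other at 70/85):

```python
# Hypothetical per-brokerage policy config; band edges are illustrative
# and would be loaded from tenant settings in practice.
POLICIES = {
    "brokerage_a": {"step_up_at": 40, "deny_at": 70},
    "brokerage_b": {"step_up_at": 70, "deny_at": 85},
}

def policy_decision(brokerage_id: str, risk_score: int) -> str:
    """Map a 0-100 risk score to allow / step_up_verify / deny."""
    p = POLICIES[brokerage_id]
    if risk_score >= p["deny_at"]:
        return "deny"
    if risk_score >= p["step_up_at"]:
        return "step_up_verify"
    return "allow"

policy_decision("brokerage_a", 65)  # 'step_up_verify'
policy_decision("brokerage_b", 65)  # 'allow'
```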
Event Bus Publication
Given a decision is produced for an event with correlation_id When the decision is returned Then a message is published to the event bus within 50 ms And the message payload includes id, timestamp_utc, correlation_id, brokerage_id, entity_type, entity_id, risk_score, policy_decision, reason_codes[], feature_values[], model_version And the message conforms to the documented schema and passes validation And duplicate publish attempts preserve idempotency via id
Explainability and Auditability
Given a decision has been made for an event When an internal auditor queries the decision by correlation_id Then the response includes the same risk_score, policy_decision, reason_codes[], and feature_values[] used at decision time And reason_codes are human-readable from a controlled vocabulary And feature_values include raw and normalized values where applicable And the explanation record is retained and retrievable for at least 90 days
Signal Integration and Reason Codes
Given test events crafted to trigger each signal individually and in combination When the scoring engine processes these events Then contact reuse frequency above configured threshold emits reason_code high_contact_reuse And scan velocity exceeding configured threshold emits reason_code high_scan_velocity And device fingerprint reuse across distinct identities emits reason_code device_fingerprint_match And low IP reputation emits reason_code low_ip_reputation And roster/MLS incongruence emits reason_code roster_mismatch And combined signals produce an aggregate risk_score reflecting configured weights
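A minimal sketch of the weighted aggregation: each fired signal contributes its configured weight to the score and its reason code to the explanation. The weights here are illustrative, not the engine's real configuration:

```python
# Illustrative per-signal weights; real weights are configured per brokerage.
WEIGHTS = {
    "high_contact_reuse": 25,
    "high_scan_velocity": 25,
    "device_fingerprint_match": 20,
    "low_ip_reputation": 15,
    "roster_mismatch": 15,
}

def aggregate(fired_reason_codes):
    """Sum the weights of fired signals, capped at 100, and return the
    score together with the sorted reason codes for explainability."""
    total = sum(WEIGHTS[code] for code in fired_reason_codes)
    return min(total, 100), sorted(fired_reason_codes)

aggregate({"high_contact_reuse", "roster_mismatch"})  # (40, [...])
```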
Policy Enforcement on Showing Request
Given a showing request is submitted via the app by a buyer agent When the risk_score result is allow Then the request proceeds with no additional friction When the result is step_up_verify Then SMS or equivalent verification is triggered immediately and the request proceeds only upon successful verification When the result is deny Then the request is blocked and a non-intrusive, generic error is returned without exposing sensitive risk details
Multi-Signal Data Ingestion & Normalization
"As a compliance-focused broker-owner, I want Risk Guard to draw from all relevant signals while safeguarding PII so that risk judgments are accurate and privacy obligations are met."
Description

Ingest and normalize all inputs required for risk evaluation: phone numbers (E.164), emails, device fingerprints, IP addresses, QR scan timestamps, listing context, and brokerage roster/MLS records. Enrich contacts with carrier/type and disposable-email indicators; geo-resolve IPs and detect proxies/VPNs. Standardize schemas, deduplicate records, and validate formats prior to scoring. Apply PII safeguards (encryption at rest/in transit, scoped access, retention policies) and consent tracking. Expose a versioned internal API and event stream for scoring, analytics, and audit subsystems.

Acceptance Criteria
Phone Number Normalization, Enrichment, and Deduplication
- Given raw phone numbers from showings, rosters, and QR scans When they are ingested Then each number is validated and normalized to E.164; invalid inputs are rejected with reason codes (e.g., invalid country code, too short/long) - Given a valid number lacking an explicit country When listing context provides country Then the number is normalized using that country; if ambiguous, record is flagged for review and not used for scoring - Given multiple records with the same normalized number from any source When deduplication runs Then a single canonical contact-number entity is stored with all source references preserved - Given a normalized number When enrichment runs Then carrier name and line type (mobile, landline, VoIP, toll-free) are added with p95 lookup latency <= 500 ms and error rate < 1% - Given any ingestion error When it occurs Then it is logged with trace ID and retryable errors are retried up to 3 times with exponential backoff; non-retryable errors are quarantined - Given the scoring subsystem When it requests a number by contact ID Then the normalized+enriched record is returned with 99.9% monthly availability
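For illustration only, here is a deliberately naive E.164 normalizer covering the happy paths above; production code should use a full parsing library such as libphonenumber, which handles country metadata and validation properly:

```python
import re

def normalize_e164(raw, default_country_code="1"):
    """Naive E.164 normalization sketch; returns None on failure, where
    the real pipeline would attach a reason code instead."""
    digits = re.sub(r"\D", "", raw)
    if raw.strip().startswith("+"):
        candidate = digits
    elif len(digits) == 10:
        # Assume a national number and prepend the context country code.
        candidate = default_country_code + digits
    elif digits.startswith(default_country_code) and len(digits) == 11:
        candidate = digits
    else:
        return None
    # E.164 allows 8-15 digits in total.
    return "+" + candidate if 8 <= len(candidate) <= 15 else None

normalize_e164("+1 (415) 555-0123")  # '+14155550123'
normalize_e164("415-555-0123")       # '+14155550123'
```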
Email Validation, Normalization, and Disposable Detection
- Given raw emails from QR scans and rosters When ingested Then addresses are trimmed, Unicode-NFC normalized, and domains lowercased and punycoded; inputs failing RFC 5322 syntax are rejected with reason codes - Given an email domain When MX or A DNS records are missing Then the address is flagged invalid; DNS lookups use cache with TTL <= 24h; p95 lookup latency <= 300 ms - Given two emails that are equal after normalization (domain case-insensitive, Unicode-normalized) When deduplication runs Then only one canonical email record is stored with merged source references - Given an email domain on the disposable/provider blacklist updated daily When ingested Then disposable_email = true is set; detection precision >= 98% against test corpus - Given ingestion under load of 100 RPS When processing Then p95 end-to-end normalization latency per email <= 50 ms
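The trim/NFC/punycode steps can be sketched with the standard library; full RFC 5322 syntax validation, MX lookups, and disposable-domain checks are out of scope here:

```python
import unicodedata

def normalize_email(addr):
    """Trim, NFC-normalize, and lowercase + punycode the domain.
    Returns None on malformed input (the pipeline would reject with a
    reason code). The local part's case is preserved."""
    addr = unicodedata.normalize("NFC", addr.strip())
    local, sep, domain = addr.partition("@")
    if not sep or not local or not domain:
        return None
    try:
        # Python's built-in idna codec performs per-label ToASCII.
        domain = domain.lower().encode("idna").decode("ascii")
    except UnicodeError:
        return None
    return f"{local}@{domain}"

normalize_email("Alice@Bücher.example")  # 'Alice@xn--bcher-kva.example'
```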
IP Geo-Resolution and Proxy/VPN Detection
- Given IPv4 or IPv6 addresses from client events When ingested Then format is validated; private/reserved ranges are tagged and excluded from geolocation - Given a routable IP When geolocation runs Then country, region, city, and timezone are resolved using the current geo DB; p95 lookup latency <= 20 ms - Given an IP When proxy/VPN detection runs Then proxy_vpn_flag and reason (hosting ASN, known exit node, anonymizer) are set; detection TTL = 24h and refreshed on access - Given updates to the geo/detection databases When applied Then changes are versioned and auditable; rollbacks complete within 10 minutes - Given scoring When requesting IP context Then the most recent resolution within 24h is returned or explicitly marked stale
Event Signals Ingestion: QR Scans and Device Fingerprints
- Given a QR scan event from any device When received Then an event ID is generated, timestamp captured in UTC ISO-8601, and clock skew vs server kept <= 100 ms via NTP - Given duplicate delivery of the same event ID When processed Then ingestion is idempotent and no duplicate records are created - Given multiple events for the same device and listing When stored Then events are ordered by timestamp and queryable in order; p95 ingest-to-availability latency <= 2 s - Given device attributes (UA, canvas, IP, etc.) When fingerprinting Then a salted SHA-256 fingerprint is produced; raw high-entropy attributes are not persisted; fingerprint TTL extends on activity up to 30 days - Given a 1M-event load test When executed Then fingerprint collision rate <= 0.5% and ingestion error rate <= 0.1%
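The salted-fingerprint requirement can be sketched as a keyed hash (HMAC-SHA-256) over a canonical serialization of the attributes, so only the digest is ever persisted. Attribute names here are illustrative:

```python
import hashlib
import hmac

def device_fingerprint(attrs, salt):
    """Salted SHA-256 fingerprint over sorted device attributes.
    Sorting keys makes the digest independent of dict insertion order;
    the raw high-entropy values are hashed, never stored."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hmac.new(salt, canonical.encode("utf-8"), hashlib.sha256).hexdigest()

fp = device_fingerprint({"ua": "Mozilla/5.0", "tz": "UTC-5"}, b"rotate-me")
# 64-char hex digest; same attrs + same salt -> same fingerprint
```

Rotating the salt invalidates all prior fingerprints at once, which pairs naturally with the 30-day TTL above.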
Roster/MLS Data Ingestion and Listing Context Linking
- Given broker roster and MLS feeds When fetched Then records are parsed into the standard schema (agent_id/licence_id, name, brokerage, email, phone, listing_id) with schema version recorded - Given multiple roster entries for the same agent When deduplication runs Then one canonical agent is kept using licence_id as primary key and email/phone as secondary keys - Given a QR scan claiming an agent identity When cross-checked Then mismatches to roster (unknown agent, broker mismatch, expired licence) are flagged with reason codes within 1 s p95 - Given daily feed updates When applied Then upserts and soft-deletes are processed; stale records older than 30 days are archived - Given failures during feed ingestion When they occur Then processing halts for the offending batch, errors are logged with trace IDs, and reprocessing is supported via replay
PII Safeguards, Consent Tracking, and Retention
- Given any PII at rest When stored Then it is encrypted with AES-256-GCM via managed KMS; keys are rotated at least every 90 days; in transit uses TLS 1.2+ with modern ciphers - Given access to PII When requested Then RBAC enforces least privilege; access is logged with user/service ID, purpose, and timestamp; access logs are retained for 1 year - Given consent collection events When captured Then consent type, scope, source, and timestamp are stored per contact; processing that requires consent is blocked if consent is missing or revoked - Given data retention policies When applied Then raw identifiers (phone, email, IP) are retained for 365 days from last activity or 90 days post-listing close, whichever is sooner; purges complete within 7 days of expiry and are auditable - Given a verified data deletion request When received Then all related PII is deleted or irreversibly pseudonymized within 30 days; completion is recorded and exportable
Versioned Internal API and Event Stream Exposure
- Given internal consumers (scoring, analytics, audit) When calling the API Then endpoints are versioned (e.g., /v1), JSON schemas are published, and backward compatibility is maintained for one minor version; contract tests pass - Given normal load of 100 RPS with 2x burst for 60 s When handled Then API p95 latency <= 200 ms and uptime >= 99.9% monthly; 429s include Retry-After headers - Given authentication requirements When enforced Then requests use mTLS or signed service tokens with scoped permissions; unauthorized requests are rejected with 401/403 and audited - Given event publishing When an ingest completes Then a message is emitted to risk.ingest.v1 with at-least-once delivery, partitioned by contact_id; p95 publish latency <= 1 s; DLQ exists for failures - Given observability When operating Then request tracing, correlation IDs, and metrics (QPS, error rate, p95 latency) are emitted; on-call dashboards and alerts exist for SLO breaches
Duplicate Contact Detection
"As a listing agent, I want the system to flag repeated phone or email use across my listings so that I can avoid spam and focus follow-up on qualified buyer agents."
Description

Detect and score repeated use of the same or similar phone numbers/emails across showings and listings within configurable time windows. Use exact and fuzzy matching (Levenshtein distance, vanity-number normalization, common email aliasing) to link related identities. Elevate risk when duplicates occur across multiple listings or offices, and downgrade it when the contact appears on approved partner rosters. Emit reason codes (e.g., DUPLICATE_NUMBER_24H) and expose counts/timestamps for downstream policies and review.

Acceptance Criteria
Exact Duplicate Phone Within 24 Hours
Given a showing request includes phone "+1 (415) 555-0123" and an existing contact with phone "+14155550123" exists on any listing in the last 24 hours (based on event_timestamp) When the detection service normalizes numbers to E.164 and processes the new event Then it emits reason_code="DUPLICATE_NUMBER_24H" And increments metrics.duplicate_phone.count by 1 and sets metrics.duplicate_phone.last_seen to the new event_timestamp And increases risk_score by duplicate_phone.delta (default 15) And writes an audit event containing contact_id, listing_id, normalized_phone, reason_code, event_timestamp
Fuzzy Match: Vanity and Edit Distance Phone Detection (7 Days)
Given a new event with phone "1-800-FLOWERS" and a historical phone "18003569377" within the last 7 days, or two normalized numbers with Levenshtein distance <= 1 after E.164 normalization When detection runs with fuzzy_phone.enabled=true Then it links the two under the same contact_identity And emits reason_code="DUPLICATE_PHONE_FUZZY_7D" And increases risk_score by fuzzy_phone.delta (default 10) And updates metrics.duplicate_phone_fuzzy.count and metrics.duplicate_phone_fuzzy.last_seen
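Both linking mechanisms in this criterion can be sketched directly: keypad mapping turns a vanity number into digits, and a classic dynamic-programming edit distance decides whether two normalized numbers are within distance 1:

```python
# Standard phone-keypad letter -> digit map, used to normalize vanity
# numbers such as 1-800-FLOWERS before comparison.
KEYPAD = {c: d for d, letters in {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ"}.items() for c in letters}

def devanity(raw):
    """Map keypad letters to digits and drop punctuation."""
    return "".join(KEYPAD.get(c, c) for c in raw.upper() if c.isalnum())

def levenshtein(a, b):
    """Row-by-row DP edit distance; a result <= 1 links two numbers here."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

devanity("1-800-FLOWERS")  # '18003569377'
```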
Email Alias Normalization Across Listings (30 Days)
Given two showing requests within the last 30 days with emails "alice.smith+tour@gmail.com" and "alicesmith@gmail.com" and provider gmail.com is in dot_insensitive_providers When normalization removes plus-tags and dots per provider rules and lowercases the local-part and domain Then the emails are treated as the same canonical_email And reason_code="DUPLICATE_EMAIL_ALIAS_30D" is emitted And risk_score increases by duplicate_email.delta (default 12) And metrics.duplicate_email.count increments and metrics.duplicate_email.last_seen updates
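The alias-normalization rules above (lowercasing, plus-tag stripping, provider-specific dot removal) can be sketched as a single canonicalization function; in practice the provider list would be configuration-driven:

```python
def canonical_email(addr: str,
                    dot_insensitive_providers=frozenset({"gmail.com"})) -> str:
    """Canonicalize an email per the alias rules: lowercase local part and
    domain, strip the +tag, and remove dots for providers that ignore them."""
    local, _, domain = addr.lower().partition("@")
    local = local.split("+", 1)[0]         # drop plus-tag aliases
    if domain in dot_insensitive_providers:
        local = local.replace(".", "")      # e.g. gmail ignores dots
    return f"{local}@{domain}"
```

Both example addresses canonicalize to "alicesmith@gmail.com", which is what makes them the same canonical_email for the 30-day window check.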
Cross-Listing and Cross-Office Duplicate Elevation
Given duplicate contact events exist on at least 2 distinct listing_ids and at least 2 distinct office_ids within the last 14 days When the latest event is processed Then it emits reason_code="DUPLICATE_MULTI_OFFICE_14D" And increases risk_score by cross_office.delta (default 20) in addition to any base duplicate deltas And includes metadata.listing_ids_count >= 2 and metadata.office_ids_count >= 2 in the audit event
Approved Partner Roster Downgrade
Given the contact identity matches an entry on the approved_partner_roster for the relevant office at processing time When a duplicate phone or email reason would be emitted Then the system still emits the duplicate reason_code and also emits "ROSTER_DOWNGRADE_APPLIED" And the net risk_score delta for this event is reduced to 0 And the audit event includes roster_source, roster_id, and roster_version
API Exposure of Duplicate Metrics and Reason Codes
Given a client calls GET /risk/events?contact_id={id}&time_window=30d When duplicate events exist for that contact within the time window Then the response status is 200 and p95 latency <= 300 ms for payloads <= 100 events And response.body.reason_codes is an array of objects with fields code (string), count (integer), last_seen (ISO-8601 UTC) And response.body includes normalized identifiers (canonical_email, normalized_phone) when available And response schema matches the published OpenAPI specification version in X-Schema-Version header
Configurable Time Window Overrides Take Effect Without Downtime
Given an admin updates duplicate_phone.window_hours from 24 to 12 via the configuration service When a new event is processed after the change is saved Then duplicate detection uses a 12-hour window And the emitted reason_code is "DUPLICATE_NUMBER_12H" And events older than 12 hours are excluded from duplicate counts And the change is applied within 5 minutes and is captured in the configuration audit log
Rapid-Scan Rate Limiting
"As a listing agent, I want rapid-fire scans automatically throttled so that automated spam cannot disrupt scheduling while real buyers proceed smoothly."
Description

Identify rapid-fire QR scans and request bursts from the same device, IP, or subnet using sliding-window counters and anomaly detection. Apply adaptive throttles and lightweight challenges (e.g., CAPTCHA) when velocity exceeds policy thresholds, with per-listing and per-office controls. Ensure legitimate group tours are not blocked by granting short grace windows and applying pooling logic for shared networks. Surface reason codes and metrics for tuning.

Acceptance Criteria
Sliding-Window Rate Detection per Device/IP/Subnet
Given a device fingerprint sends more than 6 scan or verification requests for the same listing within a sliding window of 10 seconds, when the 7th request arrives, then the system returns HTTP 429 and rate-limits that device for 60 seconds. Given an IP address sends more than 25 requests across any listings within a sliding window of 60 seconds, when the 26th request arrives, then the system returns HTTP 429 and rate-limits that IP for 120 seconds. Given a /24 subnet sends more than 150 requests across any listings within a sliding window of 60 seconds, when the 151st request arrives, then the system returns HTTP 429 and rate-limits that subnet for 300 seconds. Given requests are distributed around a minute boundary, when totals across any 60-second sliding window exceed thresholds, then enforcement still occurs without fixed-bucket bypass.
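A true sliding-window counter, as required by the "no fixed-bucket bypass" clause above, can be sketched by retaining the timestamps of recent requests per key. The thresholds below mirror the device rule (6 requests per 10 seconds); the class and field names are illustrative:

```python
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Sliding-window limiter: keeps timestamps of recent requests per key
    and rejects once the count inside the trailing window reaches the
    limit. Because the window slides continuously, bursts straddling a
    minute boundary cannot bypass enforcement."""

    def __init__(self, limit: int = 6, window_s: float = 10.0):
        self.limit, self.window_s = limit, window_s
        self.events: dict[str, deque] = defaultdict(deque)

    def allow(self, key: str, now: float) -> bool:
        q = self.events[key]
        while q and now - q[0] >= self.window_s:   # evict events outside window
            q.popleft()
        if len(q) >= self.limit:
            return False                            # this is request N+1
        q.append(now)
        return True
```

Timestamp deques are simple and exact at these volumes; at higher scale the same semantics are usually approximated with bucketed counters in a shared store such as Redis.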
Adaptive Throttling and Lightweight Challenge
Given any rate limit is triggered for a device, when the device presents a valid CAPTCHA response within 2 minutes, then the device receives a temporary elevated quota of 10 requests per minute for 15 minutes for that listing. Given a device fails CAPTCHA 3 times within 5 minutes, when further requests arrive, then the system denies with HTTP 429 and Retry-After of 300 seconds. Given a device completes CAPTCHA successfully, when subsequent requests remain under 10 rpm for 15 minutes, then no further challenges are presented. Given a device is under active deny, when the deny window elapses, then the next request is evaluated fresh against current counters and anomaly state.
Group Tour Grace Window and Shared-Network Pooling
Given 5 or more unique device fingerprints from the same IP scan the same listing within 5 minutes, when their combined request rate is ≤100 requests per 5 minutes, then no throttles or challenges are issued to that IP for 10 minutes. Given an individual device new to a listing, when it submits its first 2 scan or verification requests within 30 seconds, then those requests are exempt from rate limiting. Given the pooled group rate for the IP exceeds 100 requests per 5 minutes during the 10-minute grace period, when the next request arrives, then adaptive challenge is triggered for devices exceeding 6 requests per 10 seconds and IP-level throttling is applied.
Per-Listing and Per-Office Policy Controls
Given an admin sets per-listing thresholds via the policy API, when saved, then the new thresholds override office and global defaults for that listing within 2 minutes. Given an admin sets per-office thresholds, when saved, then the new thresholds apply to all listings in that office within 2 minutes unless a listing override exists. Given conflicting policies exist, when evaluating rate limits, then precedence is listing > office > global and the effective policyId is returned in enforcement logs. Given a policy change is made, when auditing settings, then an immutable audit entry with actor, timestamp (UTC), old value, and new value is recorded.
Reason Codes and Metrics Surfacing
Given any throttle or challenge decision occurs, when logging, then a reasonCode in {RATE_DEVICE, RATE_IP, RATE_SUBNET, ANOMALY_SPIKE, CHALLENGE_PASS, CHALLENGE_FAIL} and policyId are emitted to structured logs. Given decisions are emitted, when viewing the Risk Guard dashboard, then per-listing and per-office daily counts of throttles, challenges, pass rate, and appeal outcomes are visible with 95% of events appearing within 60 seconds. Given metrics are recorded, when exporting via API, then CSV and JSON exports include timestamp, listingId, officeId, networkIdentity, reasonCode, action, and duration.
Anomaly Detection Velocity Spike
Given a device/IP/listing tuple has a 7-day baseline 95th percentile rate R (requests per 30 seconds) with at least 200 events in the baseline, when the current 30-second rate exceeds max(3×R, 20 events) and static thresholds have not fired, then an ANOMALY_SPIKE challenge is issued. Given a tuple lacks sufficient baseline volume (fewer than 200 events), when evaluating anomaly detection, then anomaly detection is skipped and only static thresholds apply. Given an anomaly challenge is issued and passed, when subsequent requests occur, then the device receives a 15-minute anomaly allowlist unless static thresholds are exceeded.
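The spike condition above reduces to a small guard function: skip tuples without enough baseline history, then compare the current 30-second rate against max(3 x baseline p95, 20 events). A minimal sketch with the defaults from the criterion:

```python
def anomaly_spike(current_rate: float, baseline_p95: float,
                  baseline_events: int,
                  min_baseline: int = 200,
                  multiplier: float = 3.0,
                  floor: float = 20.0) -> bool:
    """Velocity-spike check: fire only when the tuple has sufficient
    baseline volume and the current 30-second rate exceeds
    max(multiplier * baseline_p95, floor)."""
    if baseline_events < min_baseline:
        return False   # insufficient baseline: static thresholds only
    return current_rate > max(multiplier * baseline_p95, floor)
```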
Client/API Behavior and Accessibility Under Throttle
Given a request is throttled, when responding, then HTTP 429 includes headers Retry-After (seconds) and X-RateLimit-Reason, and a JSON body with {reasonCode, policyId, retryAfterSeconds, challengeUrl}. Given a CAPTCHA is presented, when used via web or mobile, then the challenge is keyboard navigable and provides audio alternatives meeting WCAG 2.1 AA. Given the rate-limit decision service is invoked, when under normal load (p95), then decision latency is ≤50 ms and challenge verification p95 is ≤1.5 seconds. Given a client passes a challenge, when within the allow period, then server responses include X-RiskGuard-Status: allowlisted and no CAPTCHA is re-presented unless static thresholds are breached.
Roster Mismatch Verification
"As a broker-owner, I want mismatches between claimed identity and roster/MLS data flagged automatically so that only legitimate buyer agents can book showings."
Description

Cross-check claimed buyer-agent identity against brokerage roster and MLS/association records at scan time. Validate license status, office affiliation, and primary contact details; detect mismatches or stale data and increase risk accordingly. Support partial matches and confidence scoring to minimize false positives. Provide configurable policies to auto-allow exact matches, step-up verify partial matches, and deny known bad actors.

Acceptance Criteria
Exact Match Auto-Allow at Scan Time
Given a QR scan with claimed agent identity And current roster and MLS records are available When the system compares license number, full legal name, brokerage affiliation, and primary phone or email And all fields exactly match And MLS license status = Active Then confidence_score >= 0.95 And risk_score <= 0.20 And action = Allow (no step-up) And decision_time_p95 <= 1000 ms And an audit log entry records matched fields, source timestamps, confidence_score, risk_score, and action
Partial Match Triggers Step-Up Verification
Given a QR scan with claimed agent identity And records show matching legal name and brokerage but a mismatch in phone or email, or a normalized name alias match When the system computes confidence_score And 0.60 <= confidence_score < 0.95 And MLS license status = Active Then action = Step-Up Verification via OTP to roster/MLS-verified contact And access remains blocked until verification succeeds And on successful verification within 10 minutes, action = Allow and risk_score <= 0.40 And on 3 failed attempts or timeout, action = Deny with reason = "identity_unverified" And audit log includes OTP channel, attempts, and outcome
Known Bad Actor or Inactive License Denial
Given a QR scan with claimed agent identity When the identity matches a watchlist entry by license number, MLS ID, or hashed phone Or MLS/association license status ∈ {Suspended, Revoked, Expired} Then action = Deny And risk_score >= 0.90 And decision_time_p95 <= 500 ms And an admin alert is emitted with watchlist_id and reason And an audit log entry is created
Real-Time License Status Check with Cache Fallback
Given MLS/association license APIs are reachable When a scan is evaluated Then license status is fetched in real time or from cache_age_hours <= 24 And if API is unavailable, fall back to cached status where cache_age_hours <= 24 and set decision_reason = "stale_license_check" And if no cached status is available, action = Step-Up Verification And all external calls timeout at 1500 ms with graceful degradation
Configurable Policy Thresholds and Actions
Given an org admin updates policy thresholds and actions When changes are saved Then per-tenant settings for confidence_score thresholds and actions are persisted: auto_allow >= 0.95, step_up in [0.60, 0.95), deny < 0.60 by default And changes take effect for new scans within 60 seconds without deployment And settings changes are versioned with actor_id, old_value, new_value, and timestamp And reverting to defaults is available and takes effect within 60 seconds
Stale or Missing Roster Data Handling
Given roster or MLS data source timestamp_age_hours > 24 for the claimed brokerage or agent When a scan is evaluated Then data_stale = true is recorded And if confidence_score >= 0.60 and license status unknown, action = Step-Up Verification And if confidence_score < 0.40, action = Deny with reason = "low_confidence_stale_data" And a background sync for the affected source is queued within 60 seconds And audit log includes source, timestamp_age_hours, and queued_sync_id
Step-up Verification Orchestration
"As a listing agent, I want risky scans to trigger the right verification steps automatically so that legitimate buyer agents pass quickly and bad actors are stopped without manual vetting."
Description

Automatically select and execute verification methods based on risk band, context, and brokerage policy, including SMS OTP, voice callback, email magic link, and MLS SSO where available. Manage retries, timeouts, fallbacks, and rate limits; bind successful verifications to device profiles for future trust. Expose a status webhook to the scheduling flow to unblock upon success and route to manual review after repeated failures. Track completion times and abandonment to minimize friction for low-risk users while blocking high-risk attempts.

Acceptance Criteria
Risk-Band Method Selection
Given a verification request with risk_band in {low, medium, high, critical}, available_methods, device_trust state, and brokerage_policy When the orchestrator selects a verification method Then - If risk_band=low AND brokerage_policy.allows_bypass=true AND device_trust=trusted within 30 days, return decision "no_verification_required" within 100 ms - Else if MLS SSO is available AND brokerage_policy.preferred_method="mls_sso", select mls_sso - Else if risk_band in {medium, high} AND sms is available, select sms_otp - Else if risk_band in {high, critical} AND voice is available, select voice_callback - Else if email is available, select email_magic_link - Else return decision "manual_review" - Emit method_selected event with correlation_id and selected_method
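The selection rules above form an ordered decision chain, which can be sketched directly; the policy field names (allows_bypass, preferred_method) come from the criterion, while the function signature is illustrative:

```python
def select_method(risk_band: str, available: set[str],
                  device_trusted: bool, policy: dict) -> str:
    """Ordered method selection mirroring the risk-band rules above."""
    if risk_band == "low" and policy.get("allows_bypass") and device_trusted:
        return "no_verification_required"
    if "mls_sso" in available and policy.get("preferred_method") == "mls_sso":
        return "mls_sso"
    if risk_band in {"medium", "high"} and "sms" in available:
        return "sms_otp"
    if risk_band in {"high", "critical"} and "voice" in available:
        return "voice_callback"
    if "email" in available:
        return "email_magic_link"
    return "manual_review"
```

Keeping the rules as a flat ordered chain (rather than a scoring matrix) makes the precedence auditable: the first matching branch is the emitted selected_method.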
Retry, Timeout, and Fallback Orchestration
Given a selected primary verification method for a correlation_id When delivery fails or the user does not complete the challenge Then - For sms_otp: max_attempts=3 per flow; otp_length=6 digits; otp_ttl=5 minutes; inter_attempt_interval>=30 seconds; on 2 consecutive failures or undeliverable, fallback to voice_callback if available - For voice_callback: max_attempts=2 per flow; answer_timeout=30 seconds; on voicemail/no answer twice, fallback to email_magic_link if available - For email_magic_link: issue 1 link per flow; link_ttl=15 minutes; one-time use; if link expired or bounced, route to manual_review - Total combined challenge attempts across methods per flow <=5; on exceed, emit routed_to_manual and end flow
MLS SSO Precedence by Brokerage Policy
Given MLS SSO is configured for the listing MLS and the user's roster record is found When brokerage_policy.preferred_method="mls_sso" Then - Present MLS SSO as the first and only prompt - Initiate SSO with a 90-second session timeout and correlation_id - On successful assertion with matching license_id, emit success and mark verification complete with assurance_level="high" - On SSO cancel/error, resume the fallback chain at the next eligible method without resetting prior attempt counters - Record SSO outcome and timestamps in audit log
Status Webhook and Unblock Semantics
Given the orchestrator processes a verification flow for correlation_id When state changes occur Then - Send POST to the configured webhook within 1 second per event with at-least-once delivery and exponential retry for up to 24 hours on 5xx - Include: event_type in {verification_started, method_selected, challenge_sent, success, failure, locked_out, routed_to_manual, abandoned}, correlation_id, method, risk_band, attempt_count, occurred_at (ISO8601), and HMAC-SHA256 signature header - On success, set unblock=true to allow scheduling to proceed immediately - On routed_to_manual or locked_out, set unblock=false and route to manual review queue - Ensure idempotency via X-Event-Id; duplicate deliveries must be safely ignored by the consumer
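The HMAC-SHA256 signature required above can be sketched as follows. The exact header name and body canonicalization are deployment choices; serializing with sorted keys keeps the signature stable across producers:

```python
import hashlib, hmac, json

def sign_webhook(payload: dict, secret: bytes) -> str:
    """Compute the HMAC-SHA256 hex digest the webhook consumer verifies."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_webhook(payload: dict, secret: bytes, signature: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(sign_webhook(payload, secret), signature)
```

On the consumer side, verification should happen before the X-Event-Id idempotency check, so forged duplicates never reach the dedupe store.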
Device Trust Binding and Reuse
Given a successful verification and a derivable device_id When persisting outcome Then - Bind device_id to agent_identity with trust_ttl=30 days and store last_verified_at - On subsequent requests from the same device_id and agent_identity within ttl and risk_band in {low, medium}, bypass verification and emit success with reason="trusted_device" - For risk_band in {high, critical}, require full verification regardless of trust - Provide an endpoint to revoke device trust; post-revocation, require step-up on next request - Persist only hashed phone/email identifiers and device fingerprint; no plaintext secrets
Rate Limiting and Abuse Controls
Given inbound challenge sends across phone_number, device_id, and ip_address When evaluating limits Then - Enforce: phone_number<=5 OTP sends per 60 minutes; device_id<=3 OTP sends per 15 minutes; ip_address<=10 OTP sends per 15 minutes; on violation, return 429 and emit locked_out - Enforce inter_send_interval>=30 seconds per flow - If sms/voice are rate-limited, still offer MLS SSO where available - If duplicate phone_number is used across >3 identities in 24 hours, escalate risk_band to high for subsequent attempts - If >5 QR scans from the same device_id occur within 2 minutes for the same listing, require step-up regardless of current band
Completion Time and Abandonment Tracking
Given a verification flow begins for correlation_id When capturing operational metrics Then - Record start_at, first_challenge_at, first_success_at, end_at, method, and risk_band - Mark abandoned if no user interaction occurs for 120 seconds after the last challenge; emit abandoned event - Achieve median time_to_verify<=45 seconds for medium-risk sms_otp flows over trailing 7 days; alert if exceeded for 2 consecutive days - Ensure P95 time_to_unblock<=100 ms for low-risk bypass decisions - Persist daily aggregates by method and risk_band for analytics
Admin Risk Console & Audit Trail
"As a brokerage admin, I want a centralized view and audit history of risk decisions so that I can resolve false positives, fine-tune policies, and meet compliance obligations."
Description

Provide a role-based console for brokers and admins to review flagged events, view reason codes and feature values, approve/deny or whitelist entities, and export incidents. Maintain an immutable audit trail of inputs, scores, decisions, and verifications for at least 18 months with search and filter by listing, office, user, and time range. Include alerting for spikes in risk events and tools to adjust thresholds safely with preview/simulation before deployment.

Acceptance Criteria
Role-Based Access to Risk Console
Given an authenticated user with role Admin or Broker-Owner When they navigate to /risk-console Then the console loads and displays flagged events and sensitive fields permitted for their role Given an authenticated user without Admin or Broker-Owner role When they navigate to /risk-console or call related APIs Then they receive HTTP 403 and no event data is returned, and the access attempt is logged in the audit trail
Review Flagged Events with Reason Codes
Given flagged risk events exist for the last 7 days When the console loads with default filters Then a paginated table shows for each event: timestamp (UTC), listing ID, office ID, agent/user identifier, risk score, reason codes, and key feature values used in scoring And sorting by timestamp and risk score is available and functional And the first page renders within 2 seconds for up to 5,000 events in range
Approve, Deny, or Whitelist Entities from Event Detail
Given an admin opens an event detail When they choose Approve or Deny, enter a required note (min 10 chars), and confirm Then the decision is recorded with user, timestamp, note, and event ID in the immutable audit trail, and the event status updates within 2 seconds Given an admin selects Whitelist with scope (listing, office, or global) and duration (e.g., 30/90/180 days) When they confirm Then future events from that entity within scope bypass risk gating and are labeled Whitelisted, and the whitelist entry with scope and expiry is auditable and revocable
Export Filtered Risk Incidents
Given filters are set (time range, listing(s), office(s), user(s), min score, reason code) When the user clicks Export CSV Then a CSV is generated within 30 seconds containing only rows that match the filters, with a header row and UTC timestamps, and includes fields: event ID, timestamp, listing, office, user, score, reason codes, feature values, decision, decisionedBy And the export action (filters, row count, requester) is written to the audit trail
Immutable Audit Trail with 18-Month Retention
Given any scoring event or admin action occurs When it is persisted Then the audit record contains: unique ID, createdAt (UTC), actor/service, payload summary, SHA-256 hash, and previousHash pointer, and cannot be updated or deleted via public APIs And records are queryable by listing, office, user, time range, event/decision ID And all records are retained for at least 18 months; purges beyond retention are logged with summary counts and hash of purged set And retrieving a record by ID returns the same hash as originally written
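The hash-chain structure above (each record carrying a SHA-256 hash and a previousHash pointer) can be sketched in a few lines. This illustrates the chaining only; real storage would enforce append-only semantics at the database layer:

```python
import hashlib, json

GENESIS = "0" * 64  # previousHash for the first record

def append_audit(chain: list[dict], record: dict) -> dict:
    """Append an audit entry whose hash covers the record plus the
    previous record's hash, so any later tampering breaks the chain."""
    previous_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(record, sort_keys=True) + previous_hash
    entry = {**record, "previousHash": previous_hash,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash from the stored payloads and links."""
    prev = GENESIS
    for e in chain:
        record = {k: v for k, v in e.items() if k not in ("hash", "previousHash")}
        body = json.dumps(record, sort_keys=True) + prev
        if e["previousHash"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True
```

This is also what makes the retention clause auditable: a purge beyond 18 months can log the hash of the purged prefix, leaving the remaining chain verifiable.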
Spike Alerting for Risk Events
Given spike alerting is configured with thresholds (e.g., ≥50 events/hour or ≥3x 7-day average) and channels (email, Slack) When the spike condition is met within a rolling window Then an alert is sent to configured channels within 2 minutes containing period, counts, delta vs baseline, top reason codes, and affected listings/offices And duplicate alerts for the same spike window are suppressed, with a single recovery notice when the metric returns to baseline And all alerts and suppressions are recorded in the audit trail
Threshold Adjustment with Simulation and Safe Deployment
Given an admin has permission to manage thresholds When they propose new thresholds and click Simulate (last 30 days) Then the system returns predicted metrics within 30 seconds: flagged rate, step-up triggers, estimated false-positive proxy, and impact by listing/office segment, without changing production behavior When the admin clicks Deploy and confirms the summary Then the new thresholds take effect within 1 minute, a changelog entry is recorded (before/after, user, timestamp, rationale), and an automatic rollback point is created And clicking Rollback restores the prior configuration within 60 seconds and records the action in the audit trail

Policy Profiles

Office-level controls define verification requirements by listing, team, or event type (open house vs. private tour). Built‑in override requests capture reason, approver, and duration, maintaining flexibility with a clean, reviewable audit trail.

Requirements

Policy Profile Model & Scoping
"As a broker-owner, I want to define standardized verification policies tailored by listing, team, and event type so that the right controls apply automatically without per-showing setup."
Description

Define a versioned data model for Policy Profiles with scoped application by office, team, listing, and event type (e.g., open house vs. private tour). Support configurable verification checks (e.g., ID verification, agent license validation, buyer pre‑approval, NDA, two‑factor), thresholds, and effective dates. Implement inheritance and conflict resolution precedence (event > listing > team > office default). Include cross‑office compatibility, default fallback profiles, cloning, and change history to ensure predictable, reproducible governance across all showings.

Acceptance Criteria
Versioned profile creation with effective dating and history
Given an office admin drafts a Policy Profile named "Office Default" with checks configured and effective_start=2025-09-01T00:00:00Z and no effective_end When the profile is saved Then the system assigns version "1.0", persists an immutable snapshot with a unique profile_id And a change_history entry is recorded with action=create, actor_id, UTC timestamp, and field diff And attempting to modify fields on an immutable version is rejected with HTTP 409 and error_code=PP-IMM-001 And saving a version whose effective window overlaps an existing version for the same scope and event_type is rejected with HTTP 409 and error_code=PP-EDATE-001 And querying the profile as_of any timestamp returns at most one active version
Inheritance precedence resolution across event, listing, team, and office
Given profiles exist: office default (scope=office O, event_type=any), team profile (scope=team T under O), listing profile (scope=listing L under T), and event-type profile (scope=listing L, event_type=open_house) When resolving the applicable policy for showing listing=L, event_type=open_house, within effective windows Then the selected profile is the event-type profile for L When resolving for listing=L, event_type=private_tour with no event-type profile but listing-level profile exists Then the selected profile is the listing-level profile When resolving for listing=L2 under team T with no listing-level profile Then the selected profile is the team-level profile When resolving for a listing under office O with no team- or listing-level profile Then the selected profile is the office default And the resolver returns provenance: selected_scope, selected_profile_id, selected_version
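The precedence walk described above (event > listing > team > office) can be sketched as a first-match scan over scopes, returning the provenance alongside the profile; the input shape is illustrative:

```python
PRECEDENCE = ("event", "listing", "team", "office")  # most to least specific

def resolve_profile(profiles: dict[str, dict]) -> tuple[str, dict]:
    """Pick the applicable profile per precedence, where `profiles` maps
    a scope name to the candidate active at that scope. Returns
    (selected_scope, profile) so callers can emit provenance."""
    for scope in PRECEDENCE:
        if scope in profiles:
            return scope, profiles[scope]
    raise LookupError("no profile at any scope; fall back to system default")
```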
Configurable verification checks and thresholds per event type
Given a profile defines for event_type=private_tour: id_verification=required, agent_license=required, buyer_preapproval={type=percent, threshold=90}, nda=optional, two_factor=required And the same profile defines for event_type=open_house: id_verification=optional, agent_license=required, buyer_preapproval=not_applicable, nda=optional, two_factor=optional When fetching the policy for event_type=private_tour Then the API returns required_checks matching the configuration including threshold type and value And attempting to set buyer_preapproval threshold outside 0..100 percent is rejected with HTTP 422 and error_code=PP-THRESH-001 And unknown check keys are rejected with HTTP 422 and error_code=PP-CHECK-002 And per-event_type overrides must be explicitly defined; absent definitions inherit from the broader scope
As-of effective policy resolution by showtime
Given profile versions v1 with effective_start=2025-09-01T00:00:00Z and effective_end=2025-10-01T00:00:00Z, and v2 with effective_start=2025-10-01T00:00:00Z and no effective_end When resolving policy for showtime=2025-09-15T12:00:00Z Then v1 is selected When resolving policy for showtime=2025-10-05T12:00:00Z Then v2 is selected And all comparisons are performed in UTC with ISO-8601 timestamps If no version is effective at the requested showtime, then the resolver selects the default fallback profile and sets provenance.reason="NO_ACTIVE_VERSION" If two versions overlap due to data error, the resolver selects the version with the latest effective_start prior to showtime and emits warning_code=PP-EDATE-OVERLAP
Cross-office compatibility for visiting agents and external validations
Given listing L belongs to Office A and the requesting agent belongs to Office B And Office A has a listing-level profile requiring agent_license=required and id_verification=required When resolving the applicable policy for a showing on L by the Office B agent Then the base profile is derived from Office A's scoping per precedence rules And the agent_license validation check uses standardized check key "agent_license" and queries the appropriate licensing authority for the agent's jurisdiction And only library-defined check keys are allowed; attempting to reference an office-specific custom key is rejected with HTTP 422 and error_code=PP-XOFF-001 And the resolver MUST NOT select any profile from Office B to supersede Office A's listing policy
Clone policy profile to new scope with provenance
Given an existing profile P scoped to office O with version "1.0" When a user selects Clone, sets target_scope=team T, and name="Team T Default" Then a new profile P' is created with profile_id != P.profile_id, version "1.0", and checks identical to P And change_history for P' contains entry action=clone, cloned_from=P.profile_id, actor_id, and UTC timestamp And P' has no effective_start until explicitly set; attempting to apply P' without an effective_start returns HTTP 409 and error_code=PP-EFF-REQ And saving P' returns HTTP 201 with a Location header referencing P'
Default fallback profile selection and auditability
Given no event-, listing-, team-, or office-level profiles exist for listing L at showtime S When resolving the applicable policy Then the system selects the system-default fallback profile D And the response includes provenance.selected_scope="system_default", selected_profile_id=D.profile_id, and selected_version And an audit log entry is recorded with reason="FALLBACK_NO_PROFILE" and a UTC timestamp
Real-time Scheduling & Check-in Enforcement
"As a listing agent, I want policy checks to run automatically during scheduling and door check-in so that only verified visitors proceed without me manually policing each step."
Description

Embed a policy evaluation layer into booking flows and QR check-in to dynamically determine the active profile and enforce required steps before confirmation or entry. Provide clear guidance and remediation (e.g., prompt for missing pre‑approval upload) and support branching by event type. Handle edge cases such as co‑listed properties, reassigned teams, and offline QR fallback, while logging each decision for traceability. Ensure low-latency evaluation with graceful degradation and localized messaging.

Acceptance Criteria
Dynamic Policy Selection in Booking Flow
Given a booking request contains listingId, teamId, eventType, and locale When the user starts a booking or changes eventType or teamId Then the system selects the active policy profile based on office-level rules and effective date And returns the required verification steps with stepId, type, mandatory flag, and rationale codes And disables confirmation until all mandatory steps are satisfied And re-evaluates the active profile immediately after each step completion or input change, updating required steps accordingly
QR Check-in Enforcement and Offline Fallback
Given a visitor scans a door-hanger QR during an event When policy evaluation succeeds online Then the system enforces mandatory steps and only issues a check-in token upon compliance And the token encodes attendeeId, eventId, and a policyVersion hash When the device is offline or the policy service exceeds a 1.5s timeout Then the system uses a cached last-known policy for the listing; if none exists, applies the listing’s configured fallback policy And issues a provisional check-in token flagged provisional=true and queues any unmet steps for later remediation And writes a local cache entry to sync within 60 seconds of connectivity restoration
Event-Type Branching with Remediation and Localized Guidance
Given an open house event type Then the required steps include at minimum contact verification and host acknowledgment; pre-approval is optional unless explicitly configured Given a private tour event type Then the required steps include government ID and proof of funds/pre-approval unless an approved override is active When a required artifact is missing Then the UI presents a remediation task (e.g., upload pre-approval) with clear acceptance requirements and progress state And after successful submission and validation, the policy re-evaluates and allows confirmation or entry And all user-facing messages render in the user’s locale with a fallback to English
Overrides with Reason, Approver, and Duration
Given a host initiates an override for a specific attendee, listing, or event When the approver with appropriate role reviews the request containing reason, scope, and duration Then the system enforces office caps on duration and scope and prevents self-approval if restricted And upon approval, the specified steps are bypassed only within the approved duration and scope And upon expiry or revoke, full enforcement resumes automatically And the override record stores requesterId, approverId, reason, scope, duration, timestamps, and outcome for audit
Co-Listed Properties and Team Reassignment
- Given a listing with multiple associated teams or agents with different policy profiles, When a booking or check-in evaluation occurs, Then the system resolves conflicts using configured precedence; if none is configured, the most restrictive profile is applied.
- And the selected profile is consistent across booking and check-in for the same attendee and event.
- When a team reassignment occurs after booking creation, Then the system re-evaluates the booking prior to the event, notifies affected parties of any new requirements, and blocks entry until new mandatory steps are met.
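The resolution rule for co-listed properties — configured precedence first, most-restrictive profile otherwise, with a deterministic tie-break so booking and check-in select the same profile — can be sketched in a few lines. This is an illustrative assumption about the logic, not TourEcho’s implementation; `profiles` here maps profile ids to their sets of required steps:

```python
def select_profile(profiles, precedence=None):
    """Pick the effective policy profile for a co-listed property.

    profiles: dict of profile_id -> set of required verification steps.
    precedence: optional ordered list of profile_ids; first match wins.
    With no precedence, the most restrictive profile (largest step set)
    is chosen; ties break alphabetically so the result is deterministic
    across booking and check-in.
    """
    if precedence:
        for pid in precedence:
            if pid in profiles:
                return pid
    # Sorting first makes max() return the same profile on every call.
    return max(sorted(profiles), key=lambda pid: len(profiles[pid]))
```

Re-running the same function at booking and at check-in with the same inputs is what keeps the two evaluations consistent.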
Decision Logging and Traceability
- Given any policy evaluation or override decision, When the decision is computed, Then the system writes an immutable audit event containing correlationId, listingId, teamId, eventType, attendeeId (if available), inputs, rules fired, selected profileId, required steps, outcome, and evaluator latency.
- And audit events are queryable by listingId, eventId, attendeeId, and time range within 5 minutes of occurrence.
- And document contents are not stored in audit; only metadata and secure file references are logged.
Performance Targets and Graceful Degradation
- Given a normal operating environment, Then policy evaluation completes within 250 ms p95 and 500 ms p99 for booking, and within 150 ms p95 and 300 ms p99 for QR check-in, per region.
- When the policy service misses the p95 target for 5 consecutive minutes or any single request exceeds a 1.5 s timeout, Then the system applies the configured fallback policy, marks the session as degraded in telemetry, informs the user of limited checks, and continues the flow.
- And all degradations and recoveries are captured as audit events with timestamps and reason codes.
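The degradation trigger — the p95 target missed for five consecutive one-minute windows — can be sketched as a check over recent latency windows. This is only a sketch under stated assumptions (one latency-sample list per minute, percentile computed with Python’s stdlib `statistics.quantiles`); a production system would use its monitoring stack’s percentile definition instead:

```python
import statistics

def p95(samples):
    """95th percentile of latency samples (ms), stdlib inclusive method."""
    return statistics.quantiles(samples, n=100, method="inclusive")[94]

def is_degraded(minute_windows, target_ms=250, consecutive=5):
    """True when the p95 target was missed for N consecutive 1-minute windows.

    minute_windows: list of per-minute latency sample lists, oldest first.
    """
    if len(minute_windows) < consecutive:
        return False  # not enough history to declare degradation
    return all(p95(w) > target_ms for w in minute_windows[-consecutive:])
```

On a `True` result the caller would switch to the configured fallback policy and emit the degradation audit event; a later `False` triggers the matching recovery event.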
Override Request Workflow
"As an office manager, I want a structured, time-bound override process with documented reasons and approvals so that we maintain flexibility without weakening our governance."
Description

Provide an in-app exception process that captures requester, reason, scope (person/event/listing), approver role, and time-bound duration. Support approval routing, comments, attachments, and single-use passes. Auto-expire overrides with reminders, allow revoke, and store all outcomes in the audit log. Surface override status inline in scheduling and check-in UIs and expose governance controls to limit who can request and approve within each office.

Acceptance Criteria
Office Governance Controls and Role Limits
- Given an office admin configures governance in the policy profile, When they define which roles can request overrides by scope (person/event/listing), Then only users in allowed roles see the “Request Override” control in relevant UIs and API endpoints.
- Given approver roles are defined per office, When a request is created, Then only users holding the configured approver role(s) in that office can approve or reject.
- Given a user belongs to multiple offices, When evaluating permissions, Then request/approval rights are determined by the listing’s office policy profile.
- Given governance settings are modified, When the admin saves changes, Then new limits apply to subsequent requests/approvals immediately and the change is written to the audit log.
- Given a user without permission attempts to create or approve via UI or API, When the action is submitted, Then the system returns HTTP 403 and no request is created or state changed.
Override Request Submission and Data Capture
- Given a user with permission initiates an override request, When they submit the form, Then the system requires and stores: requester ID, reason (minimum 10 characters), scope type (person/event/listing) with corresponding identifier, approver role, start datetime (defaults to now), and end datetime or duration.
- Given optional fields for comments and attachments, When provided, Then up to 5 attachments (PDF, JPG, PNG), each <= 10 MB, are accepted, virus‑scanned, and associated with the request.
- Given a required field is missing or invalid, When the user submits, Then the system blocks creation and displays inline validation messages identifying each offending field.
- Given a valid submission, When the request is created, Then it is assigned a unique ID, enters Pending state, and notifications are sent to the requester and eligible approver(s).
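The field-level validation rules above (reason length, scope enum, attachment count/type/size, end-or-duration) lend themselves to a single validator that returns per-field errors for inline display. A minimal sketch, assuming a plain-dict form payload — the field names are illustrative, not TourEcho’s schema:

```python
ALLOWED_TYPES = {"pdf", "jpg", "png"}
MAX_ATTACHMENTS = 5
MAX_SIZE = 10 * 1024 * 1024  # 10 MB per attachment

def validate_override_request(form):
    """Return a dict of field -> error message; empty dict means valid."""
    errors = {}
    if len(form.get("reason", "").strip()) < 10:
        errors["reason"] = "Reason must be at least 10 characters"
    if form.get("scope_type") not in {"person", "event", "listing"}:
        errors["scope_type"] = "Scope must be person, event, or listing"
    if not form.get("scope_id"):
        errors["scope_id"] = "Scope identifier is required"
    if not form.get("approver_role"):
        errors["approver_role"] = "Approver role is required"
    if not form.get("end_at") and not form.get("duration"):
        errors["end"] = "Provide an end datetime or a duration"
    attachments = form.get("attachments", [])
    if len(attachments) > MAX_ATTACHMENTS:
        errors["attachments"] = "At most 5 attachments are allowed"
    for a in attachments:
        if a["type"] not in ALLOWED_TYPES or a["size"] > MAX_SIZE:
            errors.setdefault("attachments", "PDF/JPG/PNG only, each <= 10 MB")
    return errors
```

Creation is blocked whenever the returned dict is non-empty, which maps directly onto the inline-validation criterion.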
Approval Routing and Decisioning
- Given a request in Pending state, When routing executes, Then it targets all users holding the configured approver role(s) for the listing’s office; any one eligible approver’s decision finalizes unless the office policy profile specifies multi‑approver, in which case all required approvals must be recorded.
- Given the requester also holds the approver role, When they view the request, Then the Approve/Reject actions are disabled and an explanatory message is shown.
- Given an approver reviews a request, When they approve or reject, Then a decision comment (minimum 5 characters) is required, optional attachments may be added, and the decision is timestamped.
- Given approval is recorded, When the decision is saved, Then the request transitions to Active for the defined scope and time window and notifications are sent.
- Given rejection is recorded, When the decision is saved, Then the request transitions to Rejected with the approver’s comment and notifications are sent.
Single‑Use Pass Issuance and Enforcement
- Given the requester selects Single‑Use Pass as the override type, When the request is approved, Then the system issues a one‑time token/QR code bound to the specified scope and validity window.
- Given a check‑in occurs, When the single‑use token is scanned or submitted, Then the system verifies it is unused, unexpired, and scope‑matched; if valid, it marks the token Consumed and allows check‑in.
- Given a token is expired or already consumed, When it is presented at check‑in, Then check‑in is denied with a message indicating the pass is expired or already used.
- Given a token is revoked prior to use, When it is presented, Then check‑in is denied and the attempt is logged.
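The single-use pass checks form a strict order: unknown, revoked, already consumed, expired, scope mismatch, then consume-and-allow. A minimal redemption sketch under assumed data shapes (an in-memory `store` of token records; real storage and atomicity concerns are out of scope here):

```python
from datetime import datetime, timezone

def redeem_pass(token, store, scope, now=None):
    """Validate and consume a single-use pass; returns (ok, reason)."""
    now = now or datetime.now(timezone.utc)
    record = store.get(token)
    if record is None:
        return False, "unknown"
    if record["status"] == "revoked":
        return False, "revoked"          # revocation denies and is logged
    if record["status"] == "consumed":
        return False, "already used"
    if not (record["valid_from"] <= now <= record["valid_until"]):
        return False, "expired"
    if record["scope"] != scope:
        return False, "scope mismatch"
    record["status"] = "consumed"        # mark before allowing check-in
    return True, "ok"
```

Marking the token Consumed before admitting the visitor is what prevents a second scan of the same QR from succeeding.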
Auto‑Expiry, Reminders, and Enforcement
- Given an Active override with an end datetime, When the current time is T‑24h (default) before expiry, Then reminder notifications are sent to the requester and approver (timing configurable per office).
- Given the current time exceeds the end datetime, When scheduling or check‑in validation occurs, Then the override is treated as invalid and not applied.
- Given an override specifies a duration instead of an explicit end datetime, When it is created, Then the system computes end datetime as start + duration and uses it for all validations and reminders.
- Given an event is scheduled that depends on an override set to expire before the event start, When the user attempts to finalize scheduling, Then the UI blocks the action and displays an expiry warning with a CTA to extend or request a new override.
Revocation by Approver or Admin
- Given an Active override, When an eligible approver or office admin selects Revoke and confirms, Then the override transitions to Revoked immediately and the requester is notified.
- Given revocation occurs, When propagation completes, Then scheduling and check‑in UIs reflect Revoked status and enforce denial within 5 seconds.
- Given a non‑authorized user attempts to revoke, When they submit the action via UI or API, Then the system returns HTTP 403 and no state change occurs.
- Given a revocation is recorded, When saved, Then the audit log captures revoker ID and role, timestamp, comment (if provided), previous state, and new state.
Audit Logging and Inline Status Surfacing
- Given any request lifecycle event (create, route, comment, attach, approve, reject, activate, consume, expire, revoke), When it occurs, Then an immutable audit record is written with: request ID, listing ID, actor ID and role, action type, timestamp (UTC), previous state, new state, scope, reason, approver role, validity window, and attachment metadata.
- Given an auditor filters logs by listing, date range, requester, or outcome, When results are requested for up to 10,000 records, Then the response returns within 2 seconds.
- Given an export is requested for the current filter, When the user triggers CSV export, Then the file downloads within 10 seconds and contains all displayed fields.
- Given a scheduling or check‑in screen for an item affected by an override, When the screen loads, Then an inline badge shows status (Pending/Active/Expired/Revoked), scope, approver, and expiry; selecting it opens the request detail view.
Audit Trail & Compliance Reporting
"As a compliance lead, I want a comprehensive audit trail and reports on policy decisions and exceptions so that we can demonstrate controls and identify gaps."
Description

Create immutable, queryable logs for policy evaluations, enforcement outcomes, and override lifecycles (requested, approved, revoked, expired). Provide filters by office, listing, agent, event type, and date range, with CSV export and shareable reports. Include summary dashboards of verification completion rates, exception rates, and SLA adherence. Apply data minimization, retention settings, and access controls suitable for compliance reviews (e.g., SOC 2).

Acceptance Criteria
Immutable Audit Log for Policy Evaluations and Enforcement Outcomes
- Given a policy evaluation occurs on a showing request, When the evaluation is executed, Then an immutable audit entry is appended capturing: event_id, timestamp (UTC ISO 8601), actor_id (if any), office_id, listing_id, agent_id (if any), event_type, policy_profile_id, evaluation_result (pass|fail), required_checks, enforcement_outcome (allowed|blocked|warned), and correlation_id.
- Given an existing audit entry, When any user attempts to update or delete it via UI or API, Then the operation is rejected with HTTP 403 and the attempt is logged as a separate security event.
- Given a sequence of audit entries, When integrity is verified, Then each entry includes a cryptographic hash and previous_hash forming a tamper-evident chain; any break yields a failed integrity check flag.
- Given audit entries exist, When retrieved by ID, Then the payload matches the originally written values byte-for-byte and includes a checksum.
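The tamper-evident chain works by having each entry store both its own hash and the previous entry’s hash, so editing or reordering any earlier entry breaks every recomputation from that point on. A minimal Python sketch of the append-and-verify pair (SHA-256 and canonical JSON are assumptions; the spec only requires *a* cryptographic hash):

```python
import hashlib
import json

def append_event(chain, payload):
    """Append a tamper-evident audit entry linked by SHA-256 hashes."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis link
    body = json.dumps(payload, sort_keys=True)            # canonical form
    entry = {
        "payload": payload,
        "previous_hash": prev_hash,
        "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest(),
    }
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Recompute every link; any edit or reorder fails the integrity check."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps(entry["payload"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["previous_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

A failed `verify_chain` is exactly the “failed integrity check flag” the criterion calls for.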
Override Lifecycle Logging and Auditability
- Given an override is requested, When the user submits the request, Then an audit entry is created with override_id, requester_id, reason (1–500 chars), target_scope (listing|team|office), rule_id, requested_duration (start/end), and status=requested.
- Given an override is approved, When an authorized approver approves, Then a new entry is appended with override_id, approver_id, approved_until, approval_reason (1–500 chars), and status=approved.
- Given an active override, When it is revoked by an approver, Then a new entry is appended with override_id, revoker_id, revoked_at, revoke_reason, and status=revoked.
- Given an active override reaches approved_until, When the time passes, Then a new entry is appended with status=expired.
- Given the override lifecycle, When queried by override_id, Then all state transitions are returned in order with timestamps and actors.
Report Filtering by Office, Listing, Agent, Event Type, and Date Range
- Given audit data across multiple dimensions, When a user applies filters for office, listing, agent, event_type, and date_range (UTC, inclusive), Then only matching records are returned.
- Given filter controls, When multiple values are selected per dimension, Then results reflect the union across selected values and the intersection across dimensions.
- Given no records match, When filters are applied, Then a zero-results state and total_count=0 are returned.
- Given up to 500k matching records, When filters are applied, Then the first page loads within 2 seconds p95 and pagination returns stable ordering by timestamp desc.
- Given a user without access to an office, When they apply a filter for that office, Then no data is returned and the attempt is logged.
CSV Export and Shareable Report Links
- Given a filtered result set, When the user clicks Export CSV, Then a UTF-8 CSV is generated with a header row and columns: event_id, timestamp_utc, office_id, listing_id, agent_id, event_type, actor_id, policy_profile_id, evaluation_result, enforcement_outcome, override_id, status, reason, approver_id (nullable), and correlation_id.
- Given values contain commas or newlines, When exported, Then fields are RFC 4180 compliant with quotes and escaped characters; line endings are LF.
- Given a large result (>100k rows), When exported, Then the CSV is streamed in chunks and completes within 5 minutes p95 without truncation.
- Given a user generates a shareable report link, When created, Then the link is signed, view-only, scoped to the saved filters, expires within a configurable TTL (1–30 days), can be revoked, and enforces authentication or a one-time access code per policy.
- Given a shareable link is accessed, When viewed, Then access is logged with viewer identifier, timestamp, and IP, and only minimized fields are shown.
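RFC 4180 quoting (wrap fields containing commas, quotes, or newlines in double quotes, doubling any embedded quotes) is exactly what Python’s stdlib `csv` module produces, so the export criterion can be sketched without hand-rolled escaping. The column list comes from the criterion above; everything else is an illustrative assumption:

```python
import csv
import io

COLUMNS = ["event_id", "timestamp_utc", "office_id", "listing_id", "agent_id",
           "event_type", "actor_id", "policy_profile_id", "evaluation_result",
           "enforcement_outcome", "override_id", "status", "reason",
           "approver_id", "correlation_id"]

def export_csv(rows):
    """Serialize audit rows to RFC 4180-style CSV with LF line endings.

    rows: iterable of dicts; missing columns become empty fields.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS, lineterminator="\n",
                            extrasaction="ignore")
    writer.writeheader()
    for row in rows:
        writer.writerow(row)  # commas/quotes/newlines in values get quoted
    return buf.getvalue()
```

For the >100k-row streaming case the same writer would target a chunked response body rather than an in-memory buffer.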
Compliance Dashboard Metrics (Verification, Exceptions, SLA)
- Given a selected date range and scope, When the dashboard loads, Then it displays: verification_completion_rate = verified_events / total_events, exception_rate = events_with_overrides_or_failures / total_events, and SLA_adherence = overrides_decided_within_SLA / total_overrides, each as percentages with numerator/denominator counts.
- Given dashboard tiles, When the user switches granularity (daily, weekly, monthly), Then values recompute correctly and match raw log queries within ±0.5%.
- Given a defined SLA per office (e.g., 24h for override decisions), When computing SLA_adherence, Then decision time is measured from requested_at to approved_or_revoked_at and excludes pending requests.
- Given new data arrives, When viewed, Then dashboards reflect data updated within the last 15 minutes and show the data as-of timestamp.
- Given drill-down is invoked from a tile, When clicked, Then the user is taken to the filtered log query that produced the metric.
Data Minimization, Retention, and Access Controls
- Given audit and report views, When rendered or exported, Then PII (e.g., full names, emails, phone numbers) is excluded or masked; only IDs and policy-related fields are shown unless the user has the PII_View permission.
- Given an office retention policy is set (90–730 days, default 365), When data exceeds its retention window, Then logs are purged or archived automatically, and a system audit entry records the action with counts and ranges affected.
- Given retention is applied, When running a report, Then no records older than the retention window are returned or exported.
- Given role-based access controls, When a user without Compliance_Report role attempts to access audit logs or dashboards, Then access is denied with HTTP 403 and the attempt is logged.
- Given cross-office tenancy, When a user is scoped to specific offices, Then all queries and exports are limited to those offices, and cross-tenant data leakage tests return zero records.
- Given any access to audit logs or exports, When performed, Then the access event is itself logged with user_id, action, timestamp, scope, and result.
Policy Admin Console
"As an office admin, I want an intuitive console to configure and preview policies so that I can deploy changes confidently without developer assistance."
Description

Deliver an admin UI to create, edit, preview, and publish Policy Profiles. Include templates, cloning, diff and version history, impact analysis (who is affected), and conflict detection across scopes. Provide a test mode with sample listings/events to validate behavior before publish. Enforce role-based access control and capture change summaries for every publish action.

Acceptance Criteria
Create Policy Profile via Template or Clone
- Given a user with PolicyAdmin role is in the Admin Console, When they select "New Profile" and choose a template, Then a draft profile is created with all template fields pre-populated.
- Given a user with PolicyAdmin role is in the Admin Console, When they select "Clone" on an existing profile, Then a new draft profile is created with identical rules and a system-suggested unique name suffix.
- Given a draft profile, When the user modifies fields and clicks Save Draft, Then the system persists changes and updates a visible last-saved timestamp.
- Given a draft profile with a name that duplicates an existing active or draft profile, When the user attempts to save, Then the system blocks save with a validation message requiring a unique name.
Edit, Preview, and Test Mode with Sample Listings/Events
- Given a draft or published profile, When the user opens Preview, Then the UI displays the effective verification requirements for a selected event type (open house or private tour) and scope.
- Given Test Mode is enabled, When the user selects a sample listing and event type and applies the profile, Then the system shows pass/fail outcomes for verification checks without modifying production data or sending notifications.
- Given Test Mode results are displayed, When the user changes a rule, Then the preview recalculates outcomes within 1 second for up to 50 sample cases.
Publish with Change Summary, Diff, and Version History
- Given a draft profile with changes, When the user clicks Publish, Then the system requires a non-empty change summary of 10–500 characters.
- Given the user confirms Publish, Then the system creates a new immutable version with an incremented version number and records publisher, timestamp, and change summary.
- Given a versioned profile, When the user opens Version History, Then they can view previous versions read-only and a side-by-side diff highlighting added, removed, and modified rules.
- Given a prior version is selected, When the user clicks Revert, Then a new version is created from that prior state (no deletion), and the action is logged.
Impact Analysis Before Publish
- Given a draft profile scoped to office, team, or listing, When the user runs Impact Analysis, Then the system returns counts of affected listings, teams, and upcoming events within the scope and their IDs.
- Given the impact result set is 5,000 entities or fewer, When analysis runs, Then results return within 5 seconds.
- Given Impact Analysis results are displayed, When the user exports, Then a CSV download includes entity IDs, names, scopes, and first applicable event date.
Conflict Detection Across Scopes
- Given existing published profiles at office, team, and listing scopes, When a draft profile introduces contradictory verification requirements, Then the system flags conflicts at save time and in the publish checklist.
- Given a flagged conflict, When the user clicks View Conflict, Then the UI shows the specific rules, precedence resolution, and impacted entities count.
- Given a severe conflict violates precedence rules, When the user attempts to publish, Then the system blocks publish until the conflict is resolved or the user applies an allowed override with justification and approver.
Role-Based Access Control and Audit Logging
- Given a user without PolicyAdmin or OfficeOwner role, When they access the Admin Console, Then they see read-only views and cannot create, edit, test, or publish.
- Given a user without publish permission, When they call the publish API, Then the system returns HTTP 403 and logs the attempt with user ID, IP, and timestamp.
- Given any write action (create, edit, publish, revert), When it completes, Then an audit record is stored with user, action, target, diff hash, and timestamp.
Draft Validation and Save/Publish Guardrails
- Given a draft profile with missing required fields (name, scope, at least one rule), When the user attempts to save or publish, Then the system displays inline validation errors and prevents the action.
- Given a draft with invalid rule syntax or incompatible rule combinations, When validation runs, Then specific error messages identify the offending fields and suggested fixes.
- Given all validations pass, When the user publishes, Then the status of the profile transitions to Active within 2 seconds and becomes selectable in scheduling workflows.
Notifications & Escalations
"As a team lead, I want timely, configurable alerts about policy exceptions and verification blockers so that I can keep showings on track and compliant."
Description

Implement configurable notifications for override requests, approvals, expirations, policy violations, and pending verifications. Support channels including in-app, email, SMS, and Slack with batched digests and escalation rules (e.g., if no approver response within 2 hours, notify backup). Allow per-office schedules, snooze, and mute controls, with audit logging of delivery and acknowledgement.

Acceptance Criteria
New Override Request: Approver Notification
- Given an override request is submitted for a listing under an office Policy Profile with approver A and channels in-app and email enabled, When the request is saved, Then approver A receives an in-app notification within 5 seconds and an email within 60 seconds containing requester name, listing ID, reason, requested duration, and Approve/Reject links.
- And the request appears in the Approvals queue for approver A within 5 seconds.
- And an audit log entry is recorded for each channel with timestamp, channel, recipient, template ID, and delivery status=queued/sent.
- And duplicate notifications for the same request and channel are not sent unless the request state changes.
Escalation to Backup Approver on No Response
- Given an override request awaiting decision and an escalation rule of 2 hours with backup approver B configured, When no decision is recorded by approver A within 2 hours of first notification, Then approver B is notified via their configured channels within 60 seconds and the request status is marked escalated.
- And the SLA timer pauses during the office's quiet hours and resumes at the next allowed window.
- And if approver A decides after escalation, Then all pending escalation notifications are auto-closed and audit entries reflect resolution actor and time.
- And all escalation notifications and acknowledgements are audit logged with correlation IDs linking to the original request.
Temporary Override Expiration Reminders and Auto-Expire
- Given a temporary override is approved with an expiration at time T, When the time is T-24h and again at T, Then the approver and listing owner each receive reminders via their configured channels with an option to extend or revoke.
- And if the override is extended before T, Then previously scheduled expiration notifications are canceled and new reminders are scheduled accordingly.
- And if the override reaches T without extension, Then its status updates to expired, policy enforcement resumes, and an audit entry records the transition and delivery outcomes.
- And reminder delivery respects office schedules, queuing during quiet hours unless marked as critical by policy.
Policy Violation Notification with Grace Window
- Given a showing is scheduled that violates a Policy Profile (e.g., required verification missing) and no active override exists, When the event is created, Then the listing agent and office compliance Slack channel are notified within 30 seconds (Slack) and 5 seconds (in-app) with violation details and remediation steps.
- And if the required verification completes within a configurable 5-minute grace window, Then the violation is auto-resolved, recipients receive a resolution update, and the violation is marked resolved in the audit log.
- And if Slack delivery fails, Then email is attempted within 2 minutes and the failure and retry are captured in the audit log.
Batched Digest for Pending Verifications and Violations
- Given an office has a digest schedule set to 09:00 and 16:00 local time and recipients defined (agents and team leads), When a digest window triggers, Then a single email and a single Slack message per recipient are sent summarizing pending verifications and unresolved violations with item counts and top 10 items by age.
- And items included in a digest are not re-sent in the same day unless their status changes.
- And digests are rendered in the office's time zone and respect quiet hours by deferring to the next allowed window.
- And each digest send is audit logged with recipient, channel, counts per category, and delivery result.
Per-Office Quiet Hours, Scheduling, and Channel Fallbacks
- Given an office schedule defines quiet hours from 20:00–08:00 local time and fallback rules (Slack->Email->SMS), When a notification is generated at 21:30, Then it is queued and delivered at 08:00 unless flagged as critical by policy.
- And when the primary channel fails (non-2xx Slack webhook or email bounce), Then the system retries once and falls back to the next channel within 2 minutes, preserving a single correlation ID across attempts.
- And queued notifications show a pending state in-app with expected delivery time.
- And all retries, fallbacks, and outcomes are recorded in the audit log.
User Snooze, Mute, and Acknowledgement Controls
- Given a user sets a snooze for violation notifications for 2 hours, When new violations occur during the snooze window, Then the user receives no per-item notifications and a single summary is delivered when the snooze ends.
- And critical escalations bypass snooze only if the office policy 'allow escalation during snooze' is enabled.
- And when a user mutes a category, Then no future notifications of that category are delivered to that user until unmuted by the user or an admin, and a confirmation is required at mute time.
- And when a user views and marks a notification as read, Then the acknowledgement is recorded with user, timestamp, notification ID, and channel in the audit log.
API & Webhooks for Policy Governance
"As a brokerage IT lead, I want APIs and webhooks for policy governance so that our internal systems and CRM can automate setup and stay in sync with TourEcho."
Description

Expose secure REST/GraphQL endpoints to manage Policy Profiles, retrieve active policy evaluations, submit override requests, and access audit events. Provide webhooks for key lifecycle events (policy published, override approved/expired, violation detected). Integrate with MLS/CRM imports to auto-assign default profiles based on listing metadata. Include OAuth scopes, rate limiting, idempotency, and SDK samples.

Acceptance Criteria
OAuth Scopes & Transport Security Enforcement
- Given all API endpoints are served over TLS 1.2+, When a client sends a request over non‑TLS, Then the request is rejected and a 400 error with code "insecure_transport" is returned.
- Given a request without an Authorization: Bearer token, When it calls any protected endpoint, Then the API returns 401 with WWW-Authenticate: Bearer.
- Given a token with scope policy.read only, When it attempts to call a write endpoint (e.g., POST /v1/policies), Then the API returns 403 with error code "insufficient_scope" and the response lists the required scope policy.write.
- Given a token with scope evaluation.read, When it calls GET /v1/evaluations, Then the API returns 200 with the evaluation payload.
- Given an expired or revoked token, When any endpoint is called, Then the API returns 401 with error "invalid_token".
- Given a valid token with scopes policy.read, policy.write, evaluation.read, override.write, audit.read, webhook.manage, When the corresponding endpoints are called, Then only operations covered by the granted scopes succeed and others fail with 403.
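The scope checks above boil down to a route-to-required-scope table plus a token inspection: invalid or expired tokens get 401, a valid token missing the required scope gets 403. A minimal sketch — the route table, token shape, and `authorize` helper are all illustrative assumptions, not TourEcho’s middleware:

```python
# Route table: (HTTP method, route template) -> required OAuth scope.
SCOPE_MAP = {
    ("POST", "/v1/policies"): "policy.write",
    ("PATCH", "/v1/policies/{id}"): "policy.write",
    ("GET", "/v1/policies"): "policy.read",
    ("GET", "/v1/evaluations"): "evaluation.read",
    ("POST", "/v1/overrides"): "override.write",
    ("GET", "/v1/audit-events"): "audit.read",
}

def authorize(method, route, token):
    """Return (status, error) for a request; token is the decoded bearer token."""
    if token is None or token.get("expired") or token.get("revoked"):
        return 401, "invalid_token"            # missing/expired/revoked -> 401
    required = SCOPE_MAP.get((method, route))
    if required is None:
        return 404, "not_found"
    if required not in token.get("scopes", []):
        # 403 names the missing scope, per the acceptance criterion.
        return 403, f"insufficient_scope: requires {required}"
    return 200, None
```

In a real service this runs as middleware before the handler; the 200 here just means "authorization passed".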
Policy Profile & Evaluation APIs
- Given POST /v1/policies with a valid body and scope policy.write, When the request is made, Then the API returns 201 Created with fields id, version=1, etag, and the persisted policy matches the request.
- Given a GraphQL mutation createPolicy with valid input and scope policy.write, When executed, Then the response contains the new policy id and version and no GraphQL errors.
- Given PATCH /v1/policies/{id} with If-Match set to the latest ETag, When the payload is valid, Then the API returns 200, increments version, updates updatedAt, and returns a new ETag; When the If-Match is stale, Then the API returns 412 Precondition Failed.
- Given DELETE /v1/policies/{id} with scope policy.write, When called, Then the API returns 204 and the policy is marked deleted; a corresponding audit event policy.deleted is recorded.
- Given GET /v1/policies?limit=50&cursor=… with scope policy.read, When called, Then the API returns 200 with up to 50 items and a nextCursor when more results exist.
- Given GET /v1/audit-events?resourceId={policyId}&eventType=policy.*&from=…&to=… with scope audit.read, When called, Then the API returns 200 with events ordered by occurredAt desc and supports cursor pagination.
- Given GET /v1/evaluations?listingId={id} with scope evaluation.read, When called, Then the API returns 200 with effectivePolicyId, sourcePrecedence, and requiredVerifications; When the listing does not exist, Then 404 is returned; P95 latency for this endpoint is ≤ 300 ms in the sandbox region.
Idempotent Writes & Rate Limiting
- Given a POST /v1/policies with header Idempotency-Key: K and body B, When the first request succeeds, Then the API returns 201 with id=P and stores the idempotency fingerprint; When a subsequent request arrives within 24h with the same K and identical body B, Then the API returns 200 with Idempotent-Replay: true and the same id=P.
- Given a POST /v1/policies with the same Idempotency-Key: K but a different body B2, When called, Then the API returns 422 with error code "idempotency_key_conflict".
- Given any write endpoint (POST, PATCH, DELETE), When requests exceed 100 per minute per OAuth client, Then the API returns 429 Too Many Requests and includes X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, and Retry-After headers; When under the limit, Then responses include accurate rate-limit headers.
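The idempotency contract — 201 on first use, 200 replay for the same key and body within 24 h, 422 on a key reused with a different body — can be sketched by fingerprinting the body and caching the first response under the key. The storage shape and helper names are illustrative assumptions:

```python
import hashlib
import json
import time

TTL_S = 24 * 3600  # keys replay within 24 hours, per the criteria

def handle_create(store, key, body, create, now=None):
    """Idempotent POST: same key + same body replays; same key + new body -> 422.

    store:  dict of idempotency key -> {fingerprint, response, at}.
    create: callable performing the actual write and returning the response body.
    """
    now = now or time.time()
    fingerprint = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    saved = store.get(key)
    if saved and now - saved["at"] < TTL_S:
        if saved["fingerprint"] != fingerprint:
            return 422, {"error": "idempotency_key_conflict"}
        # Replay: same resource id, flagged as an idempotent replay.
        return 200, dict(saved["response"], idempotent_replay=True)
    response = create(body)  # first request, or key past its TTL
    store[key] = {"fingerprint": fingerprint, "response": response, "at": now}
    return 201, response
```

Hashing a canonical (sorted-keys) JSON serialization is what makes "identical body" well-defined regardless of key order in the request.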
Override Request Lifecycle & Audit Trail
- Given POST /v1/overrides with target={listingId|teamId|eventType}, reason, requestedBy, and duration or expiresAt, and scope override.write, When valid, Then the API returns 201 with status=pending and an audit event override.requested is recorded including requester, reason, and window.
- Given POST /v1/overrides/{id}/approve with approverId and scope override.write, When called on a pending override, Then the API returns 200 with status=approved, effectiveFrom set, and an audit event override.approved is recorded including approver and rationale.
- Given an approved override with expiresAt in the past, When the scheduler processes expirations, Then the override transitions to status=expired, an audit event override.expired is recorded, and subsequent evaluations ignore the override.
- Given multiple overrides affecting the same subject, When evaluations are computed, Then conflicts are resolved deterministically using precedence listing > team > office and latest effectiveFrom within the same scope.
- Given GET /v1/overrides?target={…}&status=active with scope override.write, When called, Then the API returns 200 with the currently active and pending overrides for that target.
- Given an override request missing reason or exceeding a maximum duration of 30 days, When submitted, Then the API returns 422 with field-level validation errors.
Webhooks for Key Lifecycle Events
- Given a registered webhook destination with scope webhook.manage and subscribed events [policy.published, override.approved, override.expired, violation.detected], When a matching event occurs, Then TourEcho sends an HTTPS POST within 10s containing {eventId, eventType, occurredAt, resourceRef, payload}.
- Given a webhook delivery, When the receiver responds with any 2xx within 10s, Then the event is marked delivered; When the receiver responds non-2xx or times out, Then TourEcho retries with exponential backoff for up to 12 attempts over 24h before dead-lettering.
- Given a shared secret S for the webhook, When TourEcho delivers an event, Then headers X-TourEcho-Timestamp and X-TourEcho-Signature (HMAC-SHA256 over timestamp||payload using S) are included; When the receiver verifies within a 5‑minute tolerance, Then the signature matches.
- Given duplicate deliveries due to retries, When the receiver inspects the event, Then the same eventId and X-Webhook-Idempotency-Key are present to enable safe deduplication; delivery order is best-effort per resource and a sequence number is included.
- Given a policy is published, an override is approved or expired, or a policy violation is detected by evaluation, When these occur, Then the corresponding webhook (policy.published, override.approved, override.expired, violation.detected) is emitted with accurate resource references.
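A receiver verifying `X-TourEcho-Signature` recomputes HMAC-SHA256 over `timestamp||payload` with the shared secret, compares in constant time, and rejects deliveries whose timestamp falls outside the 5-minute tolerance. A minimal sketch of both sides (the exact concatenation format is an assumption; the spec only says timestamp||payload):

```python
import hashlib
import hmac
import time

TOLERANCE_S = 300  # reject timestamps more than 5 minutes off

def sign(secret, timestamp, payload):
    """Sender side: HMAC-SHA256 over timestamp || payload, hex-encoded."""
    msg = str(timestamp).encode() + payload
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify(secret, timestamp, payload, signature, now=None):
    """Receiver side: freshness window plus constant-time comparison."""
    now = now or time.time()
    if abs(now - int(timestamp)) > TOLERANCE_S:
        return False  # stale or future-dated delivery
    expected = sign(secret, timestamp, payload)
    return hmac.compare_digest(expected, signature)
```

`hmac.compare_digest` avoids timing side channels; the timestamp check is what caps the replay window for captured deliveries.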
MLS/CRM Auto-Assign Default Profiles
Given an MLS/CRM import creates or updates a listing with metadata (officeId, teamId, propertyType, priceBand) When processed Then a default Policy Profile is assigned within 60 seconds based on mapping rules
Given multiple applicable mappings When assignment occurs Then precedence is applied as listing-specific > team > event-type > office default and the chosen mapping id is recorded
Given no applicable mapping exists When processed Then the office default policy is assigned
Given a policy is auto-assigned When complete Then an audit event policy.published is recorded with source="auto-assign" and mappingRef; GET /v1/audit-events can retrieve it
Given GET /v1/evaluations?listingId={id} after assignment When called Then effectivePolicyId reflects the assigned policy within 60 seconds of import
Given a mapping application fails (e.g., invalid profile id) When processed Then an audit event policy.assignment_failed is recorded with error details and the error is exposed via /v1/audit-events
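The precedence rule can be sketched as a simple ordered scan; the record shape (dicts with hypothetical `level` and `profile_id` fields) is illustrative, not the actual mapping schema:

```python
# Precedence for auto-assignment, highest first, per the criteria above.
PRECEDENCE = ("listing", "team", "event_type", "office")


def resolve_profile(mappings):
    """Return the applicable mapping under listing-specific > team >
    event-type > office-default precedence, or None if nothing applies
    (the caller would then fall back to the office default policy)."""
    for level in PRECEDENCE:
        for m in mappings:
            if m["level"] == level:
                return m
    return None
```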
SDK Samples & Quickstarts
Given public SDKs for Node.js, Python, and Java When a developer follows the README Quickstart Then they can authenticate via OAuth, create a policy, retrieve an evaluation, submit and approve an override, and receive/verify a webhook in under 10 minutes against the sandbox
Given each SDK repository When CI runs Then unit tests pass, the code lints/compiles, and a sample runner script exits with code 0 after exercising idempotent writes and 429 retry handling
Given each SDK release When published Then it uses semantic versioning and is available via npm, PyPI, and Maven Central respectively with signed artifacts where applicable
Given the webhook sample in each SDK When executed Then it verifies X-TourEcho-Signature using HMAC-SHA256 and rejects payloads with timestamps older than 5 minutes

Trust Analytics

Dashboards track verified vs. unverified rates, MLS coverage, and the effect of verification on response quality and conversion. Broker‑owners and ops leads spot gaps, tighten policies, and demonstrate ROI to sellers with clear, portfolio‑level metrics.

Requirements

Verification Data Pipeline
"As an ops lead, I want all showings and feedback consistently tagged with verification status and MLS context so that trust metrics are accurate and comparable across our portfolio."
Description

Build an end-to-end data pipeline that ingests showing events, QR feedback, and MLS records; normalizes and deduplicates them; and tags each interaction with verification status, confidence, and MLS context. Provide real-time streaming with backfill capabilities, robust id-resolution, and fraud heuristics to assign verification confidence. Emit a curated analytics table (e.g., trust_facts) with consistent schema for dashboards and BI. Include observability (freshness, completeness), retry/error handling, PII minimization, and retention controls to ensure accuracy and compliance while forming the foundation for Trust Analytics.

Acceptance Criteria
Real-Time Streaming Ingestion & Tagging
- Given streaming inputs from showing events and QR feedback, when events arrive under normal load, then p95 end-to-end ingestion-to-tagging latency is ≤ 60 seconds.
- Given network/transient failures, when a batch fails, then the system retries at least 3 times with exponential backoff and moves failed records to a dead-letter queue within 1 minute, preserving payload and error reason.
- Given duplicate event delivery from a source, when the same source_event_id is seen within 24 hours, then exactly one curated record is produced and duplicates are logged with dedup_reason.
- Given an ingested event, when tagging runs, then verification_status ∈ {"verified","unverified","suspected"} and verification_confidence ∈ [0,1] are populated, non-null, and consistent across downstream tables.
- Given a full business day, when measuring ingestion completeness, then ≥ 99.5% of source events with valid schema are successfully processed to the curated layer.
Cross-Source ID Resolution
- Given events for the same interaction across sources, when id-resolution runs, then a stable interaction_id (UUIDv4) is assigned consistently across all related records.
- Given deterministic keys (listing_id + showing_start_ts within ±15 minutes + agent_license or agent_phone_hash), when all are present, then a deterministic link is created with match_decision_type = "deterministic".
- Given insufficient deterministic signals, when probabilistic matching is applied, then records are linked only if match_score ≥ 0.92, else match_decision_type = "none".
- Given any link decision, when the record is emitted, then match_score, match_inputs, and match_decision_type are populated for auditability.
- Given weekly monitoring, when collision analysis runs, then erroneous merges later reversed are ≤ 0.1% of linked interactions, with automated remediation producing corrected records within 24 hours.
Normalization and Deduplication
- Given curated outputs, when schema validation runs, then 100% of records contain required fields: interaction_id, listing_id, source_type, event_ts_utc, agent_id_hash, visitor_role, verification_status, verification_confidence, mls_id, office_id, fraud_flags.
- Given field normalization rules, when records are processed, then timestamps are UTC ISO8601, emails/phones are SHA-256 salted hashes, MLS IDs are uppercased/trimmed, and enums validate against registry with zero invalid values.
- Given the uniqueness constraint, when records share (source_type, source_event_id) within 30 days, then only one record persists in curated with dedup_reason captured for suppressed copies.
- Given validation failures, when required fields are missing or invalid, then the record is quarantined within 2 minutes with a machine-readable reason; quarantine backlog remains < 0.1% of daily volume.
- Given PII minimization policy, when curated and trust_facts are produced, then no plaintext PII (email, phone) is present; raw staging PII TTL ≤ 24 hours with daily purge success rate 100% and audit log retained 90 days.
MLS Context Enrichment & Coverage Tagging
- Given listing_id is present, when MLS enrichment runs, then mls_id, office_id, and agent_of_record_id are populated for ≥ 99% of listings in covered MLSes.
- Given enrichment outputs, when coverage is assessed, then each record has mls_coverage ∈ {"covered","unmapped","stale"}, where stale = MLS mapping age > 24 hours.
- Given MLS API unavailability, when retries occur, then enrichment retries for up to 30 minutes; after expiry, records are marked unmapped with enrichment_error populated and are eligible for async re-enrichment.
- Given daily reporting, when coverage metrics are computed, then mls_coverage_rate (covered/active) is produced and stored for dashboard consumption by 06:00 UTC.
Fraud Heuristics & Verification Confidence
- Given rule-based heuristics, when verification_confidence is computed, then the score is in [0,1], fraud_model_version is populated, and rules/weights are versioned and retrievable.
- Given high-severity triggers (e.g., device_fingerprint reused across ≥ 5 listings in 24h; geofence distance > 100 km at check-in; MLS agent mismatch), when any trigger fires, then verification_status = "suspected" and corresponding fraud_flags[] are populated.
- Given MLS alignment and strong signals, when verification_confidence ≥ 0.85 and agent_of_record matches showing agent or trusted delegate, then verification_status = "verified"; else status remains "unverified" or "suspected" according to rules.
- Given a labeled offline evaluation set, when weekly evaluation runs, then precision for the verified class ≥ 0.90 and metrics (precision/recall/thresholds) are logged with fraud_model_version.
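The status-assignment logic implied by these rules can be sketched as a simple decision; real scoring would weigh individual flags into the confidence itself, so this is a simplified illustration:

```python
def assign_status(confidence, fraud_flags, mls_agent_match):
    """Map heuristic outputs to verification_status per the rules above.

    confidence: float in [0,1]; fraud_flags: list of fired trigger names;
    mls_agent_match: agent_of_record matches showing agent or trusted delegate.
    """
    if fraud_flags:  # any high-severity trigger fired
        return "suspected"
    if confidence >= 0.85 and mls_agent_match:
        return "verified"
    return "unverified"
```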
Backfill and Late-Arriving Data Reconciliation
- Given a requested UTC date range, when backfill runs, then it is idempotent: rerunning produces identical curated record counts and checksum per partition.
- Given late-arriving events ≤ 7 days old, when they are ingested, then they are merged with existing interactions, re-scored, and re-emitted with updated_at set; downstream partitions reflect changes within 30 minutes.
- Given backfill load, when processing ≥ 200k records/hour, then streaming p95 latency increases by no more than +10% during the run.
- Given partitioned downstream sinks, when a partition write fails, then the operation is rolled back or retried automatically until success, preventing partial publishes.
trust_facts Table Publication & Contracts
- Given hourly publishing, when the top-of-hour job completes, then trust_facts latest partition freshness p95 ≤ 60 minutes.
- Given the schema contract, when changes are required, then only backward-compatible additions are made in-place; breaking changes result in a new versioned table (e.g., trust_facts_v2) with dual-publish during migration.
- Given lineage requirements, when rows are emitted, then source_event_id, source_type, match_decision_type, fraud_model_version, processed_at, and updated_at are populated.
- Given uniqueness constraints, when data quality checks run daily, then there is exactly one row per (interaction_id, event_type) with 0 duplicates and 0 null interaction_id.
- Given retention policy, when housekeeping runs daily, then raw/staging TTL ≤ 24 hours, curated interaction-level data TTL = 180 days, and trust_facts aggregated facts TTL = 2 years; purge success rate = 100% with audited counts.
Metric Definitions & Computation Engine
"As a broker-owner, I want standardized, trustworthy trust metrics with flexible filters and cohorts so that I can measure verification impact and benchmark teams."
Description

Create a centralized metric layer that defines and computes verified vs. unverified response rates, MLS coverage %, verification adoption over time, response quality scores (completeness, specificity, sentiment signal-to-noise), and conversion outcomes (second showing rate, offer rate, days-on-market delta). Support cohorting by brokerage, office, agent, listing attributes, price band, MLS, and time windows. Provide statistical controls to estimate verification impact via matched cohorts and pre/post analyses, with versioned metric definitions, backfills on definition changes, and exposure via a semantic layer and API.

Acceptance Criteria
Verified vs Unverified Response Rates Computation
Given a fixture with 200 invitations in [2025-01-01T00:00:00Z, 2025-01-31T00:00:00Z), where 120 are verifiable and 80 are non-verifiable, and 3 verifiable and 2 non-verifiable invitations opted out prior to the window And 48 verified responses and 20 unverified responses are recorded in-window And time windows are evaluated as [start, end) in UTC When the metric engine computes response rates for this window Then verified_response_rate = 41.03% (48/117) rounded half-up to 2 decimals and stored as a numeric with scale=4 And unverified_response_rate = 25.64% (20/78) rounded half-up to 2 decimals and stored as a numeric with scale=4 And overall_response_rate = 34.87% (68/195) rounded half-up to 2 decimals And verified_share_of_responses = 70.59% (48/68) rounded half-up to 2 decimals And invitations with unknown verification_status are excluded from both numerators and denominators and surfaced as unknown_response_count And soft-deleted or suppressed invitations and responses are excluded from all calculations And recomputing the same window is idempotent and yields identical values
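The rounding rule in this criterion matters: 48/117 is 41.0256…%, which must round half-up to 41.03. A sketch using Python's decimal module (float-based round() uses banker's rounding and can disagree on exact halves):

```python
from decimal import Decimal, ROUND_HALF_UP


def rate_pct(numerator, denominator):
    """Response rate as a percentage, rounded half-up to 2 decimals
    per the criterion above."""
    pct = Decimal(numerator) / Decimal(denominator) * 100
    return pct.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

Against the fixture: rate_pct(48, 117) gives 41.03, rate_pct(20, 78) gives 25.64, rate_pct(68, 195) gives 34.87, and rate_pct(48, 68) gives 70.59, matching the expected values.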
MLS Coverage Percentage Calculation
Given an active-portfolio fixture of 100 listings in scope (listing_status in ['Active','Pending']) for [2025-02-01T00:00:00Z, 2025-02-28T00:00:00Z) And 76 listings have a valid, matched MLS ID present in the MLS reference table with status='active' And listings with listing_status not in scope (e.g., 'Coming Soon','Withdrawn','Expired') are excluded from both numerator and denominator When the metric engine computes mls_coverage_pct for the scope Then mls_coverage_pct = 76.00% (76/100) rounded half-up to 2 decimals And the metric is exposed with metadata including numerator_count=76 and denominator_count=100 And listings with null or invalid MLS IDs are included in the denominator only
Response Quality Scores Computation
Given three feedback records within [2025-03-01T00:00:00Z, 2025-03-31T00:00:00Z): And F1 has required_answers=5/5, room_comments=[12 tokens, 15 tokens], noun_phrases_present per comment >=1, sentiment_polarity=+0.60, neutral_prob=0.30 And F2 has required_answers=4/5, room_comments=[4 tokens], noun_phrases_present per comment=0, sentiment_polarity=+0.10, neutral_prob=0.70 And F3 has required_answers=5/5, room_comments=[9 tokens, 10 tokens, 8 tokens], noun_phrases_present in 2 of 3, sentiment_polarity=-0.50, neutral_prob=0.20 And completeness_score = required_answered/required_total rounded to 2 decimals And specificity_score = specific_room_comments/total_room_comments where a comment is specific if length>=8 tokens and contains >=1 noun phrase or numeral, rounded to 2 decimals And sentiment_snr = |polarity| / (|polarity| + neutral_prob) rounded to 2 decimals When the metric engine computes per-feedback and aggregate scores Then F1 completeness=1.00, specificity=1.00, sentiment_snr=0.67 And F2 completeness=0.80, specificity=0.00, sentiment_snr=0.13 And F3 completeness=1.00, specificity=0.67, sentiment_snr=0.71 And portfolio_averages over F1..F3 equal completeness=0.93, specificity=0.56, sentiment_snr=0.50 (±0.01 tolerance per value) And non-text submissions or empty comments yield specificity_score=0.00 and are included in denominators
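The three formulas can be sketched per feedback record; encoding each room comment as a (token_count, has_noun_phrase_or_numeral) pair is an illustrative simplification of the real tokenizer/NLP output:

```python
from decimal import Decimal, ROUND_HALF_UP


def _r2(x):
    return x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)


def quality_scores(answered, required, comments, polarity, neutral_prob):
    """Compute completeness, specificity, and sentiment SNR per the
    definitions above. comments: list of (token_count, has_np) pairs."""
    completeness = _r2(Decimal(answered) / Decimal(required))
    if comments:
        # Specific = length >= 8 tokens and >= 1 noun phrase or numeral.
        specific = sum(1 for tokens, has_np in comments if tokens >= 8 and has_np)
        specificity = _r2(Decimal(specific) / Decimal(len(comments)))
    else:
        specificity = Decimal("0.00")  # empty or non-text submissions
    p = abs(Decimal(str(polarity)))
    snr = _r2(p / (p + Decimal(str(neutral_prob))))
    return completeness, specificity, snr
```

Run on the F1–F3 fixtures, this reproduces the expected (1.00, 1.00, 0.67), (0.80, 0.00, 0.13), and (1.00, 0.67, 0.71).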
Conversion Outcomes Metrics Calculation
Given a fixture with 10 listings having ≥1 first showing in [2025-02-01T00:00:00Z, 2025-02-28T00:00:00Z) And second_shown is defined as a distinct viewer scheduling a second showing on the same listing within 14 days of their first showing And offer_rate is defined as listings with ≥1 offer within 30 days of first showing divided by listings with ≥1 first showing And days_on_market_delta is defined as median(listing_DOM - market_baseline_median_DOM_by_price_band_and_MLS) for listings in scope And among the 10 listings, 4 have ≥1 second showing within 14 days and 3 have ≥1 offer within 30 days And listing_DOM values are [32,28,45,21,39,30,27,34,31,26] and market_baseline_median_DOM_by_price_band_and_MLS = 34 When the metric engine computes conversion metrics Then second_showing_rate = 40.00% (4/10) rounded half-up to 2 decimals And offer_rate = 30.00% (3/10) rounded half-up to 2 decimals And days_on_market_delta_median = -3.5 days rounded to 1 decimal
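The DOM-delta median is worth a worked check: the per-listing deltas against the baseline of 34 are [-2, -6, 11, -13, 5, -4, -7, 0, -3, -8], whose median is -3.5. A sketch of the three metrics (function and parameter names are illustrative):

```python
from statistics import median


def conversion_metrics(n_listings, n_second, n_offer, listing_doms, baseline_dom):
    """Compute the three conversion metrics as defined above:
    second-showing rate, offer rate, and median DOM delta vs. baseline."""
    second_showing_rate = round(100 * n_second / n_listings, 2)
    offer_rate = round(100 * n_offer / n_listings, 2)
    dom_delta_median = round(median(d - baseline_dom for d in listing_doms), 1)
    return second_showing_rate, offer_rate, dom_delta_median
```

On the fixture this yields 40.0%, 30.0%, and -3.5 days, matching the expected values.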
Cohorting by Dimensions and Time Windows
Given selectable cohort dimensions brokerage_id, office_id, agent_id, listing_attributes (type,beds,baths), price_band, MLS_id, and time windows And filters brokerage_id=BRK123, office_id in [OFF1, OFF2], MLS_id=ARMLS, price_band=500k-750k, window=[2025-01-01T00:00:00Z, 2025-01-31T00:00:00Z) When computing verified_response_rate grouped by agent_id via the semantic layer Then only rows matching all filters are returned and include an 'Unknown' bucket where a grouping key is null And the sum of group denominators equals the filtered denominator total and the sum of numerators equals the filtered numerator total And changing the window to [2025-01-01T00:00:00Z, 2025-01-15T00:00:00Z) reduces counts and rates recompute accordingly using [start,end) semantics in UTC And multi-select filters apply with AND semantics across different dimensions and OR semantics within a dimension
Verification Impact Estimation (Matched Cohorts and Pre/Post)
Given treated listings that adopted verification and candidate control listings in the same MLS, price_band, and time window And a propensity score or nearest-neighbor matcher balances covariates (MLS, price_band, beds, baths, listing_type, agent_tenure) to standardized mean difference < 0.10 for each covariate And pre-period is 28 days before adoption and post-period is 28 days after adoption with a 7-day washout And minimum sample size is ≥50 treated and ≥50 matched controls When running the impact analysis for second_showing_rate and offer_rate Then the engine outputs uplift estimates with 95% confidence intervals, p-values, and balance diagnostics And for the provided fixture, uplift_second_showing_rate = +7.5 pp [95% CI +3.0, +12.0], p<0.01 and uplift_offer_rate = +3.0 pp [95% CI -1.0, +7.0], p=0.12 And if balance criteria or sample minima are not met, the engine returns status='insufficient_balance' or status='insufficient_sample' without estimates
Metric Versioning, Backfill, and Semantic/API Exposure
Given metric verified_response_rate v1.0.0 defined as verified_responses/verifiable_invitations with effective_from=2024-01-01 And on 2025-03-15T00:00:00Z the definition is updated to exclude opt-outs from the denominator, creating v1.1.0 with a recorded changelog And a backfill is scheduled for [2024-01-01T00:00:00Z, 2025-02-28T00:00:00Z) When the backfill completes Then v1.1.0 values are computed and stored alongside v1.0.0, and historical partitions are updated idempotently with audit logs (job_id, started_at, completed_at, affected_ranges) And for the January 2025 fixture, v1.0.0=40.00% (48/120) and v1.1.0=41.03% (48/117), both retrievable by specifying metric_version And the semantic layer lists metric_key, version, effective_from, deprecated_at (nullable), owner, and definition_sql for each version And the API GET /metrics/timeseries returns deterministic results with schema {metric_key, metric_version, dimensions{}, start, end, value, numerator_count, denominator_count} And unauthorized requests return 401, invalid params return 400, and P95 latency ≤ 800 ms for queries scanning ≤ 1,000,000 rows
Trust Dashboard & Drilldowns
"As a broker-owner, I want clear dashboards with drilldowns so that I can spot gaps and prioritize policy changes that improve response quality and conversion."
Description

Deliver interactive dashboards that present portfolio-level trust KPIs, time-series trends, and comparisons of verified vs. unverified responses, with drilldowns from brokerage to office, agent, and listing. Include coverage heatmaps, target tracking, and filters for date range, MLS, property type, price band, verification method, and campaign. Enable saved views, shareable links, responsive layouts, and role-based data scoping. Optimize for fast query performance to support executive and ops workflows.

Acceptance Criteria
Portfolio KPIs & Target Tracking Overview
Given a user with access to a brokerage portfolio and the default date range is the last 30 days When the user opens the Trust Dashboard Then KPI cards display: Verified Response Rate, Unverified Response Rate, MLS Coverage %, Avg Response Quality Score, Lead-to-Showing Conversion Rate, and Verified vs Unverified Conversion Lift And the metrics are computed as: Verified Response Rate = verified_responses / total_responses; Unverified Response Rate = unverified_responses / total_responses; MLS Coverage % = listings_with_MLS / total_active_listings; Conversion Lift = verified_conversion_rate - unverified_conversion_rate And, given portfolio-level targets exist for these KPIs, each KPI shows the target value and an on-track/off-track indicator based on actual >= target And KPI values respect all active filters and match server-side aggregates for the same scope within 0.5%
Hierarchical Drilldowns from Brokerage to Listing
Given a user in brokerage scope with any active filters applied When the user clicks a KPI card, chart segment, or table row representing an Office Then the dashboard drills into Office context, updates all widgets to Office scope, and retains existing filters When the user selects an Agent within that Office Then the dashboard drills into Agent context with the same filters retained When the user selects a Listing for that Agent Then a listing-level view/panel is shown with listing-scoped metrics and trends And a breadcrumb (Brokerage > Office > Agent > Listing) is displayed and supports one-click navigation back to any higher level And totals at each level reconcile such that the sum of children equals the parent total within rounding tolerance of one record or 0.5%
Global Filters: Date, MLS, Property Type, Price Band, Verification Method, Campaign
Given the dashboard is loaded Then filters are available for Date Range, MLS, Property Type, Price Band, Verification Method, and Campaign And MLS, Property Type, Price Band, Verification Method, and Campaign support multi-select; Date Range supports custom start/end When multiple filters are applied across different dimensions Then results are computed using AND between dimensions and OR within multi-select values of the same dimension And clearing all filters resets the view to the system defaults and updates all widgets accordingly And the active filter state persists during the session across drilldowns and page navigations within the analytics area
Time-Series Trends and Verified vs. Unverified Comparison
Given a valid scope and filters When the user switches time granularity between day, week, and month Then time-series charts re-aggregate accordingly without changing totals for the same overall period When the user toggles Verified vs Unverified comparison Then separate series are displayed and a Conversion Lift series or label is calculated as verified_rate - unverified_rate for the selected metric And the sum of values in the time series for the period matches the corresponding KPI value within 0.5% And missing intervals within the selected date range render as zero values rather than being omitted And, when KPI targets exist, a target line/band is overlaid and an on-track/off-track badge is shown for the current period
Coverage Heatmaps & Gap Identification
Given geography metadata is available When the user opens the Coverage view Then a heatmap renders MLS Coverage % by available geography (e.g., state/county/ZIP) respecting all active filters And a toggle allows switching the color metric between Coverage % and Listings Count And tooltips display area name, Coverage %, Listings Count, and delta to target when targets exist And areas below target are visually highlighted according to the legend And zoom and pan interactions are supported without losing the current filter context
Saved Views and Shareable Links (with Role-Based Scoping)
Given a configured dashboard state (scope, filters, granularity, comparison, and drill level) When the user saves the view with a unique name Then the configuration is persisted and available in a Saved Views list for that user When the user selects a saved view Then the dashboard restores exactly the saved configuration state When the user generates a shareable link for the current state Then the URL encodes the full configuration and requires authentication to open And recipients only see data within their authorized scope; unauthorized dimensions are omitted or masked while preserving the layout And deleting a saved view removes it from the list and invalidates any associated share links
Performance and Responsive Layout SLAs
Given production-sized portfolios When the dashboard is first opened Then the first meaningful visualization renders within ≤ 2s at p50 and ≤ 5s at p95 When applying any single filter or drilldown Then all affected widgets update within ≤ 1.5s at p50 and ≤ 4s at p95 And no interaction shows a loading spinner for more than 6s And on viewports < 768px width the layout collapses to a single column without horizontal scrolling; on ≥ 768px it uses a multi-column grid with legends and controls visible or accessible via a collapse And charts and tables remain readable with labels not overlapping by more than 5% of data points across breakpoints
ROI Uplift Analysis
"As a seller-facing agent, I want verifiable ROI figures attributable to verification so that I can justify policies and win listings with transparent, data-backed results."
Description

Provide automated uplift analysis that quantifies the effect of verification on response quality and conversion outcomes, controlling for confounders such as listing age, price, location, seasonality, and agent tenure. Report estimated uplift with confidence intervals, sample sizes, and practical significance. Offer pre/post policy change views and scenario modeling to forecast gains from increasing verification rates, with plain-language callouts to aid decision-making and seller communications.

Acceptance Criteria
Causal Uplift Computation with Confounder Control
Given production listings with verification_status, listing_age_days, list_price, location_geo, season_week, agent_tenure_months, response_quality_score, and conversion_outcome When the nightly ROI job executes Then the system computes the ATE of verification on response_quality_score and conversion_outcome using a pre-specified doubly robust method with the listed covariates And produces covariate balance diagnostics with standardized mean differences < 0.10 for all covariates, else flags "imbalance" And enforces minimum sample size per group of >= 200 listings; if not met, flags "insufficient power" and suppresses the point estimate from dashboards And stores results with run_id, model_version, and UTC timestamp
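The balance diagnostic can be illustrated with the standardized mean difference. This sketch uses population standard deviations and a simple pooled form, which are common but assumed choices; the real covariate-balance machinery is not specified here:

```python
from math import sqrt
from statistics import mean, pstdev


def standardized_mean_difference(treated, control):
    """SMD between treated and control samples of one covariate;
    balance requires |SMD| < 0.10 per the criterion above."""
    pooled_sd = sqrt((pstdev(treated) ** 2 + pstdev(control) ** 2) / 2)
    return (mean(treated) - mean(control)) / pooled_sd


def is_balanced(treated_covariates, control_covariates):
    """All covariates must satisfy |SMD| < 0.10, else flag imbalance."""
    return all(abs(standardized_mean_difference(t, c)) < 0.10
               for t, c in zip(treated_covariates, control_covariates))
```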
Pre/Post Verification Policy Change Analysis
Given a user-selected policy change date and pre/post windows (e.g., 90 days before and after) When the pre/post analysis is run Then the system performs a difference-in-differences estimate of verification’s effect on conversion_outcome with seasonality and location fixed effects And renders a pre-trend check; if the slope-difference p-value < 0.05, displays a "violated pre-trend" warning And outputs estimate, 95% CI, and sample sizes by period/group in a results table and a time series chart with the policy change annotated And allows export of table (CSV) and chart (PDF/PNG)
Scenario Modeling to Forecast Gains from Increased Verification
Given a baseline verification rate and historical uplift estimates When the user sets a target verification rate via slider (0–100%) Then the system forecasts incremental conversions and high-quality responses for the next 90 days with 80% and 95% prediction intervals And displays expected ROI metrics: additional closings, median days-on-market reduction, and hours saved, using documented multipliers And warns "out-of-scope" if target is > ±10 percentage points outside observed historical verification rates And returns the forecast within 2 seconds for portfolios under 50,000 listings using nightly-updated cached models
Reporting of Uplift with Confidence Intervals and Practical Significance
Given valid uplift results are available When the user opens the Trust Analytics ROI view Then each metric card shows point estimate (percent uplift), 95% CI bounds, sample sizes (n_verified, n_unverified), and p-value And shows an effect size (Cohen’s h for proportions or Glass’s Δ for scores) and a practical significance badge per configured thresholds (small/medium/large) And includes metric definitions and data freshness timestamp And displayed values match stored results within rounding rules (±0.1 pp for rates, ±0.01 for scores)
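Cohen's h for two proportions has a closed form, h = 2·arcsin(√p₁) − 2·arcsin(√p₂), which makes the badge logic easy to sketch. The 0.2/0.5/0.8 cut-offs below are Cohen's conventional defaults; the criterion says thresholds are configurable, so treat them as placeholders:

```python
from math import asin, sqrt


def cohens_h(p1, p2):
    """Cohen's h effect size for two proportions."""
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))


def practical_significance(h):
    """Badge using conventional thresholds (assumed defaults, configurable)."""
    a = abs(h)
    if a >= 0.8:
        return "large"
    if a >= 0.5:
        return "medium"
    if a >= 0.2:
        return "small"
    return "negligible"
```

For example, verified vs. unverified conversion of 30% vs. 20% gives h ≈ 0.23, a "small" effect under these defaults.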
Plain-Language Decision Callouts for Broker/Seller Communication
Given an uplift estimate with confidence intervals and power/pre-trend flags When the narrative callout is generated Then it contains a 1–2 sentence summary at ≤ Grade 9 reading level stating magnitude, direction, and confidence (e.g., "We’re 95% confident verification increases conversion by 3–5 pp.") And includes one portfolio-tailored recommendation (e.g., "Raise verification to 85% to add ~12 closings in 90 days") with assumptions noted And suppresses recommendations when results are flagged "insufficient power" or "violated pre-trend" And provides copy-to-clipboard and an "Export for seller" mode with brokerage branding
Auditability and Reproducibility of ROI Analysis
Given an ROI analysis run is completed When the user selects "View methodology" Then the system displays model type, covariates, eligibility rules, weighting/matching method, and model_version And enables download of input cohorts and matched/weighted datasets as CSV with anonymized IDs And logs run_id, UTC timestamp, data snapshot ID, and user ID, accessible to admins And re-running on the same snapshot yields identical estimates within ±0.01 percentage points
MLS Coverage Map & Integration Health
"As an ops lead, I want visibility into MLS coverage and data health so that I can close gaps that undermine metric accuracy and policy decisions."
Description

Implement MLS coverage reporting that shows which MLSes are connected, partially covered, or missing, along with data completeness for required fields. Track integration health SLAs including data freshness, error rates, and missing records, surfacing issues via a geographic map and tabular breakdown. Provide exports and remediation guidance to close coverage gaps that degrade trust metrics.

Acceptance Criteria
MLS Coverage Map Visualization and Status Legend
Given I am viewing the Trust Analytics > MLS Coverage page When the map loads Then MLS regions in my portfolio render within 3 seconds with a legend showing Connected (green), Partial (amber), Missing (red)
Given the legend is visible When I hover each status Then a tooltip displays the rule definition: Connected = 100% pipelines active and >= 98% required-field completeness; Partial = at least one pipeline active but completeness < 98% or any required field < 95%; Missing = no active pipeline
Given the map is displayed When I click an MLS region Then a side panel shows its status, last updated timestamp (UTC), and counts of listings in scope
Given there are no MLSs configured for my org When I load the page Then the map area shows "No MLS configured" with a link to setup documentation
Per-MLS Data Completeness Scoring for Required Fields
Given an MLS is selected When completeness is calculated Then for each required field (Listing ID, Address, List Price, Bedrooms, Bathrooms, Square Footage, Photos, Status) the UI displays completeness percentage = non-null records / total active listings in scope, rounded to 1 decimal
Given completeness metrics are shown When a field has completeness < 95% Then it is flagged with a warning icon and included in the Remediation list
Given I change the date range When metrics refresh Then percentages recompute for that window and match backend API values within ±0.1%
Given an MLS has zero active listings in the period When metrics are requested Then the UI displays "N/A" for completeness and does not flag as failure
Integration Health SLA Monitoring (Freshness, Error Rate, Missing Records)
Given an MLS integration exists When viewing Integration Health Then the system shows data freshness (max ingest lag in minutes), error rate (% failed jobs), and missing records (% discrepancy vs MLS source count) for the selected date range
Given SLA thresholds are configured (Freshness ≤ 60 min, Error rate ≤ 1%, Missing records ≤ 2%) When a metric exceeds its threshold Then the status chip shows Red; if within 20% of threshold, shows Amber; otherwise Green
Given I hover a metric When the tooltip appears Then it shows numerator/denominator, sample times, and last successful sync timestamp
Given a pipeline outage occurs When status is Red Then an incident banner appears with incident ID and link to details
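The chip logic for these lower-is-better metrics can be sketched as below. "Within 20% of threshold" is interpreted here as the value reaching 80% of the threshold or more — that reading is an assumption, not confirmed by the spec:

```python
def sla_chip(value, threshold):
    """Status chip for a lower-is-better SLA metric: Red when breaching,
    Amber when within 20% of the threshold, Green otherwise.

    Assumption: 'within 20% of threshold' means value >= 0.8 * threshold.
    """
    if value > threshold:
        return "red"
    if value >= 0.8 * threshold:
        return "amber"
    return "green"
```

For example, with the Freshness SLA of 60 minutes, a 50-minute lag would show Amber and a 30-minute lag Green under this interpretation.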
Tabular Breakdown with Filters and Drill-Through
Given I open the MLS Breakdown tab When data loads Then a table lists one row per MLS with columns: MLS Name, Status, Freshness (min), Error Rate (%), Missing Records (%), Completeness Score (%), Listings in Scope, Last Updated (UTC)
Given the table is displayed When I sort by any column Then rows reorder accordingly and the sort icon reflects direction
Given filters for status, geography, and broker office When I apply them Then both table and map reflect the same filtered set
Given I click an MLS row When the detail drawer opens Then it shows field-level completeness, last 10 errors with timestamps, and a link to remediation guidance
Export of Coverage and Integration Health Metrics
- Given filters and date range are set When I click Export Then CSV and XLSX options are offered
- Given I choose CSV When export completes Then the file downloads within 10 seconds for up to 10,000 rows and contains the current filtered set with columns matching the table plus a field-level completeness JSON column
- Given I choose XLSX When export completes Then number and percentage formats are preserved, timestamps are UTC ISO 8601, and the first sheet is named MLS Breakdown
- Given an export is triggered When generation fails Then the user sees a retryable error with correlation ID and no partial file is downloaded
Remediation Guidance and Actions
- Given an MLS shows Partial status or a failing SLA When I open Remediation Then I see prioritized guidance including likely causes (missing credentials, field mapping gaps, API rate limits) and step-by-step fix instructions
- Given guidance includes automated actions When I click Request Re-sync Then the system queues a re-ingest job, records my user ID and timestamp, and shows job status until completion
- Given guidance includes contact options When I click Contact Support Then a prefilled ticket opens containing MLS ID, org ID, failing metrics, and last 24h logs summary
- Given a remediation action resolves an issue When the next sync completes Then the MLS status and metrics update within 5 minutes and historical trend reflects the change
Alerts & Policy Recommendations
"As a brokerage operations manager, I want proactive alerts and recommended fixes so that I can quickly correct trust gaps before they impact conversion."
Description

Introduce configurable alerts when verification rates fall below thresholds, response quality declines, or MLS coverage gaps emerge. Deliver notifications via in-app, email, and Slack with context, impacted entities, and links to drilldowns. Generate targeted, data-driven recommendations (e.g., enforce verification on specific listings, adjust door-hanger copy) with estimated impact based on historical uplift. Include suppression windows, severity levels, and alert fatigue controls.

Acceptance Criteria
Alert Triggering Across Key Metrics
Given organization-, office-, and listing-level thresholds are configured for verification rate (T_verif), response quality score (T_quality), and MLS coverage (T_mls), and an evaluation cadence of 1 hour over a rolling 7-day window is active When any entity’s 7-day verification rate is < T_verif for 2 consecutive evaluations OR the 7-day average response quality is < T_quality OR the week-over-week change in response quality is ≤ -D% (configurable) OR portfolio MLS coverage is < T_mls OR any listing has an unmapped/unknown MLS Then create a single new alert instance capturing metric type, scope, current value, threshold, baseline delta, evaluation window, and computed severity And do not create a duplicate alert if an open alert exists with the same metric+entity root cause
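The "single new alert instance, no duplicate for the same metric+entity root cause" rule above amounts to keying open alerts on that pair; a minimal sketch (the dict-backed store is an assumption for illustration):

```python
def maybe_create_alert(open_alerts, metric, entity, payload):
    """Create at most one open alert per (metric, entity) root cause;
    while one is open, further breaches do not spawn duplicates."""
    key = (metric, entity)
    if key in open_alerts:
        return None  # duplicate root cause: no new instance
    open_alerts[key] = payload  # payload: value, threshold, severity, ...
    return payload
```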
Multi-Channel Notification Delivery and Fallback
Given delivery preferences for target recipients include in-app, email, and/or Slack with configured endpoints When a new alert instance is created Then deliver an in-app notification to targeted users within 60 seconds of creation And send an email with subject formatted as "[Severity] [Metric] alert for [Entity]" and body containing context and drilldown link within 60 seconds And post a Slack message to the configured channel/user with structured blocks including title, metric snapshot, impacted entities count, severity, and a deep link within 60 seconds And if Slack returns a non-2xx response, retry once; on second failure, mark Slack delivery failed, suppress further Slack retries for this instance, and ensure email delivery proceeds And record per-channel delivery status, timestamp, and any error code in alert metadata
Contextual Payload and Drilldown Linking
Given an alert is delivered via any channel Then the payload includes metric name, evaluation window, threshold, current value, delta vs baseline, severity, trigger timestamp, and impacted entities (listing IDs and/or offices), listing up to 25 with a "+N more" indicator And the drilldown link opens Trust Analytics pre-filtered to the alert’s metric, entity scope, and time window, displaying values matching those in the alert payload And following the link marks the alert instance as Viewed with viewer identity and timestamp
Data-Driven Policy Recommendations with Estimated Impact
Given an alert is created for a decline in verification, response quality, or MLS coverage When generating recommendations Then produce at least one recommendation from the catalog (e.g., enforce verification on select listings, adjust door-hanger copy, expand MLS mapping), or explicitly state "No recommendation" with reason when insufficient data And each recommendation includes: action text, affected scope, estimated uplift (absolute and %) with 80% confidence interval derived from historical cohorts, rationale summary, estimated time-to-impact, and required permissions And recommendations are ranked by expected value; the top recommendation is displayed first And include Apply and Dismiss controls; Apply creates a draft configuration change for admin review; Dismiss requires a reason and suppresses the same recommendation type for the alert’s root cause for the suppression window
Suppression Windows and Alert Fatigue Controls
Given a suppression window W and daily per-user cap C are configured When an alert triggers for the same metric+entity root cause while an alert instance is open or within W of the last notification Then do not create or notify a duplicate unless severity has increased And total alerts delivered to a user do not exceed C in a rolling 24-hour period; additional non-Critical alerts are batched into a daily digest And a user may Snooze an alert for S hours, suppressing all channel notifications for that instance while preserving state And an alert auto-closes only after the metric returns within threshold for 2 consecutive evaluations; a new breach after closure creates a new alert subject to suppression rules
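The suppression-window and daily-cap rules above can be sketched as one decision function; the three-way outcome ("suppress" / "digest" / "notify") and the severity ranking are illustrative assumptions:

```python
from datetime import datetime, timedelta

SEV_RANK = {"Low": 0, "Medium": 1, "High": 2, "Critical": 3}

def should_notify(now, last_notified_at, window_hours,
                  delivered_today, daily_cap, severity, open_severity=None):
    """Suppress duplicates inside window W unless severity increased;
    batch non-Critical alerts over the daily cap C into the digest."""
    in_window = (last_notified_at is not None and
                 (now - last_notified_at).total_seconds() < window_hours * 3600)
    escalated = (open_severity is not None and
                 SEV_RANK[severity] > SEV_RANK[open_severity])
    if in_window and not escalated:
        return "suppress"
    if severity != "Critical" and delivered_today >= daily_cap:
        return "digest"
    return "notify"
```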
Severity Levels, Routing, and Escalation
Given severity levels Low, Medium, High, Critical are defined with mappings from deviation magnitude and projected business impact When an alert is created Then compute severity per the configured mapping and persist it with the alert And route notifications per severity rules (e.g., Critical -> exec distribution + Slack priority channel; Medium -> ops distribution), honoring user preferences And if an alert remains Unresolved for E hours, escalate severity by one level once and re-route accordingly, without exceeding daily caps or violating suppression And include severity, routing decisions, and any escalations in the alert audit trail
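The severity mapping and one-time escalation described above might look like the following; the deviation cut-offs are placeholder assumptions (the spec says they are configured per organization):

```python
SEVERITY = ["Low", "Medium", "High", "Critical"]

def severity_for(deviation_pct):
    """Illustrative mapping from deviation magnitude to severity;
    real cut-offs come from the configured mapping."""
    if deviation_pct >= 50:
        return "Critical"
    if deviation_pct >= 25:
        return "High"
    if deviation_pct >= 10:
        return "Medium"
    return "Low"

def escalate_once(current, already_escalated):
    """After E unresolved hours, bump severity one level, exactly once;
    Critical alerts have nowhere higher to go."""
    if already_escalated or current == "Critical":
        return current, already_escalated
    return SEVERITY[SEVERITY.index(current) + 1], True
```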
Admin Configuration of Thresholds, Channels, and Controls
Given an admin with Alerts permissions When the admin sets per-metric thresholds (org/office/listing), evaluation windows, delivery channels by role/team, suppression windows, daily caps, and severity mappings Then the system validates inputs (type, ranges, conflicts) and saves versioned configurations with effective-from timestamps and who/when metadata And changes take effect on the next evaluation cycle without impacting open alerts; future alerts use the latest effective config And a Test Alert function simulates each metric, producing a sandbox alert and exercising delivery to a sandbox channel/email without affecting production counters
Seller-Facing ROI Report Export
"As a listing agent, I want a polished, shareable ROI report so that I can demonstrate the value of verification and our policies to sellers in a credible, digestible format."
Description

Enable generation of branded, seller-ready PDF or secure-link reports that summarize trust KPIs, verification impact, actions taken, and listing-specific highlights. Include charts, plain-language explanations, and benchmarks against office or market averages. Support scheduling and on-demand creation with access controls, watermarks, and share tracking to help agents demonstrate ROI credibly and efficiently.

Acceptance Criteria
On-Demand Seller ROI PDF Generation
Given an authenticated listing agent or broker-owner with access to listing(s) and Trust Analytics and a selected date range, When they click “Generate Seller ROI Report (PDF)”, Then a downloadable PDF is produced within 30 seconds at the 95th percentile; Then the PDF includes sections: Cover (brokerage branding), Executive Summary, Trust KPIs (verified vs. unverified rates), Verification impact on response quality and conversion, Actions Taken, Listing-Specific Highlights, Charts, Benchmarks vs office and market averages, and Methodology/Definitions; Then all charts render without truncation and include axis labels and numeric values; Then all figures match the Trust Analytics dashboard for the same filters within ±0.5 percentage points and ±1 count; Then the file name follows TourEcho_ROI_<ListingID or Portfolio>_<YYYYMMDD>.pdf; Then the PDF has embedded fonts, selectable text, and is tagged for accessibility; Then if generation fails, the user sees an error with a retry option and the failure is logged with correlation ID; Then repeated requests with the same parameters within 5 minutes return the same artifact (idempotent).
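The file-naming convention and the 5-minute idempotency requirement above suggest deriving a stable key from the request parameters; a sketch, with the hashing scheme as an assumption:

```python
import hashlib
from datetime import datetime, timezone

def report_filename(scope, when=None):
    """TourEcho_ROI_<ListingID or Portfolio>_<YYYYMMDD>.pdf"""
    when = when or datetime.now(timezone.utc)
    return f"TourEcho_ROI_{scope}_{when:%Y%m%d}.pdf"

def idempotency_key(params):
    """Identical parameters within the 5-minute window must return the
    same artifact; a hash of the sorted parameters is a natural cache key."""
    canonical = "&".join(f"{k}={params[k]}" for k in sorted(params))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Sorting the parameters before hashing makes the key insensitive to the order in which the client sends them.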
Scheduled Seller ROI Report Delivery
Given a broker-owner schedules a recurring Seller ROI Report with cadence (daily/weekly/monthly), local timezone, recipient list, and delivery mode (secure link only or PDF attachment allowed), When the scheduled time occurs, Then the system generates the report using the latest data and dispatches it within 10 minutes; Then email subjects follow “TourEcho ROI — <Listing/Portfolio> — <YYYY‑MM‑DD>” and include a secure link (and attachment if allowed); Then secure links default to a 14‑day expiry (editable 1–60 days); Then schedules honor a “Skip if no changes since last report” toggle; Then delivery failures are retried up to 3 times with exponential backoff and a failure notification is sent to the owner; Then schedule edits take effect for the next run and are tracked in an audit log; Then recipients who unsubscribe stop receiving future scheduled emails for that report.
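The retry rule above ("up to 3 times with exponential backoff, then notify the owner") can be sketched as follows; the base delay and factor are assumptions, and the sleeps are elided:

```python
def retry_delays(base_seconds=60, retries=3, factor=2):
    """Backoff schedule for the three retries; base and factor here
    are illustrative, not product values."""
    return [base_seconds * factor ** i for i in range(retries)]

def deliver_with_retry(send, notify_owner):
    """One initial attempt plus up to 3 retries; on final failure the
    schedule owner is notified. `send(attempt)` returns True on success."""
    for attempt in range(4):
        if send(attempt):
            return True
    notify_owner()
    return False
```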
Secure Link Access Controls and Watermarking
Given an agent creates a secure share link with configured expiry, optional passcode, and permissions (view‑only or download allowed), When a recipient opens the link, Then a valid token is required and, if a passcode is set, it must be entered correctly before content is shown; Then expired or revoked links return HTTP 403 with a branded error page; Then view‑only links disable download/print endpoints and prevent raw PDF URL access; Then every page displays a diagonal watermark "Confidential — <SellerName>/<ListingID> — <ViewerIdentifier or LinkID> — <Date>" at 10–15% opacity; Then watermarks render in both browser view and downloaded PDFs; Then each access attempt (success/fail) is logged with timestamp and IP.
Share Tracking and Analytics
Given a Seller ROI Report is shared via secure link or email, When a recipient views, downloads, or reshares via built‑in Share, Then the system records an event with UTC timestamp, action type (view/download/reshare), viewer identifier when available (email or LinkID), IP‑derived city/region, device type, and referrer; Then the agent can see aggregate and per‑recipient analytics in Trust Analytics within 5 minutes of the event; Then total views, unique viewers, first/last viewed, and downloads are displayed and exportable to CSV; Then analytics respect access revocations and exclude events after revocation; Then data is retained for 12 months and deleted thereafter; Then tracking pixels and link parameters comply with the product’s privacy settings and can be disabled per report.
Brand Customization and Benchmarking
Given a brokerage brand profile with logo (SVG/PNG), primary/secondary colors, and legal disclaimer text, When a Seller ROI Report is generated, Then the cover and chart elements apply brand colors while meeting contrast ratios ≥ 4.5:1 for body text and ≥ 3:1 for large text; Then the brokerage logo appears on the cover and in the footer on all pages at ≥ 150 DPI; Then the legal disclaimer renders on the final page and does not overlap content; Then benchmark sections show office and market averages alongside the listing’s metrics with sample sizes and date ranges; Then if sample size for a benchmark is < 30, a caution note “Limited sample size” is displayed; Then each benchmark cites its source and as‑of date.
Listing-Specific Highlights and Plain-Language Explanations
Given a listing has feedback, sentiment, and room‑level objection data in Trust Analytics for the selected period, When the report is generated, Then the Highlights section includes 3–5 top themes with counts and sentiment trend vs prior period; Then the report lists the top 3 room‑level objections with counts and trend arrows; Then an Actions Taken section shows dated actions, owners, and status; Then the explanatory narrative is AI‑generated at Flesch‑Kincaid grade ≤ 8, free of detected PII (email, phone, full names) per platform redaction rules, and includes per‑metric definitions; Then sensitive comments can be hidden via a toggle and are excluded when the toggle is on; Then all inline callouts reference chart labels exactly and include footnotes for methodology.

IdP Templates

Prebuilt SAML/OIDC configs for Okta, Azure AD, Google, and OneLogin with a guided wizard, copy‑paste claim maps, and one‑click metadata exchange. Cuts setup from hours to minutes, reduces errors, and lets Insights Integrator Ivy or Expansion Ops Ethan get offices live fast with confidence.

Requirements

IdP Template Library
"As an Insights Integrator Ivy, I want to select a prebuilt template for our IdP so that I can configure SSO quickly without memorizing provider-specific details."
Description

Provide prebuilt, versioned SAML 2.0 and OIDC configuration templates for Okta, Azure AD, Google Workspace, and OneLogin that auto-populate provider-specific endpoints, default ACS/redirect URIs, expected NameID/claims, signing/encryption requirements, and recommended security settings. Templates should be maintainable as JSON/YAML assets with semantic versioning and release notes. The library integrates with TourEcho’s auth service to instantiate a connection in one step, while allowing overrides for advanced admins. Expected outcome: reduces setup time and misconfiguration rates by offering battle-tested defaults aligned to each IdP’s conventions.

Acceptance Criteria
Auto-Populate Provider Endpoints and URIs
- Given I select the Okta SAML template and choose the deployment environment, When the template loads, Then IdP SSO URL, IdP EntityID, ACS URL(s) using the correct environment base domain, NameID format=emailAddress, SignedAssertions=true, and recommended security flags are auto-populated.
- Given I select the Azure AD OIDC template and choose the deployment environment, When the template loads, Then authorization, token, userinfo, and JWKS endpoints; redirect URIs using the correct environment base domain; scopes=openid email profile; response_type=code; and id_token algorithm=RS256 are auto-populated.
- Given I select the Google Workspace SAML or OneLogin OIDC template, When the template loads, Then provider-specific endpoints, URIs, and default claim/NameID expectations are auto-populated from the embedded template metadata.
One-Step Connection Instantiation
- Given a prefilled template and required inputs (tenant identifier and domain), When I create a connection via UI or POST /connections with templateId and inputs, Then a connection is created and returned with id, provider, protocol, templateVersion, status in {pending, active}, and defaults applied.
- Then the configuration is persisted and retrievable via GET /connections/{id} with values matching the request and template defaults.
- Then the operation completes within p95 ≤ 5s in staging and p95 ≤ 10s in production.
- Then an audit event "connection.created" is recorded including actor, templateId, templateVersion, and a list of overridden fields (if any).
Template Versioning and Release Notes
- Given the template assets repository, When I fetch any template, Then it contains id, provider, protocol (SAML|OIDC), version (semver x.y.z), createdAt, updatedAt, releaseNotesUrl, schemaVersion, and optional deprecates field.
- Given multiple versions exist, When I request a specific version by id+version, Then that exact version is returned and used for instantiation without auto-upgrading.
- Given a major version upgrade exists, When I attempt to upgrade an existing connection, Then I receive a breaking-change warning, a migration guide link from releaseNotesUrl, and must confirm before upgrade; after upgrade, the connection stores the new templateVersion and an audit event is recorded.
Advanced Overrides Before Save
- Given a loaded template, When I edit defaults (e.g., NameID format, claim map, redirect URIs), Then validation enforces schema and provider constraints and blocks save with clear inline errors on invalid entries.
- When I save with valid overrides, Then the connection persists overridden values, retains a reference to base templateId and templateVersion, and records a list of overridden fields.
- When I select Reset to Defaults, Then overridden fields revert to the base template values prior to save, and after save the connection matches the template defaults for those fields.
Security Defaults Enforcement
- Given SAML templates, When a connection is created, Then SignedAssertions=true is enforced, unsigned assertions are rejected, minimum signing algorithm SHA-256 is required, and clock skew tolerance ≤ 3 minutes is applied.
- Given OIDC templates, When a connection is created, Then response_type=code with PKCE required for public clients, implicit and hybrid flows disabled by default, state and nonce are validated, id_token signed with RS256 or ES256, and JWKS cache TTL is honored.
- Given any template, When a non-HTTPS endpoint is configured, Then save is blocked with a validation error requiring TLS.
Provider Coverage Completeness
- Given the template library, When I list supported templates, Then Okta, Azure AD, Google Workspace, and OneLogin are present with at least one SAML 2.0 and one OIDC template where the provider supports the protocol.
- When templates are validated against the schema, Then 100% pass with no missing required fields (endpoints, ACS/redirect URIs, claim/NameID expectations, signing/encryption flags).
- When running endpoint pattern tests, Then all templates' endpoints match documented provider base URLs and paths for the selected region/tenant model.
Setup Time and Misconfiguration Reduction
- Given a standardized onboarding test across 10 representative offices, When configuring SSO using templates from scratch, Then median time-to-first-successful-login is ≤ 10 minutes and p90 ≤ 15 minutes.
- Given the same cohort, When measuring first-attempt outcomes, Then misconfiguration rate attributable to template defaults is ≤ 5% on first attempt and ≤ 1% after one guided retry.
Guided SSO Setup Wizard
"As Expansion Ops Ethan, I want a guided setup with validation and a test login so that I can turn on SSO confidently without breaking access for agents."
Description

Deliver a step-by-step wizard that walks admins through provider selection, protocol choice (SAML/OIDC), metadata method (URL/XML upload/manual), domain/tenant scoping, claims/NameID selection, and enablement. Include inline validations, contextual help, and embedded code blocks/instructions to copy into the IdP admin console. Provide a sandbox test within the wizard to perform a non-disruptive login with a test user before activation. Integrates with TourEcho auth APIs to create/update the connection atomically and rolls back on validation failure. Expected outcome: compresses setup into minutes and prevents partial or invalid configurations.

Acceptance Criteria
Provider & Protocol Selection Flow
Given an authenticated org admin opens the wizard When they view Step 1 Then the provider list includes Okta, Azure AD, Google, OneLogin, and Other, and a protocol choice of SAML or OIDC is required before Next is enabled. Given a provider and protocol are selected When the admin clicks Next Then Step 2 displays only fields relevant to the chosen protocol and provider and the selection is persisted if the admin navigates Back/Next. Given no selection is made When the admin attempts to continue Then the Next button remains disabled and an inline prompt indicates what is missing.
Metadata Ingestion & Validation
Given method=URL When the admin enters an https metadata URL and clicks Validate Then the system fetches within 5 seconds, verifies TLS, parses metadata, and surfaces issuer/entityID/cert summary; on parse error, a specific error is shown and Next remains disabled. Given method=XML upload When an XML file up to 2 MB is uploaded Then the system validates the XML schema, extracts certificates and endpoints, and flags certificate expiration dates less than 7 days away as warnings. Given method=Manual When required fields are entered (ACS/Redirect URI, EntityID/ClientID, Issuer, JWKS/Cert) Then format validations run and Next is enabled only when all required fields pass.
Domain/Tenant Scoping Rules
Given the scoping step When the admin adds one or more email domains (1–50) or a tenant/Directory ID Then entries are validated for format and duplicates are prevented. Given users attempt the sandbox login When the user’s email domain or IdP tenant does not match the configured scope Then access is denied with a scoped access message and no connection changes are persisted. Given scoping is modified When the admin proceeds to Enable Then only users within the configured scope are permitted to authenticate via the connection.
Claims & NameID Mapping with Copyable Instructions
Given the mapping step When a provider is selected Then provider-specific example claim maps and NameID guidance are displayed with a Copy button that copies plain text to clipboard and confirmation toast appears. Given SAML is selected When the admin defines NameID format and required attributes (email, firstName, lastName) Then the wizard validates presence of these in the sandbox assertion and flags missing attributes by name. Given OIDC is selected When the admin selects scopes (openid, email, profile) and maps claims (sub, email, given_name, family_name) Then the wizard validates the token contains required claims and rejects tokens missing email or sub.
Sandbox Test Login (Non-Disruptive)
Given configuration is valid When the admin clicks Test Connection Then a test auth flow is initiated in an isolated context that does not affect production login routes. Given the test completes When the IdP returns tokens/assertion Then results show success/failure, round-trip time, issuer, subject, and mapped claims, and store an audit record with timestamp, test user, and outcome. Given no successful test in the last 30 minutes When the admin attempts to Enable SSO Then the action is blocked with a prompt to run a successful test.
Atomic Create/Update with Rollback
Given an admin clicks Enable SSO When the system creates or updates the connection via TourEcho auth APIs Then the operation is executed as a single atomic transaction with an idempotency key; on any error the connection state is unchanged and the user sees a failure summary. Given a partially valid configuration When server-side validation fails Then all changes are rolled back, a clear error explains the failing field(s), and the wizard returns the admin to the relevant step with values intact. Given a network timeout or API error When the admin retries Then the idempotency key prevents duplicate connections and eventual success reflects a single enabled connection.
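The idempotency-key behavior above — a retry after a timeout must not produce a second connection — can be sketched with a keyed store; the dict-backed store and return shape are illustrative assumptions:

```python
def create_connection(store, idem_key, config):
    """Atomic create with an idempotency key: replaying the same key
    returns the original connection instead of creating a duplicate.
    Returns (connection, created_flag)."""
    if idem_key in store:
        return store[idem_key], False  # replayed request, not re-created
    conn = {"id": f"conn-{len(store) + 1}", "config": dict(config)}
    store[idem_key] = conn
    return conn, True
```

In practice the key would be generated client-side per wizard session, so a network timeout followed by a retry resolves to the single enabled connection the spec requires.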
Inline Validation, Help, and Performance
Given any required field is empty or invalid When the admin moves focus away or attempts Next Then inline field-level errors appear within 200 ms and prevent progress until resolved. Given help icons are clicked When the admin opens contextual help Then provider-specific instructions and embedded code blocks (ACS URL, EntityID, Redirect URI, JWKS URL) render within 300 ms and are copyable. Given typical lab network conditions (≈50 ms latency) When completing the wizard end-to-end with correct inputs for a supported IdP Then median time to complete is ≤ 8 minutes measured across 10 runs.
One-Click Metadata Exchange
"As a broker-owner admin, I want to exchange metadata in one click so that I avoid manual copy/paste errors and stay ahead of certificate changes."
Description

Enable admins to generate and download SP metadata (SAML) or OIDC client configuration, and to import IdP metadata via URL or XML for automatic parsing of certificates, endpoints, and bindings. Where supported, allow a single action to fetch and apply remote metadata updates, with scheduled refresh for certificate rotations and expiry warnings. Expose a copy-ready Redirect URI, EntityID, and well-known endpoints. Integrates with certificate stores and verifies signatures during import. Expected outcome: eliminates manual data entry and keeps connections healthy through automated updates.

Acceptance Criteria
Generate SP Metadata (SAML) for New Connection
Given an admin opens the IdP Templates wizard and selects SAML When the admin clicks Generate SP Metadata Then a metadata XML file is generated and downloaded within 2 seconds as application/samlmetadata+xml And the XML validates against SAML 2.0 metadata schema And EntityID equals the value shown in the UI for this workspace And AssertionConsumerService includes the displayed Redirect/ACS URL with HTTP-POST binding And SingleLogoutService is included only if logout is enabled; otherwise omitted And KeyDescriptor with signing certificate is included only if SP signing is enabled, sourced from the platform certificate store And the file name follows <org>-<env>-sp-metadata.xml
Import IdP Metadata via URL with Signature Verification
Given a reachable HTTPS IdP metadata URL is provided When the admin clicks Import Then the system fetches the URL following up to 3 redirects with a 10-second timeout And verifies the XML signature if present using the embedded certificate or a trusted chain And rejects the import if the signature is invalid with a clear error message And if no signature is present, shows a warning and requires explicit confirmation to proceed And on success, parses and displays IdP entityID, SingleSignOnService endpoints (HTTP-Redirect and HTTP-POST), NameIDFormats, and signing certificates And persists parsed certificates to the platform certificate store and records their fingerprints And writes an audit log entry with URL, certificate fingerprints, and admin ID
Import IdP Metadata via XML File and Parse Bindings
Given an admin uploads an .xml metadata file up to 2 MB When the admin clicks Validate & Import Then the XML is validated against SAML 2.0 metadata schema and fails with line/column errors if invalid And the system parses HTTP-Redirect and HTTP-POST bindings, entityID, NameIDFormats, and signing certificates And if multiple certificates are present, the cert with the latest NotBefore is set as active and others stored as secondary And parsed values are shown in a confirmation screen where endpoints can be edited before save And on save, certificates are stored in the certificate store and their fingerprints displayed in the UI And an audit log records file checksum, active certificate fingerprint, and admin ID
One-Click Remote Metadata Refresh and Apply Changes
Given a configured connection with a metadata URL and auto-refresh enabled When the admin clicks Refresh Now Then the system fetches the latest metadata and compares it to the current configuration And applies changes atomically, updating endpoints and rotating signing certificates without downtime And preserves existing attribute/claim maps and group mappings And if a new signing certificate overlaps with the current one, both are accepted during the overlap window And a diff summary (endpoints changed, certificates added/removed) is shown and saved to the audit log And on validation failure, no changes are applied and a rollback occurs automatically with an error message
Scheduled Metadata Refresh and Certificate Rotation Handling
Given auto-refresh is scheduled daily at an off-peak window When the remote IdP publishes a new signing certificate with a future NotBefore Then the system stages the new certificate as secondary and accepts both old and new until NotBefore is reached And automatically promotes the new certificate at NotBefore and retires the old at NotAfter And sends expiry warnings at 30, 7, and 1 days before the current certificate’s NotAfter via in-app alert and email And if three consecutive scheduled fetches fail, auto-refresh is paused and an alert is raised And all refresh outcomes are logged with timestamps and certificate fingerprints
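The certificate-overlap and expiry-warning rules above reduce to simple window checks on NotBefore/NotAfter; a sketch, with the cert dict shape assumed for illustration:

```python
from datetime import datetime

def active_certs(certs, now):
    """Accept every certificate whose validity window contains `now`:
    the staged cert becomes valid at NotBefore, the old one retires at
    NotAfter, so both are accepted during the overlap."""
    return [c for c in certs if c["not_before"] <= now <= c["not_after"]]

def expiry_warnings(cert, now, days=(30, 7, 1)):
    """Return the warning tiers (30/7/1 days before NotAfter) that the
    current time has crossed."""
    remaining = (cert["not_after"] - now).days
    return [d for d in days if remaining <= d]
```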
Expose Copy-Ready Redirect URI, EntityID, and Well-Known Endpoints
Given an admin opens the connection details page When the page renders Then the UI displays copy buttons for Redirect/ACS URI(s), EntityID (SAML), and OIDC issuer/.well-known endpoints And clicking Copy places the exact value on the clipboard and shows a confirmation toast And OIDC discovery URL, authorization, token, userinfo, and jwks_uri endpoints are shown and reachable (HTTP 200) from the admin’s network And secrets are never revealed in plain text beyond initial creation events And a Download button is available for SAML metadata XML and OIDC client JSON
OIDC Client Configuration Generation and Download
Given an admin selects OIDC in the wizard When the admin clicks Generate Client Then a client_id is created and a client_secret is generated, shown once and copyable, then stored hashed And redirect_uris and post_logout_redirect_uris match the values shown in the UI And the client supports authorization_code with PKCE; implicit is disabled by default And a JSON file with client configuration (including discovery URL) downloads within 2 seconds as application/json And the JWKS endpoint is reachable (HTTP 200) and returns at least one active signing key And generating a new secret invalidates the previous one and is recorded in the audit log
Claims and Role Mapping UI
"As an Insights Integrator Ivy, I want template-based claim and group-to-role mapping so that users land with the correct permissions on first login."
Description

Provide an interface with provider-specific, copy-paste-ready claim/attribute maps that align IdP attributes (email, name, groups) to TourEcho user fields and roles (Agent, Broker Owner, Insights Integrator, Expansion Ops). Support default mappings per template with the ability to override, preview resolved claims from a test assertion, and define group-to-role rules. Include JIT user creation toggles with safe defaults and domain allowlists. Expected outcome: ensures the right users receive correct access without custom scripting, reducing support tickets and mis-permissioning.

Acceptance Criteria
Okta Template Defaults Saved and Persisted
Given I am configuring an Okta IdP using the IdP Template When I select Use Defaults for claim/attribute mappings and click Save Then the mapping shows TourEcho.email -> email, TourEcho.firstName -> given_name, TourEcho.lastName -> family_name, TourEcho.groups -> groups And the UI confirms Saved successfully And reloading the configuration shows the same persisted mappings
Azure AD Email Mapping Override with Preview Validation
Given I have loaded the Azure AD template with default mappings When I change the email mapping to userPrincipalName and upload a valid test assertion containing userPrincipalName Then the Preview displays resolved Email equal to the assertion's userPrincipalName value And clicking Save persists the override And re-opening Preview uses the override without errors
Group-to-Role Rules Resolve Highest-Priority Role
Given I define group-to-role rules with explicit priority: (1) groups contains "BrokerOwners" -> Broker Owner, (2) groups contains "ExpansionOps" -> Expansion Ops, (3) groups contains "Insights" -> Insights Integrator, (4) groups contains "Agents" -> Agent When I preview a test assertion whose groups include ["Agents","BrokerOwners"] Then exactly one role is assigned according to highest priority and the resolved role is Broker Owner And if no rules match, the user is not assigned a role and sign-in is blocked
JIT Provisioning Requires Domain Allowlist and Enforces It
Given JIT User Creation is disabled by default When I toggle JIT on without adding any allowed email domains and click Save Then Save is blocked and a validation error states a domain allowlist is required When I add ["brokco.com","teamexpand.com"] to the allowlist and click Save Then a first-time user with verified email alice@brokco.com is created on sign-in and assigned the resolved role And a first-time user with email bob@other.com is not created and sign-in is denied And existing users can sign in whether JIT is on or off
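The JIT gate in this criterion is a case-insensitive domain check applied only to first-time users; a minimal sketch under that assumption (the function name is illustrative):

```python
# Sketch of the JIT allowlist gate: existing users always proceed, while a
# new user is created only if their verified email's domain is on the
# allowlist. Comparison is case-insensitive. Function name is illustrative.

def jit_allows_creation(email: str, allowlist: list[str]) -> bool:
    """Return True if a first-time user with this email may be provisioned."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in {d.lower() for d in allowlist}
```

With allowlist ["brokco.com", "teamexpand.com"], alice@brokco.com is provisioned and bob@other.com is denied, matching the criterion.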
Copy Provider-Specific Claim Map to Clipboard (OneLogin)
Given I have selected the OneLogin template and reviewed the mapping table When I click Copy Mapping Then the clipboard contains valid JSON mapping that maps provider claims ["email","given_name","family_name","groups"] to TourEcho fields ["email","firstName","lastName","groups"] And the copied JSON matches the on-screen mapping exactly And a confirmation toast indicates the mapping was copied
Parse and Preview SAML and OIDC Test Assertions with Error Handling
Given protocol SAML is selected When I upload a valid SAML XML assertion containing the mapped claims Then the Preview shows extracted values for email, firstName, lastName, groups and the resolved role without errors Given protocol OIDC is selected When I paste a valid JWT containing the mapped claims Then the Preview shows extracted values and the resolved role without errors When I upload or paste a malformed assertion Then a descriptive parse error is shown and Save is disabled
Multi-Tenant IdP Configuration
"As Expansion Ops Ethan, I want to configure different IdPs for different offices so that I can roll out SSO in phases without disrupting existing logins."
Description

Support multiple IdP connections per organization and per office, with scoping by email domain, login hint, or dedicated subdomain. Allow marking a default connection and enabling/disabling at the office level to accommodate phased rollouts. Ensure isolation of secrets and metadata per tenant, and expose an admin view to manage connections across offices. Expected outcome: large brokerages can onboard offices incrementally and operate mixed IdP environments without friction.

Acceptance Criteria
Multiple IdPs per Organization and Office
Given an organization with no IdP connections When an admin creates 2 org-level IdPs (Okta, OneLogin) and 2 office-level IdPs (Azure AD, Google) for Office A Then the system persists 4 distinct connections with unique IDs and scopes (org or office) And the admin UI lists 4 connections with provider, protocol, scope, status=Enabled, and redacted secret fields And removing one office-level IdP does not modify or delete org-level IdPs And creating a second IdP for the same office succeeds without overwriting the first
Email Domain Scoping Routes to Correct IdP
Given Office A has IdP A scoped to domains ["northboro.com","agents.northboro.com"] and IdP B scoped to ["southboro.com"] When a user enters jane@agents.northboro.com on the login page Then the login flow initiates against IdP A's authorization endpoint And when a user enters bob@SOUTHBORO.com (case-insensitive) Then the login flow initiates against IdP B And when a user enters user@unknown.com Then the system uses the configured default connection for that office, or the org default if the office has none
Login Hint Scoping Overrides Domain
Given an office with two IdPs where IdP Azure is tagged with login hint key "vendor" value "azure" When the login URL includes login_hint=vendor%3Dazure Then the flow routes to the Azure IdP regardless of the email domain entered And when login_hint does not match any configured IdP Then the flow ignores the hint and applies standard routing rules
Dedicated Subdomain Scoping
Given subdomain east.tourecho.com is mapped to Office East and Office East has IdP C configured When a user navigates to https://east.tourecho.com/login Then the SSO start routes to IdP C without requiring email pre-entry And when a request targets an unmapped subdomain Then the system returns a friendly error page and does not expose IdP configuration details
Routing Precedence and Fallback
Given subdomain mapping, login_hint, and email domain scoping are all configured for an office When multiple scopes could apply to a login attempt Then routing precedence is: dedicated subdomain > login_hint > email domain > office default > org default And if no match is found and no default exists at either scope Then the system blocks SSO initiation with HTTP 400 and an admin contact message
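The precedence chain above (dedicated subdomain > login_hint > email domain > office default > org default) can be sketched as a single fall-through function; the office/org dictionary shapes are assumptions for illustration, and the enabled/disabled handling from the next criterion is omitted for brevity:

```python
# Sketch of IdP routing precedence. Data shapes (office/org dicts keyed by
# subdomain, hint, and lowercased email domain) are illustrative; a real
# implementation would also check enabled/disabled state on each connection.

def route_idp(request: dict, office: dict, org: dict):
    """Return the IdP connection to initiate, or None (block with HTTP 400)."""
    # 1. Dedicated subdomain wins outright.
    if request.get("subdomain") in office.get("subdomain_idps", {}):
        return office["subdomain_idps"][request["subdomain"]]
    # 2. login_hint overrides email-domain routing.
    hint = request.get("login_hint")
    if hint and hint in office.get("hint_idps", {}):
        return office["hint_idps"][hint]
    # 3. Email domain scoping, case-insensitive.
    email = request.get("email")
    if email:
        domain = email.rsplit("@", 1)[-1].lower()
        if domain in office.get("domain_idps", {}):
            return office["domain_idps"][domain]
    # 4./5. Office default, then org default; None means block with 400.
    return office.get("default") or org.get("default")
```

Note that an unmatched login_hint simply falls through to the remaining rules, as the Login Hint Scoping criterion requires.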
Default Connection and Office-Level Enable/Disable
Given an office with three IdPs where IdP X is marked Default and IdP Y is Disabled When a user with an unmatched email initiates login Then IdP X is used if it is Enabled And if the office Default is Disabled, the system uses the org Default if present, otherwise blocks with HTTP 400 And only one Default is permitted per office; attempts to mark a second Default are rejected with HTTP 409 And changes to Default or Enable/Disable state take effect for new sessions within 60 seconds
Admin View with Tenant Isolation
Given an org admin and two offices (A and B) each with distinct IdP connections When the admin opens the IdP Connections view Then they can filter by office, provider, protocol (SAML/OIDC), status, and search by connection name And secrets (client_secret, signing keys) are write-only: after save, values are redacted and unretrievable via UI/API And an office-scoped admin for Office A cannot view or modify Office B's IdP details (HTTP 403) And all create/update/delete/enable/disable/default actions are audit-logged with actor, action, target, timestamp, and IP address
SSO Diagnostics and Audit Logging
"As a broker-owner admin, I want clear diagnostics and audit logs so that I can resolve SSO issues quickly without needing engineering support."
Description

Provide real-time diagnostics with human-readable error messages (e.g., clock skew, audience mismatch, signature invalid), a timeline of recent SSO events, downloadable debug logs (redacting secrets), and proactive alerts for certificate expiry and metadata fetch failures. Include a test assertion viewer and a self-check to validate binding, ACS/redirect URIs, and time sync. Integrates with the platform’s observability stack and admin notifications. Expected outcome: shorter time-to-resolution for SSO issues and greater admin confidence.

Acceptance Criteria
Real-Time Human-Readable SSO Diagnostics
Given a failed SSO attempt due to clock skew, audience mismatch, or invalid signature, When an admin opens the diagnostics panel for that attempt, Then within 5 seconds the panel shows a human-readable error with details: for clock skew the detected offset (seconds) and remediation link; for audience mismatch the expected vs. received audience; for invalid signature the cert fingerprint and reason; And a correlation ID and local timestamp are displayed; And no secrets (tokens, keys, passwords) are shown.
SSO Event Timeline with Filters and Latency
Given an admin opens the SSO timeline, When there are ≥50 events in the last 24 hours, Then the 100 most recent events are listed newest-first with columns: local timestamp, user identifier (or unknown), IdP, result, error type (if any), and end-to-end latency (ms); And the admin can filter by IdP, result, and time range; And selecting an event opens its detailed view linked by correlation ID.
Secure Downloadable Debug Logs with Redaction
Given an admin selects a time range and requests a log download, When generation completes, Then a file is available within 15 seconds in JSONL and text formats; And secrets (tokens, private keys, client secrets, auth codes, passwords, cookies) are replaced with "[REDACTED]" verified by pattern checks; And only events within the range are included; And the download is capped at 50 MB with pagination for larger ranges.
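The pattern-check redaction this criterion verifies can be sketched with a single substitution pass over each JSONL line; the key list and regex here are illustrative, and a production implementation would also cover provider-specific token formats:

```python
import re

# Sketch of pattern-based secret redaction for debug log export. The key
# names and the regex are illustrative; real coverage would include
# provider-specific token shapes and non-JSON log fields as well.

SECRET_KEYS = re.compile(
    r'("(?:access_token|id_token|client_secret|code|password|cookie)"\s*:\s*)"[^"]*"',
    re.IGNORECASE,
)

def redact(jsonl_line: str) -> str:
    """Replace secret values with [REDACTED], keeping keys for debugging."""
    return SECRET_KEYS.sub(r'\1"[REDACTED]"', jsonl_line)
```

Keeping the key visible while replacing only the value preserves diagnostic context without leaking the secret.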
Proactive Alerts for Certificate Expiry and Metadata Failures
Given an IdP signing certificate will expire in ≤30 days, When the threshold is crossed, Then an in-app banner and email are sent to org admins within 1 hour including subject, fingerprint, and expiry date, with reminders at 7 days and 1 day; Given metadata fetch fails 3 times within 15 minutes, When the third failure occurs, Then an alert is sent including the last error and next retry time; And alerts deduplicate to ≤1/hour per IdP and auto-clear on recovery.
Test Assertion Viewer for SAML/OIDC
Given an admin pastes a base64 SAMLResponse or OIDC ID token and selects the IdP config, When Validate is clicked, Then the viewer displays issuer, audience, subject, NotBefore, NotOnOrAfter, and signature/alg verification result; And mapped claims (email, name, groups) show pass/fail against the configured claim map with remediation hints on mismatch; And malformed input returns a clear error; And the raw input is not persisted and is discarded after the session.
Self-Check for Binding, URIs, and Time Sync
Given an admin runs Self-Check on an IdP template, When the check executes, Then it verifies bindings (HTTP-Redirect/POST) match metadata, ACS and redirect URIs respond with 200/302, clock offset between TourEcho and the IdP is ≤60 seconds, and audience/issuer values match; And results show Pass/Fail per check with remediation steps and links; And the self-check completes in ≤30 seconds.
Observability Integration with Metrics, Logs, and Traces
Given SSO traffic occurs and observability exporters are enabled, When events are processed, Then metrics are emitted: login_success_total, login_failure_total (by error_type, idp), auth_latency_ms histogram, metadata_fetch_failure_total; And structured logs include correlation_id, user_id (if known), idp, result/error_type without unredacted secrets; And traces span SP initiation through assertion validation with success/failure annotation; And a default 24h dashboard is available.

Role Blueprints

Opinionated, least‑privilege role bundles mapped to TourEcho personas (Agent, Listing Coordinator, Broker‑Owner, Compliance Admin). Link each blueprint to IdP groups so new users land with the right permissions by default—no manual tuning, fewer access tickets, and consistent controls across teams.

Requirements

Blueprint Catalog & Default Assignment
"As a Compliance Admin, I want standardized role blueprints so that new and existing users consistently receive least‑privilege access across listings and teams without manual configuration."
Description

Deliver a managed catalog of opinionated, least‑privilege role blueprints aligned to TourEcho personas (Agent, Listing Coordinator, Broker‑Owner, Compliance Admin). Provide UI and API to view, compare, and assign blueprints at tenant, team, and market levels. Allow setting tenant‑wide defaults so newly provisioned users land on the correct blueprint without manual tuning. Each blueprint encapsulates scoped permissions across TourEcho domains (listings, showings, QR feedback capture, AI summaries, team/org settings), ensuring consistent controls, faster onboarding, and fewer access tickets.

Acceptance Criteria
Compliance Admin browses Blueprint Catalog in UI
Given I am authenticated as a Compliance Admin on tenant T When I open Admin > Roles & Access > Blueprints Then I see a catalog listing the managed blueprints: Agent, Listing Coordinator, Broker-Owner, Compliance Admin And each blueprint entry displays: name, persona, version, description, domains covered (listings, showings, QR feedback capture, AI summaries, team/org settings), and permission count And non-admin users see only their assigned blueprint details and no Assign controls (hidden/disabled)
Compare two blueprints side-by-side in UI
Given I am on the Blueprint Catalog page as a Compliance Admin When I select two blueprints and click Compare Then I see a side-by-side diff of domain permissions with clear add/remove indicators And the comparison shows totals of permissions added/removed per domain And I can close the comparison to return to the catalog without losing selection state
API exposes catalog and assignment operations
Given I have a valid admin API token for tenant T When I call GET /v1/blueprints Then the response is 200 with an array including the four managed blueprints and fields: id, name, persona, version, domains[], permissions[] When I call PUT /v1/assignments with body {scopeType, scopeId, blueprintId} Then I receive 200 and the assignment persists (verifiable via GET /v1/assignments?scopeType={..}&scopeId={..}) And all endpoints enforce RBAC so non-admin tokens receive 403
Assign blueprint at tenant, team, and market levels in UI
Given I am a Compliance Admin on tenant T When I assign blueprint B at the tenant level Then all users without a more specific assignment inherit B When I assign blueprint C to team X and blueprint D to market M Then users in team X receive C; users in market M receive D; users in both receive the more specific scope (team over market) And the UI for any user shows their effective blueprint and the source scope
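The most-specific-scope-wins rule in this criterion (team over market over tenant) reduces to an ordered lookup; a minimal sketch, with assignment and user shapes assumed for illustration:

```python
# Sketch of effective-blueprint resolution: the most specific assignment
# wins (team > market > tenant), and the source scope is reported alongside
# the blueprint. User and assignment shapes are illustrative.

def effective_blueprint(user: dict, assignments: dict):
    """assignments: dict keyed by (scopeType, scopeId) -> blueprintId."""
    for scope_type, scope_id in (
        ("team", user.get("teamId")),
        ("market", user.get("marketId")),
        ("tenant", user.get("tenantId")),
    ):
        bp = assignments.get((scope_type, scope_id))
        if bp is not None:
            return bp, scope_type  # blueprint plus its source scope
    return None, None  # no default configured: user receives no access
```

The `(None, None)` result corresponds to the no-tenant-default case in the next criterion, where the user receives no access and admins are notified.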
Default blueprint applied on new user provisioning
Given tenant T has its default blueprint set to B and user U has no explicit assignment When U is provisioned to tenant T via SSO or API Then U is automatically assigned blueprint B before first login And if no tenant default is configured, U receives no access and Compliance Admins are notified to configure a default
Least-privilege permission enforcement by blueprint
Given user U has the Agent blueprint When U performs allowed actions (e.g., view own listings, schedule showings on own listings, submit QR feedback, view AI summaries for own listings) Then the actions succeed (HTTP 2xx/UI success) When U attempts restricted actions (e.g., edit team/org settings, delete listings not owned, access AI summaries for other agents' listings) Then the actions are denied (HTTP 403 or UI controls disabled with explanatory tooltip)
Audit logging of blueprint assignments and default changes
Given a blueprint assignment is created/updated/removed at any scope or the tenant default is changed When the change is saved Then an audit log entry records actor, timestamp, scopeType, scopeId, oldBlueprintId, newBlueprintId, and optional reason And GET /v1/audit?category=access returns the entry within 5 seconds And the Admin UI Audit Log lists the event with filters for actor, scope, and date
IdP Group Linking & JIT/SCIM Provisioning
"As an IT Admin, I want to link IdP groups to TourEcho role blueprints so that users receive correct permissions automatically at first login and changes propagate without manual intervention."
Description

Enable mapping external IdP groups (Okta, Azure AD, Google Workspace) to TourEcho role blueprints. On SSO (JIT) or SCIM events, automatically assign/update user blueprints and deactivate access on offboarding. Support multiple group matches with deterministic precedence, testable mappings, and dry‑run previews. Handle group renames/ID changes, and offer health monitoring for sync status. Reduce ticket volume by making access fully lifecycle‑driven and consistent with enterprise identity sources.

Acceptance Criteria
JIT SSO assigns blueprint from IdP group mapping
Given an IdP group-to-blueprint mapping exists (e.g., IdP group "Okta:Agent" mapped to blueprint "Agent") And a user in that IdP group has no existing TourEcho account When the user authenticates to TourEcho via SSO (JIT) Then a TourEcho user is created and the "Agent" blueprint is assigned within 30 seconds of successful SSO And the user's effective permissions equal exactly the "Agent" blueprint permission set And an audit log entry records source=JIT, evaluated groups, assigned blueprint, and timestamp And subsequent SSO attempts are idempotent (no duplicate accounts; same blueprint retained unless IdP membership changes)
SCIM create/update assigns and updates blueprint
Given an IdP group-to-blueprint mapping is configured When a SCIM POST /Users payload for a new user is received with mapped group membership Then the user is created (if not present) and the mapped blueprint is assigned within 2 minutes of SCIM receipt When a SCIM PATCH updates the user's group memberships Then the user's blueprint is recalculated and updated to reflect the new highest-precedence mapped group within 2 minutes And an audit log entry records the SCIM event, evaluated groups before/after, previous vs new blueprint, and timestamp And if mapped group membership changes, no orphaned permissions remain from the previous blueprint after the update completes
Lifecycle offboarding disables access and revokes sessions
Given a user is provisioned in TourEcho via IdP integration When a SCIM DELETE /Users event is received, or the user is disabled in the IdP, or the user is removed from all mapped groups Then the TourEcho account is deactivated within 5 minutes And any active sessions are revoked within 5 minutes And subsequent SSO attempts are denied with an appropriate error And re-provisioning via SCIM or restored group membership re-enables access and reassigns the correct blueprint on the next event
Deterministic precedence resolves multiple matching IdP groups
Given multiple IdP groups map to different blueprints and an admin-configurable precedence list is defined And a user is a member of more than one mapped IdP group When the user's blueprint is calculated (via JIT SSO or SCIM) Then the blueprint with the highest precedence is assigned deterministically and consistently across runs And in case of a precedence tie, the system breaks ties by ascending blueprint key to ensure determinism And the decision rationale (evaluated groups, precedence order, chosen blueprint) is recorded in the audit log And changing the precedence list takes effect on the next calculation and triggers re-evaluation within 5 minutes
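The deterministic selection in this criterion (admin precedence list first, ascending blueprint key as tie-breaker) can be sketched as a single `min` over a sort key; the data shapes are illustrative:

```python
# Sketch of deterministic blueprint selection when a user matches multiple
# mapped IdP groups: rank by the admin-configured precedence list, then
# break ties by ascending blueprint key. Data shapes are illustrative.

def select_blueprint(member_groups, group_to_blueprint: dict, precedence: list):
    """precedence: list of blueprint keys, highest priority first."""
    candidates = {
        group_to_blueprint[g] for g in member_groups if g in group_to_blueprint
    }
    if not candidates:
        return None
    rank = {bp: i for i, bp in enumerate(precedence)}
    # Sort key: (precedence rank, blueprint key); unranked blueprints sort
    # after ranked ones, and the key breaks ties deterministically.
    return min(candidates, key=lambda bp: (rank.get(bp, len(rank)), bp))
```

Because the sort key is a pure function of configuration and membership, repeated runs (JIT or SCIM) always pick the same blueprint, satisfying the determinism requirement.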
Dry-run preview validates mappings without applying changes
Given an administrator provides a target user identifier or a set of IdP group IDs/names in the mapping tool When the administrator executes a dry-run preview Then the system returns the blueprint that would be assigned, the evaluated groups, the precedence path, and any conflicts or unmapped groups And no user records, permissions, or assignments are changed And the preview response completes within 2 seconds at p95 latency And the preview can be exported (JSON) for test evidence
Resilience to IdP group rename or ID change
Given mappings are stored against the IdP group's immutable identifier (ID) when available When the IdP group displayName is renamed Then existing mappings continue to function without admin action and no user assignment churn occurs When the IdP reissues a new group ID but retains SCIM externalId (or provides an alias to the prior ID) Then TourEcho auto-relinks the mapping within 10 minutes and logs a warning with reconciliation details If auto-relink is not possible, then a health alert is raised within 10 minutes identifying the broken mapping and listing impacted users
Sync health monitoring and alerts
Given health monitoring is enabled with alert recipients and/or webhooks configured When last successful SCIM delivery or JIT assignment is older than 10 minutes, or error rate over the last 15 minutes exceeds 5%, or backlog exceeds 100 pending events Then an alert is sent to configured channels within 2 minutes and the status is reflected on the in-app Sync Health dashboard And the dashboard displays per-IdP metrics: lastSuccessAt, errorRate, backlogSize, avgLatency, and recent failures with error codes And health events and audit logs are retained for at least 90 days
Least‑Privilege Permission Matrix
"As a Compliance Admin, I want least‑privilege permissions encoded in each blueprint so that users can do their jobs while sensitive data and admin functions remain protected."
Description

Define a granular, auditable permission matrix that powers each blueprint with explicit allows/denies over TourEcho capabilities and scopes: viewing/creating/editing showings, accessing listing details, reading/writing QR feedback, viewing AI sentiment summaries, exporting reports, and administering org settings. Support scoping by listing, team, and market, with default‑deny and inheritance rules. Provide a policy engine that evaluates effective permissions from blueprint + overrides, ensuring minimum necessary access for every persona.

Acceptance Criteria
Default-Deny Enforcement
Given a user with no assigned blueprint or overrides When they request any capability on any scope Then the policy engine returns Deny with HTTP 403 and no data exposure Given a blueprint without an explicit allow for a capability When a user with that blueprint requests that capability on any scope Then the decision is Deny Given no explicit allow exists at any level of the scope hierarchy (listing, team, market, org) When the user requests the capability Then the decision is Deny
Effective Permission Resolution Ordering
Given an Allow at market scope and a Deny at team scope for the same capability When the user requests the capability on a resource in that team Then the decision is Deny and the team-scope rule is identified as the reason Given Allows at market and team scopes and a Deny at listing scope for the same capability When the user requests the capability on that listing Then the decision is Deny and the listing-scope rule takes precedence Given a Deny at market scope and an Allow at team scope for the same capability When the user requests the capability on a team resource Then the decision is Allow and the team-scope rule takes precedence Given conflicting Allow and Deny at the same scope level for the same capability When evaluated Then Deny takes precedence
Scope Hierarchy and Inheritance
Given a market-level Allow for view_listing_details When the user attempts to view any listing within that market Then the decision is Allow Given the same user attempts to view a listing outside that market Then the decision is Deny Given a team-level Deny for edit_showings where a market-level Allow exists for edit_showings When the user attempts to edit a showing on a listing owned by that team Then the decision is Deny Given a listing-level Allow for read_qr_feedback and a team-level Deny for the same capability When the user requests read_qr_feedback on that listing Then the decision is Allow because listing scope is more specific than team
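The evaluation rules across the two criteria above combine into one algorithm: walk scopes from most to least specific, let the first scope with an explicit rule decide, let Deny beat Allow within that scope, and default-deny when nothing matches. A minimal sketch, with rule shapes assumed for illustration:

```python
# Sketch of default-deny policy evaluation: the most specific scope carrying
# an explicit rule decides; within a scope, Deny beats Allow; no rule at any
# scope means Deny. The rule dict shape is illustrative.

SCOPE_ORDER = ["listing", "team", "market", "org"]  # most to least specific

def evaluate(rules: list, capability: str, scope_ids: dict) -> str:
    """rules: list of {capability, scopeType, scopeId, effect} dicts."""
    for scope_type in SCOPE_ORDER:
        scope_id = scope_ids.get(scope_type)
        effects = {
            r["effect"]
            for r in rules
            if r["capability"] == capability
            and r["scopeType"] == scope_type
            and r["scopeId"] == scope_id
        }
        if effects:  # this scope decides; Deny wins a same-scope conflict
            return "Deny" if "Deny" in effects else "Allow"
    return "Deny"  # default-deny: no explicit allow anywhere
```

This reproduces the criteria: a team Deny overrides a market Allow, a listing Allow overrides a team Deny, and same-scope conflicts resolve to Deny.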
Agent Blueprint Minimum Necessary Access
Given a user with the Agent blueprint and no overrides When they attempt to view listing details for listings where they are the assigned agent or part of the assigned team Then the decision is Allow Given the same user attempts to edit showings only for their assigned listings Then the decision is Allow Given the same user attempts to edit showings for listings not assigned to them or their team Then the decision is Deny Given the same user attempts to read_qr_feedback and view_ai_sentiment for their assigned listings Then the decision is Allow Given the same user attempts to export_reports or administer_org_settings Then the decision is Deny
Compliance Admin and Broker-Owner Controls
Given a user with the Compliance Admin blueprint When they attempt to update the permission matrix, role blueprints, or overrides Then the decision is Allow Given the same user attempts to create_showings, edit_showings, or write_qr_feedback Then the decision is Deny Given a user with the Broker-Owner blueprint When they attempt to export_reports at market scope and view_ai_sentiment_summaries Then the decision is Allow Given the same user attempts to administer_org_settings outside their organization Then the decision is Deny
Auditability and Permission Explainability
Given any access decision is evaluated When the policy engine logs the decision Then the log includes user_id, blueprint_id(s), requested_capability, scope_ids (listing_id/team_id/market_id), decision (Allow|Deny), rule_ids applied, and timestamp Given a Compliance Admin requests an explanation for a denied request by user U for capability C on scope S When the system generates an explanation Then it lists the applicable rules in evaluation order and identifies the winning rule and scope Given a non-admin user attempts to access audit logs Then the decision is Deny Given audit logs are exported When a Broker-Owner or Compliance Admin requests export with date range and filters Then a CSV file is generated within 60 seconds including only matching records
Policy Engine Performance and Consistency
Given 10,000 sequential policy evaluations against a warm cache When measured Then p95 evaluation latency per decision is <= 30 ms and p99 <= 60 ms Given a blueprint or override change is committed When the system propagates policy updates Then all services reflect the change within 60 seconds and new decisions use the updated rules Given concurrent requests with identical inputs When evaluated Then the engine returns consistent decisions across instances Given the policy engine encounters a malformed rule When evaluated Then the rule is ignored, Deny-by-default is applied, and an error is logged
Per‑User Overrides with Expiry & Approval
"As a Listing Coordinator, I want temporary elevated access to a specific listing so that I can complete time‑sensitive tasks without permanently changing my role."
Description

Allow limited, auditable exceptions layered on top of a user’s blueprint. Support time‑boxed access, required justification, approver workflow, and optional second‑factor confirmation for high‑risk scopes. Provide visibility into active/expired overrides, notifications before expiry, and one‑click revoke. Ensure overrides are represented in effective permission evaluation and exportable for audits, enabling agility for special cases without eroding least‑privilege baselines.

Acceptance Criteria
Time‑Boxed Override Activation & Auto‑Expiry
Given an override with scopes S, start time T_start (optional), and end time T_end in the future When the current time reaches T_start or immediately if no T_start is set Then the override becomes active and grants S on top of the user’s blueprint within 60 seconds And all override times are stored in UTC and displayed in the user’s profile timezone And the override cannot be saved without an end time And the override cannot be saved if T_end <= T_start (when T_start is provided) or T_end <= now When the current time reaches T_end Then the override is deactivated and all granted scopes are removed within 60 seconds And activation and expiry events are recorded in the audit log
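The save-time validation rules in this criterion (end time required, end after start when a start is given, end in the future, times in UTC) can be sketched as a pure check function; the function shape is illustrative:

```python
from datetime import datetime, timezone

# Sketch of the save-time validation for a time-boxed override, per the
# criterion: end time is mandatory, must follow the optional start time,
# and must be in the future. UTC throughout. Function shape is illustrative.

def validate_override(t_end, t_start=None, now=None) -> list:
    """Return a list of validation errors; an empty list means it may save."""
    now = now or datetime.now(timezone.utc)
    errors = []
    if t_end is None:
        errors.append("end time is required")
    else:
        if t_start is not None and t_end <= t_start:
            errors.append("end time must be after start time")
        if t_end <= now:
            errors.append("end time must be in the future")
    return errors
```

Activation and auto-expiry would then be driven by comparing the stored UTC bounds against the current time on a short (≤60 s) cadence.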
Justification Is Mandatory and Immutable
Given a user submits an override request When the justification field is empty or whitespace-only Then the request is rejected with a validation error and no override is created When a non-empty justification is provided Then it is saved immutably, included in audit logs/exports, and cannot be edited after submission And any attempt to change the justification requires creating a new override or a new version And the stored justification is sanitized to prevent script execution
Approval Workflow for Overrides
Given a policy that requires approval for overrides When a requester submits an override Then the override status is Pending and grants no additional permissions until approved And the designated approver receives a notification to review When an approver who is not the requester approves Then the status becomes Approved and the override activates at or after T_start When an approver denies Then the status becomes Denied and the override never activates And approver identity, decision, timestamp, and optional comment are recorded in the audit log
Second‑Factor Confirmation for High‑Risk Scopes
Given an override includes one or more high-risk scopes When the override is about to activate for the target user Then the user must successfully complete a second-factor challenge before the scopes take effect When the second-factor challenge fails or times out Then the override remains inactive and no high-risk permissions are granted And the 2FA outcome (success/failure, method, timestamp) is recorded for audit
Pre‑Expiry Notifications and Extension Handling
Given an active override with end time T_end When T_end is 24 hours away Then the requester, target user, and approver receive pre-expiry notifications via in-app and email channels When the override is extended before expiry Then the new T_end is saved, notifications are rescheduled, and the audit log records the change with who/when/why When the override expires Then an expiration notification is sent to the same parties and no permissions from the override remain active
One‑Click Revoke of Active Override
Given an authorized role (Compliance Admin or Broker‑Owner) views an active override When they select One‑Click Revoke Then the override is immediately deactivated and all granted permissions are removed within 60 seconds across all active sessions And the target user and approver are notified of the revocation And the revoker identity, timestamp, and optional reason are recorded in the audit log
Effective Permissions and Audit Export Accuracy
Given a user with a base blueprint and zero or more active overrides When effective permissions are retrieved via UI or API Then the result shows the union of blueprint permissions plus active override scopes and indicates the source per permission with associated expiry times When overrides are exported for a date range Then the export (CSV and JSON) includes user, requester, approver, justification, scopes, start/end times, status history, 2FA outcomes, and timestamps And export filters for active/expired, user/team, and scope match the on-screen counts and detail
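The union-with-provenance behavior in this criterion can be sketched as follows; permission and override shapes are illustrative, and times are shown as comparable values (UTC datetimes in practice):

```python
# Sketch of effective-permission assembly: blueprint permissions plus the
# scopes of currently active overrides, each tagged with its source and,
# for overrides, its expiry. Data shapes are illustrative.

def effective_permissions(blueprint_perms, overrides, now) -> dict:
    """overrides: iterable of dicts {scopes, starts_at, ends_at}."""
    result = {p: {"source": "blueprint", "expires": None} for p in blueprint_perms}
    for ov in overrides:
        if ov["starts_at"] <= now < ov["ends_at"]:  # active overrides only
            for scope in ov["scopes"]:
                result[scope] = {"source": "override", "expires": ov["ends_at"]}
    return result
```

Expired or not-yet-started overrides contribute nothing, so the same function answers both the UI's effective-permission view and the "no permissions remain after expiry" checks.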
Blueprint Versioning, Staging & Rollback
"As a Compliance Admin, I want to update a blueprint with staging and rollback so that I can improve controls safely and revert quickly if problems occur."
Description

Introduce version control for role blueprints with draft/edit, change diffs, and impact previews showing affected users/teams. Support staged rollout to a pilot cohort, scheduled deployment windows, and instant rollback to the prior version if issues arise. Maintain release notes and a full change history for compliance. This enables iterative hardening of controls without disrupting operations.

Acceptance Criteria
Create and Save Draft Blueprint Version
Given I am a Compliance Admin or Broker‑Owner with permission to manage Role Blueprints When I create a new version from an existing blueprint Then the new version is saved with a unique incremented version identifier and status Draft And no permission changes are applied to any user until the draft is published And the Draft appears in Version History with version ID, author, timestamp, and status And I can edit permissions, scopes, and IdP group mappings within the Draft and save without publishing And saves with invalid permission codes or unmapped IdP groups are blocked with clear validation errors
View Change Diff Between Versions
Given a blueprint has at least two versions When I select two versions (vX and vY) and choose View Diff Then the diff lists added, removed, and modified permissions, scopes, and IdP group mappings And a summary count of changes by type (added/removed/modified) is displayed And potential privilege escalations (e.g., new admin scopes) are flagged as High Risk And the diff is exportable as JSON And comparing the same pair of versions always yields identical results
Impact Preview of Affected Users and Teams
Given I have a Draft with proposed changes When I run Impact Preview Then the system shows counts of affected users and teams segmented by IdP group And for each affected user, the preview shows current vs proposed permissions delta And the preview applies no changes to production permissions And results are exportable to CSV And if a pilot cohort is configured, users outside the cohort are excluded from the preview
Staged Rollout to Pilot Cohort
Given a Draft passes validation When I select a Pilot Cohort (IdP groups and/or explicit users) and publish to Stage Then only members of the Pilot Cohort receive the new permissions And all other users retain the previously active version And changes to cohort membership in the IdP are respected on the next sync And an activity log entry records the cohort definition, author, and timestamp And the staged rollout can be safely rolled back without affecting non‑pilot users
Scheduled Deployment Window Enforcement
Given a Draft is ready for general deployment When I schedule a deployment with start time, optional end window, and timezone Then the system validates no conflicting deployments exist for the same blueprint And the new version becomes active at the scheduled start time in the specified timezone And no changes are applied before the start time And I can cancel or reschedule the deployment any time before the start time And notifications are sent to configured admins upon scheduling and activation
Instant Rollback to Prior Version
Given a version is active or staged When I initiate Rollback and confirm Then the immediately prior version becomes active and affected users’ permissions revert accordingly And any pending schedules for the rolled‑back version are canceled And the rollback is recorded with author, timestamp, reason, and affected counts And the rollback completes within 2 minutes for tenants up to 10,000 users And notifications are sent to configured admins upon completion
Release Notes and Full Change History for Compliance
Given I publish, stage, schedule, or rollback a version When I attempt the action Then non‑empty release notes (minimum 20 characters) are required to proceed And an immutable audit log entry is created capturing version ID, action, author, timestamp, diff summary/hash, and impacted counts (with optional external ticket ID) And history is filterable by date, action, author, and version and exportable to CSV/JSON And audit entries cannot be edited or deleted by any tenant user And Compliance Admins can view and export the full change history
Audit Logs & Access Compliance Reports
"As a Broker‑Owner, I want comprehensive audit logs and access review reports so that we can demonstrate compliance and quickly investigate access issues."
Description

Capture immutable logs for all blueprint assignments, IdP mapping changes, overrides, and effective permission evaluations (who, what, when, why, approver). Provide filters and exports (CSV/JSON) and optional SIEM forwarding (syslog/webhook). Generate periodic access review reports by team/market/blueprint, with attest/recertify workflows to meet internal policy and regulatory requirements.

Acceptance Criteria
Immutable audit log for blueprint assignment events
Given a Compliance Admin or Broker-Owner assigns or removes a Role Blueprint for a user via UI or API When the change is submitted and succeeds Then an immutable audit record is created within 5 seconds containing: actor_id, actor_role, target_user_id, operation (assign/remove), blueprint_id, team_id, market_id (if applicable), timestamp (ISO 8601 UTC), request_id, channel (UI/API), source_ip, reason (string, optional), approver_id (nullable), outcome=success And the record cannot be edited or deleted by any role; attempts return 403 and generate a separate tamper-attempt log entry And the audit record is retrievable via Audit Log UI and API by authorized roles with exact-field filtering on actor_id and target_user_id And the same operation performed twice creates two distinct records with unique request_id values
Audit logging for IdP group-to-blueprint mapping changes
Given a Compliance Admin updates IdP mapping rules linking IdP groups to Role Blueprints When a mapping is created, updated, or deleted Then an audit record is written with operation (create/update/delete), idp_provider, idp_group_id, previous_mapping (nullable JSON), new_mapping (nullable JSON), actor_id, timestamp (ISO 8601 UTC), request_id, reason (required if policy flag enforce_change_reason=true), approver_id (nullable), outcome (success/failure), error_message (on failure) And updates that change only description fields are logged distinctly from permission-impacting changes And creating a mapping immediately reflects in effective mapping within 60 seconds and the audit record cross-references the mapping version_id And retrieving logs filtered by operation=create for the specified idp_group_id returns the new record
Effective permission evaluation logging with decision rationale
Given a Compliance Admin requests an effective permission evaluation for a specific user and scope (team/market) When the evaluation is run via UI or API Then an audit record is created with evaluator_id, target_user_id, inputs (blueprints, overrides, IdP groups), computed_permissions (JSON summary), timestamp (ISO 8601 UTC), request_id, purpose (free-text), channel (UI/API) And the record includes decision_rationale derived from rules (e.g., which blueprint or override granted each permission) And the evaluation result is consistent with the authorization service for a sampled action in the same scope And the record is viewable and exportable under the same filters as other audit events
Filter, search, and paginate audit logs
Given authorized users open the Audit Log UI or call the API with filters When filtering by any combination of actor_id, target_user_id, blueprint_id, operation, team_id, market_id, outcome, and date range (UTC) Then results return within 2 seconds for up to 100k matching records with consistent pagination (cursor-based), sort by timestamp desc by default, and stable ordering on ties by request_id And filters are exact-match for IDs and case-insensitive for free-text fields (reason) And the total_count is provided (or next_cursor for streaming mode) without exceeding a 3-second response time And all timestamps are displayed and exported in ISO 8601 UTC and the UI indicates the timezone explicitly
Export audit logs to CSV and JSON
Given a user applies filters to the audit log When exporting as CSV Then a UTF-8 CSV with header is generated with deterministic column order, ISO 8601 UTC timestamps, and streaming delivery for >100k rows; large exports are chunked and zipped when exceeding 50 MB And when exporting as JSON Then NDJSON is produced with the same schema keys as the API and one object per line And each export includes a manifest (JSON) with filter criteria, generated_at, row_count, checksum (SHA-256) and request_id And re-running the same export with identical filters within 5 minutes yields identical content and checksum
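The reproducibility requirement above (identical filters within 5 minutes yield identical content and checksum) implies a fully deterministic serialization. A minimal sketch, assuming records carry a `request_id` field and NDJSON output with sorted keys:

```python
# Sketch of a deterministic NDJSON export plus manifest. Sorting records
# and keys makes re-runs byte-identical, so the SHA-256 checksum matches.
import hashlib
import json

def export_ndjson(records: list[dict]) -> tuple[str, dict]:
    # Deterministic order: sort records by request_id, sort keys per object.
    lines = [json.dumps(r, sort_keys=True)
             for r in sorted(records, key=lambda r: r["request_id"])]
    body = "\n".join(lines) + "\n"
    manifest = {
        "row_count": len(lines),
        "checksum": hashlib.sha256(body.encode("utf-8")).hexdigest(),
    }
    return body, manifest
```

The manifest in the spec also carries the filter criteria, `generated_at`, and `request_id`; those are omitted here since timestamps would break the byte-identity this sketch demonstrates (a real export would checksum only the body, as done here).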
SIEM forwarding via syslog and webhook
Given a Compliance Admin configures a SIEM destination (syslog RFC5424 or HTTPS webhook) When forwarding is enabled Then all new audit events are delivered at-least-once in near real time (<10 seconds median) with retry (exponential backoff up to 24 hours) on failure And webhook deliveries are signed with HMAC-SHA256 using a shared secret, include timestamp and signature headers, and require TLS 1.2+ with certificate validation And syslog messages conform to RFC5424 with JSON payload in the MSG field and include facility, severity=INFO (or higher on failures), and hostname And delivery metrics (sent, retried, failed, lag) and health status are visible in the UI/API; destinations can be paused/resumed And a dead-letter queue retains undeliverable events for 30 days with re-drive capability
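The HMAC-SHA256 webhook signing described above can be sketched as below. The header names and the `timestamp.body` signed-string format are assumptions for illustration; sender and receiver must agree on the exact scheme.

```python
# Sketch of HMAC-SHA256 webhook signing with timestamp + signature headers.
# Header names (X-TourEcho-*) are hypothetical.
import hashlib
import hmac
import json
import time

def sign_webhook(secret: bytes, payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    ts = str(int(time.time()))
    # Signing "timestamp.body" binds the timestamp to the payload.
    sig = hmac.new(secret, ts.encode() + b"." + body, hashlib.sha256).hexdigest()
    return {"X-TourEcho-Timestamp": ts, "X-TourEcho-Signature": sig}

def verify_webhook(secret: bytes, body: bytes, ts: str, sig: str,
                   max_skew: int = 300) -> bool:
    if abs(time.time() - int(ts)) > max_skew:
        return False  # reject stale deliveries to limit replay attacks
    expected = hmac.new(secret, ts.encode() + b"." + body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)  # constant-time comparison
```

`hmac.compare_digest` avoids timing side channels when comparing signatures.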
Periodic access review reports with attest/recertify workflow
Given a Compliance Admin schedules an access review by team, market, or blueprint with a cadence (e.g., quarterly) and a due date When the cycle starts Then reviewers (e.g., team leads or Broker-Owners) receive notifications and a report listing each user, their effective permissions, blueprints, overrides, last_login, and justification (if any) And reviewers can Attest, Revoke, or Flag each user; Revoke requires a reason; Attest captures timestamp and reviewer_id; Flag routes to Compliance Admin And progress tracking shows % complete per reviewer and overall; overdue items auto-escalate after the due date And upon cycle close, a signed immutable attestation report (PDF and CSV) is generated with checksums and is added to the audit log; records become read-only And any revocations triggered by the review generate corresponding blueprint/override changes and audit entries
Onboarding & Migration Assistant
"As an Admin, I want guided migration from existing ad‑hoc permissions to blueprints so that we standardize access quickly with minimal disruption."
Description

Offer a guided wizard to adopt Role Blueprints: inventory current users and permissions, recommend blueprint mappings, simulate impact, and bulk‑apply with scheduled rollout. Detect and surface permission drift versus blueprint baselines, propose fixes, and optionally auto‑remediate. Assist with IdP group mappings, send user communications, and provide success metrics (ticket reduction, time‑to‑access) to ensure a smooth transition from ad‑hoc roles to standardized controls.

Acceptance Criteria
User Inventory and Permission Snapshot
- Given I am an org admin with onboarding permissions - When I launch the Onboarding & Migration Assistant - Then the assistant enumerates all active users from the connected IdP and application store into a single inventory list including: user ID, email, assigned role/blueprint (if any), group memberships, last login, and resource-level permissions - And the inventory includes 100% of active users visible in the latest IdP sync - And the inventory completes for up to 10,000 users within 10 minutes - And the list supports filtering by persona, current role, group, and permission, and supports CSV export - And an audit log entry is recorded with timestamp, admin ID, user count scanned, and duration
Blueprint Mapping Recommendations
- Given an inventory is available and Role Blueprints are enabled (Agent, Listing Coordinator, Broker‑Owner, Compliance Admin) - When I request recommended mappings - Then each user is assigned a proposed blueprint with a confidence score (0–100) and rationale (e.g., group membership, activity patterns) - And at least 95% of users receive a recommendation; the remainder are flagged for manual review - And I can override any recommendation and mark exceptions with notes - And I can preview per-user permission diffs between current and proposed state before applying any change
Impact Simulation and Risk Scoring
- Given a set of users and proposed blueprints are selected - When I run a simulation - Then no changes are applied to production permissions - And I receive a report listing permissions to be added/removed per user, affected resources, and high‑risk deltas (e.g., loss of create/delete on listings) - And any permission the user used within the last 30 days that would be removed is flagged as a potential regression - And the simulation completes within 60 seconds for up to 2,000 users - And the report can be exported as CSV/PDF and stored with a unique simulation ID
Scheduled Bulk Application of Blueprints
- Given I confirm a rollout plan and select a schedule (immediate or future time window with timezone) - When the scheduled time arrives - Then blueprints are applied atomically per user and progress is tracked with success/failure counts - And at least 95% of targeted users are updated within the scheduled window; any failures are retried up to 3 times with exponential backoff - And I can phase rollout by IdP group or department and limit concurrency to avoid IdP rate limits - And I can trigger an automatic rollback to the pre‑change state for any subset that fails - And notifications are sent to admins on start, completion, and on any failure over 1% of the batch - And all changes are written to an immutable audit log with before/after diffs
Permission Drift Detection and Auto‑Remediation
- Given blueprint baselines are active - When a user’s permissions deviate from their assigned blueprint - Then the system detects the drift within 15 minutes of the change - And the drift is surfaced in a dashboard with the exact delta and timestamp - And policy options allow: notify only, propose remediation, or auto‑remediate with approval workflow - And if auto‑remediation is enabled, permissions are restored to baseline within 30 minutes unless within a configured maintenance blackout window - And exceptions can be granted for specific users/permissions with expiry dates and are excluded from drift alerts until expiry
IdP Group Mapping Assistance
- Given a supported IdP connection is configured (e.g., Okta, Azure AD) and groups are synced - When I open Group Mapping - Then all IdP groups are listed with member counts and last sync time - And the assistant suggests group‑to‑blueprint mappings with confidence scores and highlights conflicts (a group mapped to multiple blueprints) - And I can test mappings against sample users to preview resulting blueprints without applying changes - And no IdP changes are performed until I confirm and apply; upon apply, mappings are saved and reflected in the next sync - And the mapping write is validated and any API errors are surfaced with actionable messages
User Communication and Success Metrics Dashboard
- Given a rollout is scheduled or executed - When I enable user communications - Then templated emails are sent to affected users at least 24 hours before changes (or immediately for immediate rollouts), and a post‑change notice is sent after application - And delivery status and open rates are tracked per batch - And the dashboard shows success metrics: change in access ticket volume and median time‑to‑access compared to a 30‑day baseline, updated daily - And the metrics include methodology notes and date ranges, and can be exported

SCIM AutoSync

Always‑on provisioning that creates, updates, or deactivates users the moment they change in the IdP. Mirrors titles, manager, office, and license counts; reclaims seats on departure and preserves audit history. Zero‑touch onboarding/offboarding for ops while keeping rosters clean and compliant.

Requirements

SCIM 2.0 Provisioning Endpoints
"As an IT administrator, I want TourEcho to expose standards-compliant SCIM endpoints so that my IdP can programmatically provision and manage users without manual intervention."
Description

Implement RFC 7643/7644-compliant SCIM 2.0 /Users and /Groups endpoints to enable IdPs (Okta, Azure AD, OneLogin, Google) to create, update (PUT/PATCH), read (filterable GET), and delete users and group memberships. Support the Enterprise User extension for title, manager, and department/office; honor externalId for idempotency; return ETags and support If-Match concurrency controls; implement pagination, filtering, and sorting where applicable. Ensure correct translation between IdP payloads and TourEcho domain models (agents, broker-owners, roles, offices), including preservation of immutable IDs and audit-relevant metadata. Provide high availability and rate-limit friendly behavior to handle bursty IdP syncs without data loss.

Acceptance Criteria
Idempotent User Creation via externalId
Given an IdP sends POST /Users with a unique externalId and valid minimal attributes (userName, name, emails), When the user does not yet exist, Then the API returns 201 Created with a stable id, echoes externalId, includes ETag and meta.created/meta.lastModified, and persists the user. Given the same POST /Users payload is sent again with the same externalId, When the request is processed, Then the API returns 200 OK with the existing resource, does not create a duplicate, preserves the original id and meta.created, and returns a new ETag only if any attributes changed. Given a POST /Users with a duplicate userName but different externalId, When the request is processed, Then the API returns 409 Conflict with a SCIM error response indicating uniqueness violation on userName.
Enterprise User Extension Mapping to TourEcho
Given a PATCH /Users/{id} with operations updating urn:ietf:params:scim:schemas:extension:enterprise:2.0:User fields (title, manager.value, department, costCenter, organization, division, and office mapped via department/organization), When the request references an existing manager by id or by externalId, Then the API updates the TourEcho agent's title, manager relationship, and office/department mappings, returns 200 OK with updated attributes, preserves immutable id, and emits a new ETag and meta.lastModified. Given a PATCH where manager.value cannot be resolved to an existing user, When processed, Then the API returns 400 Bad Request with scimType "invalidValue" and does not change existing data. Given a PUT /Users/{id} replacing the full document without enterprise extension fields, When processed, Then enterprise extension values previously set remain unchanged unless explicitly provided (non-provided extension fields are preserved), and id remains immutable.
User Deactivation/Deletion and Seat Reclaim
Given a PATCH /Users/{id} setting active=false, When processed, Then the API returns 200 OK, marks the user inactive, revokes access, records an audit event "UserDeactivated" with actor "system-scim", and frees one license seat within 60 seconds. Given a GET /Users/{id} after deactivation, When requested, Then the response shows active=false with unchanged id and preserved meta.created, and a newer meta.lastModified and ETag. Given a DELETE /Users/{id}, When processed, Then the API performs a soft delete preserving audit history, returns 204 No Content, frees one license seat within 60 seconds, and subsequent GET by id returns 404 with scimType "resourceNotFound" while filter searches exclude the user by default.
Group Membership Sync to Roles and Offices
Given a POST /Groups creating a group with displayName "Broker Owners", When processed, Then the API returns 201 Created with a stable group id and ETag, and subsequent GET /Groups?filter=displayName eq "Broker Owners" returns the group. Given a PATCH /Groups/{id} adding a member with value referencing a user id or externalId, When processed, Then the user is associated to the group, the API returns 200 OK with updated members and new ETag, and the user gains the mapped TourEcho role (e.g., broker-owner) within 60 seconds. Given a PATCH /Groups/{id} removing a member, When processed, Then the membership is removed idempotently (no error if user not a member), the user's mapped role/office association is revoked accordingly, and response is 200 OK. Given idempotent add of an already-member user, When processed, Then no duplicate memberships are created and the response remains 200 OK.
Users Endpoint Filtering, Pagination, and Sorting
Given multiple users exist, When calling GET /Users?filter=userName%20eq%20"alex@example.com"&startIndex=1&count=1, Then the response is 200 OK with totalResults>=1, startIndex=1, itemsPerPage=1, and Resources[0].userName=="alex@example.com". Given a prefix search, When calling GET /Users?filter=displayName%20sw%20"Alex"&sortBy=userName&sortOrder=descending&startIndex=1&count=2, Then the response is 200 OK with results sorted by userName descending, paginated to 2 items, with stable ordering across pages while ETag/version of the collection window remains unchanged during the request span. Given an unknown or unsupported filter expression, When requested, Then the API returns 400 Bad Request with scimType "invalidFilter".
ETag and If-Match Concurrency Controls
Given a client retrieved a user with ETag W/"v3", When the client sends PATCH /Users/{id} with header If-Match: W/"v3", Then the update succeeds with 200 OK and a new ETag different from W/"v3". Given the client uses a stale ETag in If-Match, When another update has already changed the resource, Then the API returns 412 Precondition Failed with a SCIM error body and does not modify the resource. Given no If-Match header is provided on PUT or PATCH, When concurrent updates are possible, Then the API processes the request but may be rate-limited by policy; clients are advised to use If-Match (documented via WWW-Authenticate or error detail on 409/412 where applicable).
Burst Handling, Rate Limits, and High Availability
Given an IdP sends 1,000 SCIM requests within 60 seconds across /Users and /Groups, When the burst exceeds configured per-tenant limits, Then the API sustains 99.9% availability, returns 429 Too Many Requests with a Retry-After header for rate-limited calls, and never drops accepted writes silently. Given clients honor Retry-After and retry within 10 minutes, When observing resulting resources, Then all intended creates/updates are applied exactly once (no duplicates), order of operations per resource is preserved, and audit events exist for each successful mutation. Given a regional failover event during a sync window, When traffic is routed to a healthy zone, Then the SCIM endpoints remain reachable with RTO<=60s and no acknowledged requests are lost.
AutoSync Engine (Real-time, Bulk Import & Reconciliation)
"As an IT administrator, I want changes in our IdP to automatically and quickly reflect in TourEcho (including initial import and periodic reconciliation) so that our roster stays accurate with zero-touch operations."
Description

Build an always-on synchronization engine that applies SCIM changes immediately and reliably. Handle real-time IdP-initiated operations, an initial bulk import (first connect), and scheduled reconciliation to detect and heal drift (e.g., missing users, stale attributes). Provide exactly-once semantics via idempotency keys and ETags; implement queued processing with retries and exponential backoff for transient failures. Track and surface sync lag and completion metrics; guarantee near-instant propagation of create/update/deactivate actions to TourEcho (targeting sub-minute SLO).

Acceptance Criteria
Real-time SCIM Propagation (Create/Update/Deactivate)
Given a valid SCIM create with a new externalId, When processed by AutoSync, Then a TourEcho user is created with mapped title, manager, office, and licenseCounts within 60s p95 and 120s p99. Given a valid SCIM update with a matching If-Match ETag, When processed, Then only changed attributes are updated in TourEcho within 60s p95 and an audit event with correlationId is recorded. Given a valid SCIM deactivation (active=false), When processed, Then the user is deactivated in TourEcho, access revoked, and the seat marked reclaimable within 60s p95. Given multiple sequential changes for the same user, When processed, Then they are applied in order with no intermediate state lost.
Initial Bulk Import on First Connect
Given SCIM connection is authorized and initial sync is triggered, When bulk import runs, Then eligible IdP users are created/updated in TourEcho with accurate field mappings and no duplicates for the same externalId/email. Given a directory of 10,000 users with 20% existing matches, When bulk import runs, Then it completes within 30 minutes p95 and produces a summary report with counts for created, updated, deactivated, skipped, and errors. Given bulk import is re-run within 24 hours with no IdP changes, When executed, Then zero net changes occur and the summary report shows 0 errors.
Scheduled Drift Reconciliation and Healing
Given the scheduled reconciliation interval elapses, When reconciliation runs, Then missing TourEcho users present in IdP are created, stale attributes are updated to match IdP, and orphaned TourEcho users absent from IdP are deactivated per policy. Given reconciliation processes up to 10,000 users, When executed, Then it completes within 45 minutes p95 and publishes a reconciliation report with counts and error details. Given a transient dependency failure occurs mid-run, When reconciliation resumes, Then it continues from the last successful checkpoint without duplicating prior changes.
Exactly-Once Semantics (Idempotency Keys and ETags)
Given two identical deliveries of the same change share the same idempotency key or version, When processed, Then only one state change is applied and subsequent duplicates are acknowledged as no-ops. Given an update is received with an outdated ETag/version, When processed, Then the engine rejects the update without applying changes and records a 412 Precondition Failed outcome. Given concurrent updates for the same user arrive, When processed, Then operations are serialized by user and only the update with the freshest ETag/version is applied. Given idempotency keys are reused within a 24-hour window, When processed, Then duplicates are deduplicated and keys expire after the retention window.
Queued Processing, Retries, and Backoff
Given a transient error (HTTP 429/5xx or network timeout) occurs, When retrying, Then the engine retries up to 5 attempts with exponential backoff and jitter (initial 1s, capped at 60s) before moving the message to a dead-letter queue. Given messages for the same user are enqueued, When processed, Then ordering is preserved per user (FIFO by subject) to prevent out-of-order updates. Given a message is moved to the dead-letter queue after max retries, When this occurs, Then an alert is emitted within 2 minutes containing correlationId, error type, and retry history. Given the downstream dependency recovers, When processing resumes, Then the backlog is drained and p95 real-time latency returns to ≤60s within 15 minutes.
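The retry schedule above (5 attempts, exponential backoff with jitter, initial 1s, 60s cap) can be sketched as follows; "full jitter" is one common interpretation, assumed here for illustration:

```python
# Sketch of exponential backoff with full jitter: each attempt sleeps a
# random duration in [0, min(cap, base * 2**attempt)].
import random

def backoff_delays(attempts: int = 5, base: float = 1.0, cap: float = 60.0) -> list[float]:
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))  # 1, 2, 4, 8, 16 ... capped at 60
        delays.append(random.uniform(0, ceiling))  # jitter avoids thundering herds
    return delays
```

After the final attempt fails, the message would move to the dead-letter queue and trigger the alert described above.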
Seat Reclamation and Audit Preservation on Departure
Given a user is set to inactive or deleted in IdP, When processed, Then the TourEcho user is deactivated, seat is reclaimed to the license pool, and logins are blocked within 60s p95. Given a deactivation occurs, When recorded, Then an immutable audit record is stored with timestamps, actor (system), reason, pre/post state, and correlationId retrievable for 7 years. Given a user is reactivated in IdP, When processed, Then the same TourEcho user is reactivated without creating a duplicate and prior audit history remains intact.
Metrics, Observability, and SLO Alerts
Given the system is operating, When observed, Then the following metrics are published at 1-minute granularity: p50/p95/p99 end-to-end latency by operation type, queue depth, backlog age, success/failure rates, reconciliation duration, and last successful run timestamp. Given a latency SLO breach occurs (p95 > 60s for 5 consecutive minutes), When detected, Then a high-severity alert is emitted to on-call and a status indicator is shown in the admin dashboard. Given an operator queries the API for a user by externalId, When requesting last sync status, Then the API returns the last processed change timestamp, version/ETag, and correlationId.
Deprovisioning & Seat Reclamation with Audit Preservation
"As an operations manager, I want deprovisioned users to instantly lose access and have their seats reclaimed while preserving their historical activity so that we remain compliant and control licensing costs."
Description

When a user is set inactive or deleted via SCIM, immediately revoke access, terminate sessions, and transition the account to a soft-deactivated state that preserves full audit history. Reclaim paid seats/licenses and optionally transfer ownership of listings, teams, and feedback artifacts to designated managers to avoid orphaned data. Support reactivation paths that restore prior entitlements safely. Enforce retention and chain-of-custody requirements to meet brokerage compliance while reducing license spend automatically on departures.

Acceptance Criteria
Immediate Access Revocation on SCIM Deprovision
Given an active user exists with valid UI and API sessions And the IdP issues a SCIM PATCH or DELETE setting the user to inactive When TourEcho receives and validates the SCIM request Then all active access and refresh tokens for that user are revoked within 60 seconds And any subsequent API/UI requests using those tokens return 401 Unauthorized And login attempts for the user return "Account deactivated" within 60 seconds And an audit event "access_revoked" is recorded with timestamp, actor=SCIM, request_id, and source IP
Soft Deactivation with Full Audit Preservation
Given a user is deprovisioned via SCIM When the account state transitions to soft-deactivated Then the user record remains queryable to admins with status=soft-deactivated And no audit/event/feedback records are deleted or modified And audit entries are immutable (update and delete operations on audit endpoints return 403) And an audit event "soft_deactivated" includes prior roles, seats, teams, and a cryptographic checksum And the user's historical activity remains accessible to authorized admins per org retention policy
Automatic Seat Reclamation on Departure
Given a user holds one or more paid seats/licenses When the user is deprovisioned via SCIM Then all assigned seats are released back to the org pool within 5 minutes And license counts and available seat inventory reflect the change within 5 minutes And a billing/audit event "seat_reclaimed" is recorded with seat types and quantities And no active users exceed their licensed seat counts after reclamation
Ownership Transfer to Designated Manager
Given a deprovisioned user owns listings, teams, or feedback artifacts And a designated manager is resolvable from SCIM attributes (managerId) or org default When deprovisioning is processed Then ownership of all owned objects is transferred atomically to the designated manager within 10 minutes And original creator metadata is preserved And zero orphaned objects remain (query returns 0 orphaned records) And if no manager is resolvable, objects enter a "Transfer Pending" queue and an admin alert is sent within 2 minutes
Safe Reactivation Restores Prior Entitlements
Given a user is in soft-deactivated state And the IdP reactivates the user via SCIM When TourEcho processes the reactivation Then the user's prior roles, team memberships, and feature entitlements are restored within 5 minutes if seats are available And if required seats are unavailable, restoration is blocked and an admin task "seat_needed" is created within 2 minutes And previously revoked tokens remain invalid; new sessions require reauthentication And an audit event "reactivated" links to the prior "soft_deactivated" event
Compliance Retention and Chain-of-Custody Enforcement
Given an org retention policy is configured (minimum 7 years) When a user is deprovisioned via SCIM Then audit, listings, team, and feedback records associated to the user are retained for at least the configured period And purge requests for in-scope records during the retention period are denied with 403 and logged And export of the user's audit trail produces a chronologically ordered, tamper-evident report with event IDs, timestamps, actors, and hashes And all administrator access to deactivated user data is itself audited with who, when, and what fields were viewed or exported
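One standard way to make an exported audit trail tamper-evident, as the criterion requires, is a hash chain: each entry's hash covers its content plus the previous entry's hash, so any edit breaks every subsequent link. A minimal sketch (field names illustrative):

```python
# Sketch of a tamper-evident audit export via SHA-256 hash chaining.
import hashlib
import json

GENESIS = "0" * 64  # fixed hash for the first entry's predecessor

def chain_export(events: list[dict]) -> list[dict]:
    prev, out = GENESIS, []
    for ev in events:
        payload = json.dumps(ev, sort_keys=True)
        h = hashlib.sha256((prev + payload).encode("utf-8")).hexdigest()
        out.append({**ev, "prev_hash": prev, "hash": h})
        prev = h
    return out

def verify_chain(entries: list[dict]) -> bool:
    prev = GENESIS
    for e in entries:
        ev = {k: v for k, v in e.items() if k not in ("hash", "prev_hash")}
        payload = json.dumps(ev, sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode("utf-8")).hexdigest()
        if e["prev_hash"] != prev or e["hash"] != expected:
            return False  # an edited or reordered entry breaks the chain here
        prev = e["hash"]
    return True
```

Verification is O(n) and needs no trusted store beyond the final hash, which could itself be anchored externally for stronger chain-of-custody guarantees.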
Idempotent SCIM Deprovision and Robust Error Handling
Given the same user is deprovisioned multiple times via SCIM with identical payloads When TourEcho processes repeated requests Then the end state remains soft-deactivated with no duplicate audit entries or seat reclamation And the SCIM API responds 200/204 consistently and returns the same resource version/ETag And if any sub-step (seat reclamation or ownership transfer) fails, the system retries with exponential backoff for up to 1 hour and surfaces partial status in the admin console and via webhook "deprovision_partial" And all failures are logged with correlation IDs for support triage
Attribute & Group Mapping with Normalization
"As an IT administrator, I want to map IdP attributes and groups to TourEcho roles, offices, and license entitlements so that users receive the correct permissions and seats automatically."
Description

Provide configurable mappings from IdP attributes and groups to TourEcho constructs: titles to roles, manager to reporting lines, department/office to office membership, and license counts to seat entitlements. Include normalization rules (e.g., case/format cleanup, default values, conflict precedence) and presets for major IdPs. Validate incoming payloads, handle missing/invalid attributes gracefully, and maintain a clear mapping audit trail so that org structure, permissions, and seat allocation mirror the IdP with minimal admin effort.

Acceptance Criteria
Title-to-Role Mapping with Normalization
- Given a SCIM user payload with title " Senior AGENT " and a mapping of ["senior agent" -> Agent], When sync runs, Then whitespace is trimmed, case is normalized, and the user is assigned the Agent role.
- Given a SCIM user payload with a title that does not match any mapping, When sync runs, Then the user is assigned the configured default role (e.g., Viewer) and an info audit entry is recorded.
- Given a previously synced user whose title changes from "Assistant Broker" to "Broker Associate" and a mapping exists for both, When the next sync runs, Then the user’s role updates to the new mapped role within 60 seconds and the prior role is removed.
- Given a SCIM payload with title containing extra punctuation (e.g., "Sr. Agent"), When normalization rules are applied (punctuation removal enabled), Then the value maps to the same role as "Sr Agent".
- Given repeated identical SCIM events for the same user and title, When sync runs, Then the resulting role remains unchanged and no duplicate role-change entries are created (idempotent).
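A minimal sketch of the normalization pipeline these criteria describe (trim, lowercase, optional punctuation removal, default-role fallback); the function names are illustrative, not TourEcho's actual API:

```python
import re

def normalize_title(raw, strip_punctuation=True):
    """Trim whitespace, lowercase, optionally drop punctuation, collapse spaces."""
    value = raw.strip().lower()
    if strip_punctuation:
        value = re.sub(r"[^\w\s]", "", value)   # "sr. agent" -> "sr agent"
    return re.sub(r"\s+", " ", value)

def resolve_role(raw_title, mapping, default="Viewer"):
    """Look up the normalized title; fall back to the configured default role."""
    return mapping.get(normalize_title(raw_title), default)
```

With a mapping of `{"senior agent": "Agent", "sr agent": "Agent"}`, both `" Senior AGENT "` and `"Sr. Agent"` resolve to `Agent`, and an unmapped title falls through to the default.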
Manager-to-Reporting-Line Resolution
- Given a SCIM user payload with manager attribute referencing another user by immutable ID, When sync runs, Then the user’s manager in TourEcho is set to that user if they exist.
- Given the referenced manager does not yet exist locally, When sync runs, Then the relationship is queued for resolution and retried until the manager is provisioned or for up to 24 hours, after which the user is set to "No Manager" and a warning audit entry is created.
- Given a SCIM user payload where manager references the same user (self-reference) or creates a cycle, When sync runs, Then the manager field is not applied and an error audit entry is recorded with cycle-detection details.
- Given an existing reporting line that changes in the IdP, When the next sync runs, Then the updated manager is reflected within 60 seconds and the old relationship is removed.
- Given a SCIM payload with manager email instead of ID and an email-to-user mapping rule is enabled, When sync runs, Then the manager is resolved by normalized, case-insensitive email match.
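The self-reference and cycle checks above amount to walking the manager chain before applying the update; one possible sketch, assuming an in-memory map of current reporting lines:

```python
def would_create_cycle(user_id, new_manager_id, managers):
    """Return True if setting new_manager_id for user_id would create a
    self-reference or a cycle in the reporting graph.

    `managers` maps user_id -> current manager_id (or None at the top)."""
    seen = {user_id}
    current = new_manager_id
    while current is not None:
        if current in seen:
            return True            # reject the update, write error audit entry
        seen.add(current)
        current = managers.get(current)
    return False
```

If the check returns True, the sync would skip the manager field and record the chain it found as the "cycle-detection details" in the audit entry.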
Department/Office to Office Membership via Attribute or Group
- Given a SCIM user payload with department=" Denver " and a mapping ["denver" -> Office: Denver], When sync runs, Then whitespace/case are normalized and the user is placed in the Denver office.
- Given the SCIM user belongs to an IdP group "Office: Austin" and group-to-office mapping exists, When sync runs, Then the user is placed in the Austin office.
- Given both attribute and group map to offices and precedence is configured to Attribute over Group, When sync runs, Then the attribute-derived office is applied and the group-derived office is ignored with an info audit note.
- Given a user currently in Office: Seattle whose department changes to "Portland", When sync runs, Then the user is removed from Seattle and added to Portland in a single transaction.
- Given a user maps to multiple offices via multiple groups, When sync runs, Then only one office is applied according to configured tiebreaker, and a warning audit entry lists the discarded candidates.
License Count to Seat Entitlements & Seat Reclamation
- Given a SCIM user payload with licenseCount=0, When sync runs, Then the user holds no paid seat and is assigned the configured non-seat role.
- Given licenseCount=1 for a user without a seat and seats are available, When sync runs, Then a seat is assigned within 60 seconds and recorded in audit with before/after entitlement.
- Given licenseCount decreases from 2 to 1, When sync runs, Then one seat is reclaimed and the user retains one seat entitlement.
- Given licenseCount is negative or non-integer (e.g., " -1 " or "two"), When normalization/validation runs, Then the value is coerced to 0 and a warning audit entry is created.
- Given the account seat cap is reached and an incoming user has licenseCount>0, When sync runs, Then no seat is assigned, the user is placed in a Pending Seat state, and an alert/audit entry is generated for administrators.
- Given a user is deactivated in the IdP, When the next sync runs, Then all seats are reclaimed from that user within 5 minutes and the action is logged with correlation to the deactivation event.
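The coercion rule for negative or non-integer licenseCount values might look like this sketch, which returns a warning flag so the caller can write the warning audit entry:

```python
def coerce_license_count(raw):
    """Coerce an incoming licenseCount value to a non-negative integer.

    Returns (count, warning): invalid or negative inputs (" -1 ", "two")
    coerce to 0 with warning=True so the sync can record an audit entry."""
    try:
        count = int(str(raw).strip())
    except ValueError:
        return 0, True
    if count < 0:
        return 0, True
    return count, False
```

Stringifying before parsing lets the same path handle both JSON numbers and string-typed attribute values from the IdP.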
IdP Presets with Custom Overrides & Precedence
- Given an admin selects the Okta preset, When the mapping screen loads, Then default attribute/group mappings and normalization rules for Okta are pre-populated.
- Given a preset is selected, When the admin edits a specific mapping (e.g., adds "Team Lead" -> Lead role), Then the override is persisted and displayed as the effective mapping.
- Given both preset and custom mappings produce a different outcome for the same input, When precedence is configured to Custom over Preset, Then the custom mapping is applied and the audit shows the applied precedence rule.
- Given the admin switches presets from Azure AD to OneLogin, When changes are reviewed in Dry Run mode, Then a diff report shows impacted users and counts before persisting.
- Given the admin clicks "Reset to Preset", When confirmed, Then all custom overrides are removed and the effective mapping reverts to the preset with an audit entry capturing the reset action.
Payload Validation & Graceful Degradation
- Given a SCIM user payload missing department but containing other attributes, When validation runs, Then only the office mapping step is skipped, other mappings proceed, and a warning audit entry cites the missing field.
- Given a SCIM payload contains unexpected attribute types (e.g., manager is an array), When validation runs, Then the value is rejected, defaults are applied where configured, and processing continues without halting the user sync.
- Given attribute normalization rules (trim, lowercase, punctuation removal) are enabled, When a payload arrives with inconsistent casing/spacing, Then normalized values are used for mapping and the audit records that normalization occurred.
- Given a transient schema error occurs for a user, When retry policy is enabled, Then the mapping step is retried up to the configured limit with exponential backoff and is idempotent across retries.
- Given a payload contains an unsupported attribute, When validation runs, Then the attribute is ignored and an info audit entry is logged without affecting other mappings.
Mapping Audit Trail & Change History
- Given any user mapping change occurs (role, manager, office, seat), When sync completes, Then an immutable audit record is created containing timestamp, actor=SCIM, user identifier, source attribute(s), original value, new value, applied rule, and correlationId.
- Given an admin opens the audit UI, When filters are applied by user, attribute, date range, and rule, Then matching entries are returned within 2 seconds and can be exported as CSV and JSON.
- Given a change was the result of a default or conflict-resolution rule, When viewing the audit entry, Then the entry explicitly states that a default was applied or which precedence rule resolved the conflict.
- Given 365 days have passed since an audit entry was created, When retention policy runs, Then the entry is archived or purged according to policy and this action is itself auditable.
- Given simultaneous mapping updates for the same user, When audit entries are written, Then ordering is preserved by correlationId and timestamp with millisecond precision, preventing ambiguity.
Admin Console & Sync Health Monitoring
"As an organization admin, I want a single console to configure SCIM and monitor sync health so that I can operate provisioning and troubleshoot issues without engineering support."
Description

Create an admin UI to configure SCIM connectivity (enable/disable AutoSync), generate and rotate SCIM bearer tokens, manage attribute/group mappings, and run test or dry-run syncs. Surface operational visibility: last sync times, recent change count, reconciliation outcomes, and per-event logs with error details and suggested fixes. Include manual retry controls and a safe backfill trigger. Restrict access to organization admins and provide copy-paste setup instructions for common IdPs to accelerate onboarding.

Acceptance Criteria
AutoSync Toggle Control
- Given I am an organization admin on the Admin Console's Sync Settings page, When I toggle AutoSync to ON and confirm, Then the setting is persisted and remains ON after page refresh And a sync job is queued within 60 seconds and appears in the logs.
- Given AutoSync is ON, When I toggle AutoSync to OFF and confirm, Then no new sync jobs are started after 5 seconds And in-flight jobs are allowed to complete And the UI displays "Disabled" status and the timestamp of disablement.
- Given the AutoSync state changes, Then an audit log entry records actor, action (enable/disable), timestamp, and IP address.
- Given an API client queries the Admin Settings API, Then the AutoSync status reflects the saved state.
SCIM Bearer Token Generation & Rotation
- Given I am an organization admin on the SCIM Tokens tab, When I click Generate Token and set an optional expiration date, Then a new bearer token value is displayed once and can be copied to clipboard And the token is stored hashed server-side and cannot be retrieved after I close the modal And an audit entry records token creation (without the secret).
- Given at least one token exists, When I click Rotate Token, Then the previous token is revoked immediately And a new token is generated and activated And calls using the revoked token are rejected within 60 seconds with 401 Unauthorized And the UI updates last-used timestamps and active status.
- Given I click Revoke Token, Then the token becomes invalid instantly And the SCIM endpoint rejects it on the next request.
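Storing only a hash server-side, so the plaintext token can be shown exactly once, could be sketched with Python's standard library (an illustration of the pattern, not TourEcho's implementation):

```python
import hashlib
import secrets

def issue_token():
    """Generate a SCIM bearer token. Only the digest is persisted; the
    plaintext is returned once for display in the admin modal and is
    unrecoverable afterwards."""
    plaintext = secrets.token_urlsafe(32)
    digest = hashlib.sha256(plaintext.encode()).hexdigest()
    return plaintext, digest

def verify_token(presented, stored_digest):
    """Constant-time comparison of the presented token against the stored hash."""
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    return secrets.compare_digest(candidate, stored_digest)
```

Revocation then reduces to deleting or flagging the stored digest; the next request carrying the old token fails verification.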
Attribute & Group Mapping Management
- Given I am on the Attribute & Group Mappings tab, When I add or edit a source attribute path for title, manager, office, and license_count, Then the UI validates the path syntax and required mappings are marked complete or missing And duplicate or conflicting mappings are blocked with an inline error.
- Given valid mappings are configured, When I click Preview with a selected test user from the IdP, Then the system shows the resolved attribute values and role/office membership that would be applied.
- Given group-to-role mappings are defined, When an IdP group appears in a user’s profile during a dry-run, Then the preview shows the resulting TourEcho role and office assignment.
- Given I save mapping changes, Then the new configuration is versioned and an audit log entry records before/after values.
- Given required mappings are incomplete, Then AutoSync cannot be enabled and the UI explains what is missing.
Dry-Run and Test Sync Execution
- Given SCIM connectivity and mappings are valid, When I run a Dry-Run, Then no user records are created, updated, or deactivated in TourEcho And the report shows counts of creates, updates, deactivations, and unchanged And a per-user diff preview is available for at least the first 100 impacted users And I can download the dry-run report as CSV and JSON.
- Given a dry-run encounters errors, Then each error row includes user identifier, error code/message, and a suggested fix link.
- Given I run a Test Connection, Then the system validates token, base URL, and required SCIM endpoints and returns a pass/fail result with latency.
Sync Health Dashboard & Logs
- Given I open the Sync Health page, Then I can see last successful sync timestamp, last attempt timestamp, sync mode (delta/full), and 24h change counts.
- Given the event log is visible, When I filter by outcome (success/failure), action (create/update/deactivate/reclaim-seat), time range, or user email, Then only matching events are shown And results are sorted by newest first and support free-text search by email or user ID.
- Given an event row is opened, Then I can see request/response snippets (with secrets redacted), error codes, and a suggested remediation.
- Given the dashboard is open, Then metrics auto-refresh at least every 60 seconds without losing filters.
- Given retention is enforced, Then logs are retained for a minimum of 30 days and older entries are not shown.
Manual Retry & Safe Backfill Controls
- Given failed events are present, When I select one or more failed events and click Retry, Then retries are queued immediately And successful retries update the event status to Resolved and create a new log entry linking the attempts.
- Given AutoSync is ON, When I trigger a Safe Backfill, Then a confirmation modal describes scope and impact and requires explicit confirmation And the backfill runs in controlled batches respecting API rate limits and IdP throttling And progress (processed/total, ETA) is displayed and can be canceled safely And no duplicate users or duplicate updates are created (idempotent operations).
- Given a backfill is already running, Then the UI prevents starting another backfill and displays the current job’s progress.
Access Control & IdP Setup Guides
- Given I am not an organization admin, When I attempt to access the Admin Console or SCIM settings endpoints, Then I receive a 403 Forbidden with no sensitive configuration details leaked And an access-denied audit entry is recorded.
- Given I am an organization admin, When I open the Setup Guides, Then I can choose Okta, Azure AD, or OneLogin And the guide includes tenant-specific SCIM base URL, required attributes, and a paste field for the bearer token And Copy to Clipboard buttons copy exact values and show a success toast.
- Given I click Test Connection from a setup guide, Then the system validates the provided values and shows pass/fail with remediation steps on failure.
SCIM Security & Access Controls
"As a security lead, I want strong authentication, authorization, and throttling on SCIM endpoints so that only trusted IdPs can provision accounts and the service remains resilient and compliant."
Description

Harden SCIM integration with scoped bearer tokens (least privilege), token rotation, optional IP allowlisting, request validation, and strict schema enforcement. Implement rate limiting and burst protection to mitigate abusive or misconfigured IdP traffic, and fail closed when authorization or schema checks fail. Encrypt data in transit and at rest, minimize PII exposure in logs, and capture security-relevant events for audit. Provide detailed integration documentation and security runbooks to satisfy enterprise due diligence and compliance requirements (e.g., SOC 2).

Acceptance Criteria
Scoped Bearer Token Enforcement
- Given a SCIM request with a missing or expired bearer token, When it reaches any SCIM endpoint, Then the API returns 401 with WWW-Authenticate: Bearer and no side effects occur. - Given a bearer token without the required scim:provision scope, When it attempts POST/PATCH/DELETE, Then the API returns 403 and denies the operation. - Given a bearer token scoped to Tenant A, When it targets resources of Tenant B, Then the API returns 403 and performs no reads or writes. - Given a bearer token with read-only scope, When it performs GET on /Users or /Groups, Then the API returns 200; When it performs POST/PATCH/DELETE, Then the API returns 403. - Given any authentication failure, When the API responds, Then the response body conforms to SCIM 2.0 Error format and secrets/tokens are never logged in app or audit logs.
Token Rotation and Revocation
- Given an admin issues a new SCIM access token, When rotation is confirmed, Then the new token is usable immediately and the previous token remains valid for a grace period of up to 15 minutes (configurable). - Given the previous token is explicitly revoked, When a request uses it, Then the API returns 401 within 60 seconds of revocation and logs a security event. - Given rotation occurs, When both old and new tokens are presented during the grace period, Then at most two active tokens per tenant are accepted; any older tokens return 401. - Given a token reaches its configured expiry, When it is used, Then the API returns 401 and no changes are persisted. - Given any token operation (issue, rotate, revoke), When it completes, Then an audit event is recorded containing tenantId, hashed tokenId, actor, timestamp, and outcome.
SCIM Endpoint IP Allowlisting
- Given IP allowlisting is enabled for a tenant, When a request originates from a non-allowed IPv4/IPv6 address, Then the API returns 403 and performs no side effects. - Given IP allowlisting is enabled, When a request originates from an allowed CIDR, Then the API processes it normally. - Given an admin updates the allowlist, When changes are saved, Then enforcement reflects the new CIDRs within 60 seconds. - Given a blocked request, When the response is returned, Then a security event is logged with source IP, tenantId, endpoint, and reason=ip_not_allowed. - Given large enterprises, When configuring the allowlist, Then the system supports at least 50 distinct CIDRs per tenant.
Strict Schema Validation and Fail-Closed Behavior
- Given a SCIM request with Content-Type not equal to application/scim+json, When it is submitted, Then the API returns 415 and does not process the payload. - Given a POST /Users payload missing required attributes or containing unknown attributes, When validated, Then the API returns 400 with SCIM error details and no user is created. - Given a PATCH operation with unsupported op or path, When processed, Then the API returns 400 and makes no partial updates. - Given any authorization or schema validation failure, When handling the request, Then the system fails closed with zero side effects on storage or downstream actions. - Given oversized payloads (>1 MB, configurable), When received, Then the API returns 413 and logs a validation error event.
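The fail-closed validation ladder above (media type, size cap, required and unknown attributes) could be sketched as a pure function; the attribute sets shown are an illustrative minimum, not the full SCIM 2.0 User schema:

```python
SCIM_CONTENT_TYPE = "application/scim+json"
REQUIRED_USER_ATTRS = {"schemas", "userName"}                       # illustrative minimum
ALLOWED_USER_ATTRS = REQUIRED_USER_ATTRS | {"name", "emails", "active", "title"}
MAX_PAYLOAD_BYTES = 1_000_000                                        # ~1 MB, configurable

def validate_create_user(content_type, body_bytes, payload):
    """Return (http_status, errors). Any failure short-circuits before
    persistence, so a rejected request has zero side effects."""
    if content_type != SCIM_CONTENT_TYPE:
        return 415, ["unsupported media type"]
    if len(body_bytes) > MAX_PAYLOAD_BYTES:
        return 413, ["payload too large"]
    missing = REQUIRED_USER_ATTRS - payload.keys()
    unknown = payload.keys() - ALLOWED_USER_ATTRS
    if missing or unknown:
        return 400, [f"missing: {sorted(missing)}", f"unknown: {sorted(unknown)}"]
    return 200, []
```

Because validation runs before any write, a 400/413/415 response implies nothing reached storage or downstream actions, which is the fail-closed guarantee the criteria require.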
Rate Limiting and Burst Protection
- Given the default per-tenant limit is 300 requests/min and 60 requests/10s burst (both configurable), When a client exceeds either threshold, Then excess requests receive 429 with Retry-After and are not executed. - Given rate limiting is enforced, When returning 429, Then the response includes X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset headers. - Given sustained abusive traffic, When throttling occurs, Then a security event is emitted (max once per minute per tenant) with counts and reason=rate_limited. - Given normal traffic within limits, When requests are sent, Then no 429 responses are returned and processing latency is unaffected beyond configured queueing. - Given rate-limit settings are changed, When applied, Then the new limits take effect within 60 seconds.
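One way to enforce the dual thresholds (a per-minute limit plus a shorter burst window) is a sliding-window counter kept per tenant; a sketch, with the limits shrunk in the usage example for readability:

```python
from collections import deque

class DualWindowLimiter:
    """Per-tenant sliding windows: a request is rejected if either the
    per-minute count or the burst-window count would exceed its threshold.
    Both thresholds are configurable, matching the criteria above."""

    def __init__(self, per_minute=300, per_burst=60, burst_window=10.0):
        self.per_minute = per_minute
        self.per_burst = per_burst
        self.burst_window = burst_window
        self.stamps = deque()            # timestamps of accepted requests

    def allow(self, now):
        # Drop accepted requests older than the one-minute window.
        while self.stamps and now - self.stamps[0] >= 60.0:
            self.stamps.popleft()
        burst = sum(1 for t in self.stamps if now - t < self.burst_window)
        if len(self.stamps) >= self.per_minute or burst >= self.per_burst:
            return False                 # caller responds 429 with Retry-After
        self.stamps.append(now)
        return True
```

The HTTP layer would translate a `False` into a 429 plus the X-RateLimit-* headers; the timestamp parameter is passed in explicitly so the logic is testable without a clock.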
Transport and At-Rest Encryption
- Given any connection to SCIM endpoints, When TLS version < 1.2 is negotiated, Then the handshake fails and the connection is refused. - Given HTTPS requests, When responses are returned, Then HSTS header (max-age >= 31536000; includeSubDomains) is present and no insecure redirects are used. - Given an HTTP (plaintext) request to the API, When it arrives, Then it is rejected (no payload processing) and guidance to use HTTPS is returned. - Given production data storage for SCIM resources, When inspected, Then encryption at rest with KMS-managed keys is enabled and key rotation occurs at least annually, with rotation events logged. - Given backups of SCIM data, When created/restored, Then they remain encrypted at rest and access is audited.
Security Logging, PII Redaction, and Auditability
- Given a SCIM request or response, When logging occurs, Then bearer tokens, userName, emails, phone numbers, names, and addresses are never logged; any incidental PII is redacted (e.g., ****) before storage. - Given security-relevant events (auth failures, schema validation failures, rate-limit triggers, token rotations/revocations, IP blocks), When they occur, Then an immutable audit event is written with tenantId, correlationId, timestamp (UTC ISO 8601 ms), endpoint, outcome, and reason code. - Given the audit log, When queried via admin API or exported to SIEM, Then events are available within 60 seconds of occurrence and retained for at least 1 year. - Given logging levels, When debug logging is enabled by an admin, Then PII redaction remains enforced and debug is disabled by default. - Given any audit retrieval attempt that tries to modify or delete events, When executed, Then the API returns 405 and no changes are made.
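The redaction rule could be approximated with pattern-based masking applied before a log line is persisted; the patterns below are illustrative, and a production redactor would need a broader catalog (names, addresses, structured-field scrubbing):

```python
import re

# Illustrative patterns: bearer credentials, email addresses, phone numbers.
BEARER = re.compile(r"(?i)bearer\s+\S+")
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(line):
    """Mask tokens, emails, and phone numbers in a log line before storage.

    Bearer values are masked first so the email/phone patterns never see them."""
    line = BEARER.sub("Bearer ****", line)
    line = EMAIL.sub("****", line)
    return PHONE.sub("****", line)
```

Running redaction in the logging pipeline itself (rather than at call sites) is what lets the "redaction remains enforced even in debug mode" criterion hold.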

JIT Join

Just‑in‑time user creation on first SSO with domain claim rules and optional invite approval. Perfect for pilots or smaller offices without SCIM—new teammates sign in and are instantly placed into default roles from Role Blueprints, slashing friction without sacrificing control.

Requirements

Verified Domain Claim
"As a broker-owner or IT admin, I want to verify ownership of my office's email domain so that only legitimate teammates can be auto-created on first SSO."
Description

Provide a secure mechanism for organizations to claim and verify ownership of their email domains before enabling JIT Join. Support multiple domains per org and verification via DNS TXT record or admin email challenge, with real-time status checks and re-verification prompts on changes. Enforce that JIT provisioning is only allowed for verified, non-public domains, with configurable allow/deny lists. Include revocation flows that immediately block new JIT sign-ins on revoked domains and surface clear error messaging to end users. Expose domain management via Admin UI and API, and persist audit entries for claim, verify, and revoke actions.

Acceptance Criteria
Verify Domain via DNS TXT Record (Admin UI)
- Given an authorized org admin adds "example.com" in Admin UI and selects "DNS TXT" verification, When the system generates a unique token and displays the TXT record name/value and a "Check Now" action, Then the domain status is set to "Pending Verification" and the exact TXT record instructions are visible.
- When DNS for example.com resolves with the provided TXT value and the admin clicks "Check Now", Then the status transitions to "Verified" within 60 seconds, the method is recorded as "DNS TXT", and JIT eligibility for the domain is set to true.
- And if the TXT is missing or mismatched, the status remains "Pending Verification" and the UI shows the last-checked timestamp and error code in {TXT_NOT_FOUND, TXT_MISMATCH} with retry allowed.
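The token comparison behind "Check Now" reduces to matching the resolved TXT values against the issued token; a sketch, assuming a hypothetical `tourecho-verify=` record format (the actual record name is not specified above):

```python
def check_txt_verification(expected_token, resolved_txt_values):
    """Classify a DNS TXT check using the error codes named in the criteria.

    `resolved_txt_values` is whatever the DNS resolver returned for the
    domain's TXT records; actually resolving DNS (e.g. via dnspython) is
    left to the caller so this logic stays testable."""
    prefix = "tourecho-verify="                       # assumed record format
    ours = [v for v in resolved_txt_values if v.startswith(prefix)]
    if not ours:
        return "TXT_NOT_FOUND"
    if prefix + expected_token in ours:
        return "Verified"
    return "TXT_MISMATCH"
```

Separating resolution from classification also makes the automated re-check path (same logic, triggered on a schedule) trivial to share.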
Verify Domain via Admin Email Challenge
- Given an authorized org admin claims "example.com" and selects "Admin Email Challenge", When the admin requests a challenge code, Then the system sends a one-time code to admin@example.com and displays a code entry form And the code validity is 15 minutes with a maximum of 5 attempts.
- When the correct code is submitted within validity, Then the domain status becomes "Verified", the method is "Email Challenge", and JIT eligibility is set to true.
- When the code is incorrect, expired, or attempts exceeded, Then verification fails with error CODE_INVALID_OR_EXPIRED and the status remains "Pending Verification".
Real-Time Status and Re-Verification on Verification Artifact Changes
- Given "example.com" is in status "Verified" via DNS TXT, When the TXT record is removed or no longer matches on a manual "Check Now" or automated check, Then the status changes to "Needs Re-Verification", JIT eligibility is set to false, and an in-app banner plus email notification are sent to org admins within 10 minutes.
- And the Admin UI shows a "Re-verify" action (DNS TXT or Email) and displays last successful verification and last check timestamps.
- When verification is re-established via either method, Then the status returns to "Verified" and JIT eligibility is restored.
Enforce JIT Only for Verified, Non-Public Domains
- Given JIT Join is enabled and the org has claimed domains, When a user signs in via SSO with user@example.com and example.com is "Verified", not on the public email provider list, and not denied, Then the user is JIT-provisioned, assigned default roles per the selected Role Blueprint, and sign-in succeeds.
- When a user signs in with a domain that is Unverified (Pending, Needs Re-Verification, or Revoked) or a public/free-email domain, Then JIT provisioning is blocked with HTTP 403 (API) and the end user sees: "Your organization has not enabled sign-in for this email domain. Please contact your administrator."
- And the event is recorded with reason in {DOMAIN_UNVERIFIED, DOMAIN_PUBLIC, DOMAIN_REVOKED}.
Allow/Deny Lists Enforcement at JIT Sign-In
- Given the org configures domain rules with allowlist and denylist entries (exact domains and wildcard subdomains like *.example.com), When a user attempts first SSO, Then denylist rules take precedence over allowlist; if the email matches any denylist pattern, JIT is blocked with reason DOMAIN_DENIED.
- And if the email domain matches an allowlist entry and the domain is "Verified", JIT proceeds.
- And if allowlist is non-empty and no allowlist entry matches, JIT is blocked with reason DOMAIN_NOT_ALLOWED.
- And rule matching is case-insensitive and completes in under 50 ms per attempt.
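Denylist-over-allowlist precedence with case-insensitive wildcard matching can be expressed compactly; a sketch using `fnmatch` for patterns like *.example.com, with reason codes taken from the criteria above:

```python
import fnmatch

def evaluate_domain(email, allowlist, denylist, verified_domains):
    """Apply the JIT domain rules in precedence order.

    Deny wins; then a non-empty allowlist must match; then the domain must
    be verified. Matching is case-insensitive; *.example.com matches
    subdomains but not the bare apex, per fnmatch semantics."""
    domain = email.rsplit("@", 1)[-1].lower()

    def matches(patterns):
        return any(fnmatch.fnmatch(domain, p.lower()) for p in patterns)

    if matches(denylist):
        return "DOMAIN_DENIED"
    if allowlist and not matches(allowlist):
        return "DOMAIN_NOT_ALLOWED"
    if domain not in verified_domains:
        return "DOMAIN_UNVERIFIED"
    return "ALLOWED"
```

The pure-function shape keeps each evaluation to a handful of string comparisons, comfortably inside the 50 ms budget.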
Revocation Immediately Blocks New JIT Sign-Ins with User Messaging
- Given example.com is "Verified" and JIT Join is accepting new users, When an authorized admin revokes the domain via Admin UI or API, Then the domain status becomes "Revoked", JIT eligibility is set to false, and new JIT sign-ins using @example.com fail globally within 60 seconds.
- And existing provisioned users from @example.com remain able to sign in per account policy.
- And end users attempting new JIT sign-in see: "Sign-up is disabled for this email domain. Please contact your administrator." and the API returns HTTP 403 with reason DOMAIN_REVOKED.
Domain Management via Admin UI and API (CRUD, Multi-Domain, and Audit)
- Given authorized admins access Domain Management via Admin UI or API, When they create, list, update, verify, and revoke domains for their org, Then the API exposes POST /orgs/{orgId}/domains, GET /orgs/{orgId}/domains, GET /orgs/{orgId}/domains/{domain}, PATCH /orgs/{orgId}/domains/{domain}, DELETE /orgs/{orgId}/domains/{domain} with RBAC and returns statuses in {201,200,204,400,401,403,404,409}.
- And the system supports multiple domains per org with statuses in {Pending Verification, Verified, Needs Re-Verification, Revoked}.
- And a domain cannot be claimed by more than one org; attempts to claim an already-claimed domain return HTTP 409 with error DOMAIN_ALREADY_CLAIMED.
- And every claim, verify, and revoke action persists an audit entry (actor, channel UI/API, method DNS TXT/Email, timestamp, IP, reason) retrievable via GET /orgs/{orgId}/domains/{domain}/audit.
First-SSO JIT Provisioning
"As a new agent at a participating office, I want to sign in with my work SSO and be ready to use TourEcho immediately so that I can schedule showings without waiting for IT."
Description

Automatically create and activate a user account on first successful SSO if the email domain matches a verified domain and policy allows. Support Google and Microsoft OIDC natively (with SAML for enterprise), hydrating profile fields from IdP claims (name, email, avatar, department, groups) and assigning the user to the correct organization. Ensure idempotent, transactional creation with duplicate handling: merge with existing pending invites, reactivate deactivated accounts per policy, and reject conflicts with clear guidance. Respect per-domain approval settings (auto-activate vs. pending approval). Provide resilient error handling, retries for transient IdP failures, and comprehensive event logging.

Acceptance Criteria
Auto-Provision on First SSO with Verified Domain (Auto-Activate)
- Given a user signs in via Google or Microsoft OIDC and the email domain is verified and set to Auto-Activate, when IdP authentication succeeds, then create a new user record transactionally with status Active within 2 seconds of IdP callback. - Given IdP provides claims (name, email, avatar, department, groups), when provisioning completes, then profile fields are hydrated and persisted; missing optional claims default to null and do not block activation. - Given the user’s email domain maps to an organization via domain claim rules, when provisioning, then assign the user to that organization and grant default roles from Role Blueprints. - Given an Active user already exists with the same email, when first SSO is attempted, then do not create a duplicate; instead sign in to the existing account and log an idempotent event. - Given event logging is enabled, when provisioning succeeds, then emit auth.sso.success, user.provisioned, and role.assigned events with a shared correlation ID.
Domain Requires Approval — Pending Activation
- Given a verified domain has Approval setting = Pending Approval, when first SSO succeeds, then create a user with status Pending Approval and deny access to app features until approved. - Given a user is Pending Approval, when an org admin approves in the admin console, then activate the account and assign default roles; when rejected, then delete the pending user and send an email with guidance. - Given policy restricts sign-in for pending users, when the pending user attempts app access, then show an Awaiting Admin Approval screen and return HTTP 403 for protected APIs. - Given event logging, when a user is created as pending, then emit user.provisioned_pending and admin.approval_required events with org and domain context.
Merge with Existing Pending Invite
- Given an invite exists for the same email with status Invited or Pending, when the user completes first SSO, then merge the invite with the SSO identity, preserving inviter metadata, and activate per domain policy. - Given activation policy is Auto-Activate, when the merge completes, then mark the invite Accepted and set user status Active; if policy is Pending Approval, then set user status Pending Approval and mark invite Pending Decision. - Given simultaneous invite acceptance link click and SSO callback, when processed, then produce a single user record and a single Accepted invite without duplicate audit entries. - Given an email alias is used and the IdP provides alternate email claims, when alias matching is enabled and the alias matches allowed rules, then merge accounts; otherwise reject with a clear conflict message and guidance.
Reactivation and Conflict Handling
- Given a deactivated user with matching email attempts SSO and policy allows reactivation, when IdP authentication succeeds, then reactivate the existing account and update profile claims without creating a new account. - Given policy disallows reactivation, when a deactivated user attempts SSO, then block sign-in with message Account deactivated — contact admin and emit user.reactivation_blocked. - Given an active user exists in a different organization due to distinct domain mapping, when SSO occurs, then reject provisioning with a clear cross-org conflict message and no data leakage. - Given a user from an unverified or disallowed domain attempts SSO, when evaluated, then reject provisioning with Domain not allowed and provide a link to request access.
Claim Mapping and Role Assignment
- Given IdP returns a groups claim, when claim-to-role rules are evaluated, then assign roles from Role Blueprints that match; if none match, assign only default roles. - Given required claims (sub, email, email_verified) are missing or invalid, when SSO occurs, then abort provisioning, show Required claims missing, and emit auth.sso.claims_invalid with details. - Given an avatar URL is provided, when provisioning, then enqueue avatar fetch and store asynchronously; failure to fetch must not block activation or sign-in. - Given a department claim is provided, when processed, then persist it in the user profile and include it in the audit log entry for provisioning.
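The required-claims gate (sub, email, email_verified) is a simple pure check; a sketch that returns the problems found so the caller can emit auth.sso.claims_invalid with details:

```python
def check_required_claims(claims):
    """Validate the claims required before JIT provisioning may proceed.

    Returns a list of problem strings; an empty list means the claims pass.
    The exact problem wording is illustrative."""
    problems = []
    if not claims.get("sub"):
        problems.append("missing sub")
    email = claims.get("email")
    if not email or "@" not in email:
        problems.append("missing or invalid email")
    if claims.get("email_verified") is not True:
        problems.append("email not verified")
    return problems
```

Running this check before any write means an aborted provisioning attempt leaves no partial user record, consistent with the transactional-creation requirement.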
Idempotency and Concurrency Safety
- Given multiple SSO callbacks are received within 5 seconds for the same IdP sub/email, when processing, then create at most one user and return success to duplicates referencing the same user ID via an idempotency key.
- Given two parallel provisioning requests for the same identity, when transactions complete, then one succeeds and the other detects the existing record and reuses it without partial or inconsistent state.
- Given a transient error occurs after user creation but before role assignment, when the operation is retried, then roles are assigned without creating another user and the final state is consistent.
- Given the IdP later corrects the sub-to-email linkage, when a mismatch is detected, then halt provisioning, flag a security alert, and require admin intervention to reconcile.
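A minimal sketch of the idempotency-key behaviour, assuming an in-process map and lock stand in for what would really be a database unique constraint or distributed lock; the class and key derivation are illustrative.

```python
import hashlib
import threading

class JitProvisioner:
    """Illustrative in-memory sketch of idempotent JIT user creation."""

    def __init__(self):
        self._lock = threading.Lock()
        self._by_key: dict[str, str] = {}  # idempotency key -> user_id
        self._next_id = 0

    def _key(self, idp_sub: str, email: str) -> str:
        # Key derived from IdP sub + normalized email, per the duplicate rule
        return hashlib.sha256(f"{idp_sub}|{email.lower()}".encode()).hexdigest()

    def provision(self, idp_sub: str, email: str) -> str:
        key = self._key(idp_sub, email)
        with self._lock:  # parallel requests resolve to a single record
            if key in self._by_key:
                return self._by_key[key]  # duplicate gets success + same user ID
            self._next_id += 1
            user_id = f"user-{self._next_id}"
            self._by_key[key] = user_id
            return user_id
```

The lock models the "one succeeds, the other reuses" transaction behaviour; a real deployment would let the database enforce uniqueness.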
Resilience, Retry, and Protocol Support (OIDC/SAML)
- Given IdP network timeouts or 5xx responses occur, when applying retry policy, then retry up to 3 times with exponential backoff; on final failure, present a non-destructive error and emit auth.sso.transient_failure.
- Given OIDC tokens from Google or Microsoft are received, when validating, then verify signature via JWKS, issuer, audience, nonce, and exp with up to 60 seconds clock skew tolerance.
- Given SAML is configured for an enterprise tenant, when a SAML assertion is received, then validate signature, audience, ACS URL, and map attributes into the same claim pipeline as OIDC.
- Given any flow completes (success or failure), when logging, then write a comprehensive audit trail including timestamp, tenant/org ID, correlation ID, user identifier (or temporary), outcome, and error details if applicable.
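The retry policy (up to 3 retries with exponential backoff) can be sketched as follows. `TransientError` and the helper name are illustrative stand-ins for IdP timeout/5xx conditions, not product APIs.

```python
import time

class TransientError(Exception):
    """Stands in for IdP network timeouts and 5xx responses."""

def with_retries(call, max_retries=3, base_delay=0.5, sleep=time.sleep):
    """Retry transient failures up to max_retries times with exponential
    backoff (base_delay, 2x, 4x), then re-raise for the caller to surface
    a non-destructive error (and emit auth.sso.transient_failure)."""
    attempt = 0
    while True:
        try:
            return call()
        except TransientError:
            attempt += 1
            if attempt > max_retries:
                raise  # final failure after the configured retries
            sleep(base_delay * (2 ** (attempt - 1)))
```

Injecting `sleep` keeps the policy testable without real delays.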
Role Blueprint Auto-Assignment
"As an office admin, I want new teammates to inherit the correct roles automatically based on our Role Blueprints so that access is consistent and setup effort is minimized."
Description

Assign newly provisioned users to default roles and permissions using Role Blueprints configured per organization and per domain. Allow optional mappings from IdP attributes (e.g., groups, department, title) to specific Role Blueprints with precedence rules and a safe fallback. Prevent privilege escalation by validating mapped roles against org policy. Provide a dry-run preview during configuration and an optional sync mode that updates roles on subsequent logins when IdP attributes change. Log all role assignment actions for auditability.

Acceptance Criteria
Default Blueprint on First SSO (Org and Domain Overrides)
Given an organization has a global default Role Blueprint "Agent (Org Default)" And the organization has a claimed domain "example.com" with a domain default Role Blueprint "Leasing Agent (Domain Default)" And a new, unrecognized user john@example.com completes SSO and JIT user creation is enabled When the user account is provisioned Then the user is assigned the "Leasing Agent (Domain Default)" Role Blueprint And the user’s effective permissions exactly match those defined in that Role Blueprint And if the domain had no domain default configured, the user would be assigned the "Agent (Org Default)" Role Blueprint instead And an audit log entry is created capturing user_id, org_id, domain, assigned_blueprint, assignment_reason ("domain-default" or "org-default"), and timestamp
IdP Attribute Mapping with Precedence and Fallback
Given mapping rules are configured as follows with explicit priority order: (1) IdP group contains "Leasing" -> Role Blueprint "Leasing Agent"; (2) IdP department equals "Marketing" -> Role Blueprint "Marketing Analyst" And the organization fallback Role Blueprint is "Agent (Org Default)" And an inbound SSO assertion for a new user contains groups ["Leasing","Other"] and department "Marketing" When the user is provisioned via JIT Then the system evaluates rules in priority order and assigns Role Blueprint "Leasing Agent" And if no mapping rules match, the user is assigned the fallback "Agent (Org Default)" Role Blueprint And the audit log records evaluated_attributes (redacted), ordered_rule_ids, matched_rule_id, assigned_blueprint, fallback_used (true/false)
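The priority-ordered evaluation with fallback described above might look like this sketch; the rule structure and names are assumptions, and the two example rules mirror the criteria.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MappingRule:
    rule_id: str
    matches: Callable[[dict], bool]  # predicate over the user's IdP attributes
    blueprint: str

def resolve_blueprint(rules: list[MappingRule], attrs: dict, fallback: str):
    """Rules are evaluated in explicit priority order; the first match wins.
    Returns (blueprint, matched_rule_id, fallback_used) for the audit log."""
    for rule in rules:  # list is pre-sorted by priority
        if rule.matches(attrs):
            return rule.blueprint, rule.rule_id, False
    return fallback, None, True

# Example rules from the scenario above
RULES = [
    MappingRule("1", lambda a: "Leasing" in a.get("groups", []), "Leasing Agent"),
    MappingRule("2", lambda a: a.get("department") == "Marketing", "Marketing Analyst"),
]
```

Returning the matched rule ID and `fallback_used` flag alongside the blueprint makes the audit fields (matched_rule_id, fallback_used) fall out naturally.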
Policy-Based Privilege Escalation Guard
Given org policy disallows assignment of Role Blueprints exceeding permission tier "Manager" and blocks specific Blueprint "Owner" And a mapping rule targets the "Owner" Blueprint for the signing-in user When the user completes SSO and role resolution runs Then the system prevents assignment of the disallowed Blueprint And assigns the configured safe fallback Role Blueprint "Agent (Org Default)" instead And the audit log records policy_check = "violation", blocked_blueprint = "Owner", assigned_blueprint = "Agent (Org Default)", and timestamp And the user successfully signs in with only the fallback’s permissions And (if admin notifications are enabled) an alert is sent to org admins summarizing the violation
Dry-Run Preview of Role Resolution
Given an admin configures or edits Role Blueprints and IdP attribute mappings And selects "Run Dry-Run Preview" with a sample set of N users from the IdP (N configurable up to at least 500) When the dry-run executes Then no changes are persisted to any user accounts or permissions And the preview displays, per sampled user, the resolved Role Blueprint, matched rule (or fallback), and any policy violations detected And the preview shows aggregate counts per Blueprint and count of violations And the admin can download the preview results as CSV and JSON And the admin can save the configuration only after acknowledging any detected violations (acknowledgment recorded in audit log)
Sync Mode Updates on Subsequent Logins
Given sync mode is enabled for the organization And a user currently has Role Blueprint "Leasing Agent" And the user’s IdP attributes now match the mapping for Role Blueprint "Field Ops" When the user signs in subsequently via SSO Then the user’s Role Blueprint is updated to "Field Ops" prior to issuing the session’s authorization claims And permissions removed by the change are revoked immediately and no later than 60 seconds after login And the audit log records before_blueprint = "Leasing Agent", after_blueprint = "Field Ops", matched_rule_id, and timestamp And if sync mode is disabled, the user’s Role Blueprint remains unchanged and the audit log records reason = "sync-disabled"
Comprehensive Audit Logging for Role Assignments
Given any role assignment evaluation occurs (initial assignment, fallback due to no match, reassignment via sync, or policy block) When the evaluation completes Then an immutable audit record is written with fields: event_id, timestamp (UTC), user_id, org_id, domain, sso_transaction_id, evaluated_attributes (PII-redacted), rules_considered, matched_rule_id (nullable), policy_check_result, assigned_blueprint (nullable if blocked), previous_blueprint (if applicable), actor = "system" or admin_user_id, request_ip (if available) And audit records are queryable by time range, user, and event type within the UI and API And audit records are exportable as CSV and JSON And PII in evaluated_attributes is redacted according to org policy and compliance settings
Optional Invite Approval Gate
"As a security-conscious broker-owner, I want to review and approve first-time SSO joins so that I retain control over who enters our workspace."
Description

Offer an org-level policy that routes first-time SSO users into a pending state requiring manual approval before activation. Allow administrators to define approver roles or specific approvers, and send actionable notifications with one-click approve/deny. Include SLA reminders, auto-expiration for stale requests, and templated decline reasons. Provide clear end-user messaging during pending/denied states and ensure that approvals immediately activate the account and apply Role Blueprint assignments. Persist full approval decision history for compliance.

Acceptance Criteria
Org Policy Toggle and Approver Configuration
Given I am an Org Admin and the Optional Invite Approval Gate is disabled, When I enable it and save, Then all subsequent first-time SSO sign-ins in this org require approval. Given the policy is enabled, When I assign approvers by role and/or specific users, Then only members matching those assignments are eligible to approve. Given no approver role or user is configured, When the policy is enabled, Then the system defaults approver eligibility to Org Owner and Org Admin roles and displays a configuration warning. Given multiple approver roles and users are configured, When changes are saved, Then the effective approver list updates within 60 seconds and is used for new requests. Given default Role Blueprint assignment is configured, When a user is approved, Then those roles are applied to the user profile.
First-Time SSO Pending State and End-User Messaging
Given a user with a claimed email domain signs in via SSO for the first time and the approval gate is enabled, When identity is verified, Then the user account is created in Pending Approval state and access to application features is blocked except the pending screen. Given a user is in Pending Approval, When they attempt to access authenticated endpoints, Then the API returns 403 Pending Approval and the web app redirects to the pending page. Given a user is in Pending Approval, When they view the pending page, Then it displays the org name, approver contact (if available), expected SLA window, and a support link. Given a user is Denied, When they attempt to sign in, Then a denied message is shown including the selected decline reason and a support link, and no authenticated session is established.
One-Click Approve/Deny Notification Flow
Given a pending request exists, When approver eligibility is determined, Then eligible approvers receive notifications within 2 minutes containing one-click Approve and Deny actions. Given an approver clicks Approve from the notification, When the link is valid and the request is still pending, Then the request is approved and the account is activated without requiring the approver to log in again. Given an approver clicks Deny from the notification, When a decline reason template is selected or a freeform reason is entered, Then the request is denied and the reason is recorded and sent to the requester. Given an approve/deny link is clicked after the request is resolved, When the system processes the click, Then the action is idempotent and a "Request already resolved" message is shown with no state change. Given a notification link is older than 7 days or its signature is invalid, When it is used, Then the system rejects it with 401 and logs the attempt with no state change.
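One way to make the approve/deny links self-validating is an HMAC-signed token carrying the request ID, action, and issue time, checked against the 7-day window. This is a sketch under those assumptions, not the product's actual token format.

```python
import hashlib
import hmac
import time

LINK_TTL = 7 * 24 * 3600  # links older than 7 days are rejected

def sign_action(secret: bytes, request_id: str, action: str, issued_at: int) -> str:
    """Sign the (request, action, timestamp) triple so the link cannot be
    repurposed, e.g. an approve link cannot be replayed as a deny."""
    msg = f"{request_id}|{action}|{issued_at}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_action(secret, request_id, action, issued_at, signature, now=None):
    now = int(time.time()) if now is None else now
    expected = sign_action(secret, request_id, action, issued_at)
    if not hmac.compare_digest(expected, signature):
        return False  # invalid signature -> reject with 401, log attempt
    return now - issued_at <= LINK_TTL  # stale links also rejected
```

Idempotency ("Request already resolved") would still be enforced server-side by checking the request's current state after the signature passes.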
SLA Reminders and Auto-Expiration
Given the SLA reminder interval is configured to X hours (default 48), When a request remains pending for X hours, Then reminders are sent to approvers and an informational reminder is sent to the requester. Given the maximum pending age is configured to Y days (default 7), When a request reaches Y days without decision, Then it auto-expires with status Denied (Expired) and both requester and approvers are notified. Given a request auto-expires, When an admin views it in the console, Then the resolution shows "Expired — SLA exceeded" and includes the timestamps of creation, reminders, and expiration. Given a request is approved or denied before a scheduled reminder, When the reminder time elapses, Then no reminder is sent.
Immediate Activation and Role Blueprint Assignment on Approval
Given a pending user is approved, When the approval is recorded, Then the user status transitions to Active within 5 seconds and the user gains access on next request without re-authentication if a session exists. Given approval completes, When role assignment runs, Then the user is assigned all roles defined by the configured Role Blueprint for JIT Join and duplicates are ignored. Given role assignment encounters an error, When approval would otherwise complete, Then the account remains in Pending Approval, an error is logged, and approvers are notified to retry. Given a user was previously Denied and later re-initiates SSO, When an approver approves the new request, Then the account is created or reactivated as Active and the Role Blueprint is applied.
Compliance Audit Trail of Approval Decisions
Given any approval decision is made (Approve, Deny, Auto-Expire), When the decision is saved, Then an immutable audit record is stored with requester, approver (or system), decision, reason, channel, timestamps, approver IP, and notification link identifier. Given an admin views the audit log, When filtering by user or date range, Then all relevant approval events are listed in chronological order and can be exported to CSV. Given an auto-expired decision, When viewing its audit entry, Then the SLA policy values in effect at the time (reminder interval and max pending age) are displayed. Given two approvers act concurrently, When the first decision is recorded, Then the second is recorded as a no-op in the audit trail without altering the final state.
JIT Policy & Domains Admin UI
"As an org admin, I want a simple UI to configure domains and JIT rules so that I can manage access without engineering help."
Description

Deliver an admin console to configure JIT Join policies and manage domain claims: add/verify domains, view verification status, enable/disable JIT globally or per domain, set default Role Blueprints, toggle and configure approval requirements, and map IdP attributes to roles. Include guardrails (e.g., warnings for broad grants), inline help, and a test tool that simulates a sign-in with a sample email to preview outcomes. Restrict access to org owners/admins via RBAC and expose equivalent API endpoints for automation.

Acceptance Criteria
Domain Claim and Verification Workflow
Given I am an org owner/admin on the JIT Policy & Domains page, When I add a domain "example.com", Then the domain is created with status "Unverified" and a unique verification token is displayed with a copy action. Given a domain is "Unverified", When the verification token is correctly detected by the system, Then the status updates to "Verified", the verification timestamp and verifier are recorded, and JIT controls for that domain become enabled. Given a domain is "Unverified", When an invalid or missing token is detected after a verification check, Then the status becomes "Verification Failed" with an actionable error and retry option. Given a domain "example.com" already exists, When I attempt to add it again, Then I am prevented and see a duplicate domain error. Given multiple domains exist, When I view the domain list, Then I can filter by status (Verified/Unverified/Failed) and search by domain string.
Global and Per-Domain JIT Toggles with Precedence
Given global JIT is Disabled, When a user from any domain attempts first SSO, Then JIT creation is blocked and the simulator and logs show "Blocked by Global JIT Off". Given global JIT is Enabled and a domain is Disabled, When a user from that domain attempts first SSO, Then JIT creation is blocked and UI indicates "Domain JIT Off". Given both global and the user's domain JIT are Enabled, When the user attempts first SSO, Then JIT evaluation proceeds to approval and role mapping rules. Given I toggle global or domain JIT settings, When I save, Then the change is persisted, appears in audit logs with actor, timestamp, old/new values, and is reflected in effective status badges on the domain list.
Default Role Blueprint Assignment & Guardrails
Given a Verified domain, When I select a Default Role Blueprint and save, Then new JIT-created users from that domain are assigned that blueprint unless superseded by an attribute mapping rule. Given a Default Role Blueprint with elevated permissions, When I attempt to apply it to all users of a domain, Then a high-risk warning is shown requiring explicit confirmation before saving. Given a Default Role Blueprint previously set, When I change it, Then the change applies only to future JIT users and the UI displays a non-retroactivity notice. Given a selected blueprint has been deleted or is inactive, When I save, Then validation prevents saving and prompts me to choose a valid blueprint. Given I activate inline help for Role Blueprint settings, When opened, Then help text explains defaults vs. attribute mapping precedence and links to documentation.
Approval Requirement Configuration and Flow
Given a Verified domain, When I enable "Require approval for new users" and assign at least one approver role/group, Then the setting saves and is reflected in the domain's effective policy. Given approval is required for a domain, When a new teammate from that domain signs in via SSO for the first time, Then the user record is created in "Pending Approval" state, access is denied, and approvers receive a notification. Given a user is in "Pending Approval", When an approver approves the user, Then the user can sign in successfully and receives the configured Role Blueprint; When denied, Then subsequent sign-ins remain blocked. Given I attempt to enable approval without specifying approvers, When I save, Then validation fails with a clear message.
IdP Attribute-to-Role Mapping
Given I open Attribute Mapping, When I add a rule that maps IdP claim "department" equals "Leasing" to Role Blueprint "Leasing Agent" and place it above the default, Then users with department "Leasing" receive "Leasing Agent" on first SSO. Given multiple mapping rules exist, When a user's attributes match more than one rule, Then the topmost matching rule applies and others are ignored. Given a user's attributes match no mapping rules, When they JIT join, Then the Default Role Blueprint for their domain is applied. Given I reference a non-existent IdP claim in a rule, When I save, Then validation warns and prevents saving until corrected. Given I reorder mapping rules, When I save, Then the new priority order persists and is used by the simulator and live JIT.
JIT Sign-in Outcome Test Simulator
Given global/domain JIT settings, approval requirements, and mapping rules are configured, When I enter a sample email and optional IdP attributes into the simulator and run it, Then the tool displays the evaluated domain, rule path, and final outcome: Blocked, Requires Approval, or Allowed with Role Blueprint. Given a domain is Unverified or JIT disabled, When I simulate with an email from that domain, Then the outcome is "Blocked" with the specific reason. Given simulation is executed, When it completes, Then no users, approvals, or audit entries are created and results can be copied to clipboard. Given I change settings (toggles, mappings, default) and re-run simulation, Then the output reflects the latest saved configuration.
Admin RBAC and API Parity
Given I am an org owner/admin, When I navigate to JIT Policy & Domains, Then I can view and modify settings; Given I am not, Then the page and API return 403 and the navigation item is hidden. Given authenticated API access with proper org scope, When I call endpoints to list/add/verify domains, get/update JIT policy, manage attribute mappings, and run simulation, Then responses return 200/201 on success with JSON equal to UI data structures; duplicates return 409; unauthenticated returns 401; insufficient role returns 403. Given I update settings via API, When I refresh the UI, Then the changes are reflected, and vice versa.
Provisioning Audit & Notifications
"As an admin, I want complete visibility into JIT joins and approvals so that I can troubleshoot issues and satisfy compliance requests."
Description

Create a centralized audit trail for all JIT-related events (request, created, approved, denied, failed) capturing actor, timestamps, IdP, IP, and policy decisions. Provide filters, CSV export, and webhooks for downstream systems. Send contextual notifications to admins (new pending request, approval outcomes, domain verification changes) and to end users (pending state, approval, denial with reason). Implement configurable retention policies and ensure PII minimization consistent with privacy requirements.

Acceptance Criteria
Audit Event Coverage & Field Capture
Given a JIT Join event of type {request_initiated,user_created,approval_requested,approval_granted,approval_denied,provisioning_failed,domain_verification_changed}, when the event completes, then an audit record is appended within 1 second including: event_id, tenant_id, org_id, event_type, occurred_at (ISO 8601 UTC), actor_type, actor_user_id (if applicable), idp_provider, source_ip (truncated per privacy policy), target_user_id (if applicable), target_email_domain, role_blueprint_id (if applicable), policy_id, policy_version, decision, rule_id, request_id, and outcome_reason (optional). Given an audit record is written, when retrieved via API and UI, then all required fields are present, correctly typed, and values match the source event. Given a sustained rate of 100 JIT events per second for a tenant, when events are ingested, then no events are dropped and ordering is stable by occurred_at then event_id.
PII Minimization & Sensitive Data Handling
Given an audit record is stored, then full email addresses, names, phone numbers, and IdP tokens are not persisted; target_email_domain is stored in clear, actor_email is stored only as a salted SHA-256 hash, and source_ip is stored anonymized (IPv4 /24, IPv6 /64). Given an admin enters a denial reason, when the reason is saved, then it is limited to 256 characters and redacts email-like and phone-like patterns before storage and display. Given audit data is accessed via UI, API, CSV export, or webhooks, then the same masked/anonymized values are consistently returned and no additional PII is exposed.
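The anonymization rules above (IPv4 truncated to /24, IPv6 to /64, salted SHA-256 email hashes, redaction of email- and phone-like patterns, 256-character cap) are concrete enough to sketch; the regexes are illustrative approximations, not the product's exact patterns.

```python
import hashlib
import ipaddress
import re

def anonymize_ip(ip: str) -> str:
    """Truncate IPv4 to /24 and IPv6 to /64, per the policy above."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 64
    return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False).network_address)

def hash_email(email: str, salt: bytes) -> str:
    """Salted SHA-256 over the normalized address; never store it in clear."""
    return hashlib.sha256(salt + email.strip().lower().encode()).hexdigest()

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_reason(text: str, limit: int = 256) -> str:
    """Redact email-like and phone-like patterns, then cap at 256 chars."""
    text = EMAIL_RE.sub("[redacted-email]", text)
    text = PHONE_RE.sub("[redacted-phone]", text)
    return text[:limit]
```

Applying the same helpers at write time (rather than display time) keeps UI, API, CSV export, and webhooks consistently masked.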
Retention Policy Enforcement
Given a tenant retention setting of N days (where N is between 30 and 730), when an audit record exceeds N days since occurred_at, then it is purged irreversibly within 24 hours by a scheduled job and a purge summary entry is recorded. Given a tenant changes the retention setting, when saved, then the new policy is applied to subsequent purge cycles and is visible in settings with the effective date. Given a purge executes, when tested against records near the boundary, then records older than N days are removed and newer records are retained; purged data does not appear in UI, API, CSV, or webhook replays.
Filtering, Sorting, and Pagination
Given filters for date range, event_type, decision, idp_provider, actor_type, target_email_domain, role_blueprint_id, policy_id, request_id, and outcome (success/fail), when applied in combination (AND), then only matching audit records are returned. Given no explicit sort is chosen, when results are fetched, then they are ordered by occurred_at descending; when the sort is toggled, then occurred_at ascending is applied. Given a dataset of ≥50,000 records, when paginated via cursor with page_size in {25,50,100}, then each page responds in ≤2 seconds at p95 and page boundaries do not repeat or skip records.
CSV Export of Audit Events
Given a filtered result set up to 100,000 rows, when the user requests Export CSV, then a UTF-8 CSV with header row is generated using RFC 4180 quoting and ISO 8601 UTC timestamps, containing the same columns as displayed and respecting all active filters. Given an export would exceed 100,000 rows, when requested, then a background job is queued and upon completion an email with a time-limited (24h) secure download link is sent to the requester; the file content matches the applied filters. Given masked/anonymized fields in the UI, when exported, then the CSV reflects the same masking/anonymization and excludes prohibited PII.
Webhook Delivery for JIT Events
Given a tenant-configured webhook endpoint URL and shared secret, when a JIT event occurs, then a POST is sent within 5 seconds with a JSON payload including: event_id, event_type, occurred_at (UTC), tenant_id, request_id, decision, policy_id, policy_version, rule_id, idp_provider, source_ip (anonymized), target_user_id (if applicable), target_email_domain, role_blueprint_id (if applicable), and outcome_reason (optional), and is signed with HMAC-SHA256 in header X-TourEcho-Signature plus a timestamp header. Given delivery receives a 2xx response, then no retries occur; given a network error or 5xx/429, then retries use exponential backoff for up to 24 hours with at-least-once delivery; given a 4xx (except 429), then no retries occur. Given duplicate deliveries, when the receiver checks event_id, then duplicates are detectable (idempotent).
Admin and End-User Email Notifications
Given a new JIT join request for a claimed domain that requires approval, when the request is created, then all org admins receive a single consolidated email within 60 seconds containing requester email domain, IdP, approve/deny links, and the request_id; duplicate requests within 10 minutes are batched into one notification. Given an approval or denial action, when the decision is recorded, then admins receive an outcome email within 60 seconds; the end user receives an email: on approval (with sign-in link) or on denial (including the denial reason and support contact); on pending (if approval required) the end user is notified of the pending state. Given a domain verification status change (verified→unverified or unverified→verified), when the change occurs, then org admins receive an email within 60 seconds including prior status, new status, actor, and timestamp.

Group Mirror

Map IdP groups to TourEcho teams, offices, and listing scopes. When someone moves groups, their access, notification settings, and assignment queues update automatically. Ensures clean separation between offices and eliminates drift from ad‑hoc, manual changes.

Requirements

SCIM IdP Connectors
"As an IT admin, I want to connect our identity provider to TourEcho so that group changes are mirrored automatically without manual user management."
Description

Provide native connectors for common identity providers (Okta, Azure AD, and Google Workspace) using SCIM 2.0 and/or directory APIs to ingest users and group memberships into TourEcho. Support secure OAuth/service account authentication with least-privilege scopes, inbound event/webhook handling where available, and scheduled polling fallbacks. Enable tenant-level configuration, connection health checks, and test-sync capabilities to ensure near real-time reflection of group changes in TourEcho.

Acceptance Criteria
Okta SCIM 2.0 Provisioning — Receive Users and Groups
Given an Okta SCIM 2.0 app is configured with the tenant’s SCIM base URL and bearer token, When Okta performs an initial full push, Then TourEcho creates/updates all assigned users (externalId, email, givenName, familyName, active) and groups (displayName, members) and completes within 30 minutes for ≤10k users/≤1k groups. Given a user is deactivated in Okta, When Okta sends a PATCH to the SCIM /Users endpoint, Then the TourEcho user is marked inactive within 60 seconds and removed from all mapped groups in TourEcho. Rule: SCIM endpoints implement RFC 7644 pagination and filtering (startIndex, count, filter) and return appropriate status codes (200/201/204 success; 400/401/403/409/429/5xx errors) with problem details.
Azure AD SCIM 2.0 Provisioning — Delta and Rename Handling
Given Azure AD Enterprise App SCIM provisioning is enabled for the tenant, When a delta cycle runs, Then only changed users/groups are updated and no unchanged records are written (idempotent upserts confirmed by zero-diff logs). Given a user is soft-deleted in Azure AD, When Azure sends the deprovisioning call, Then the TourEcho user is set inactive within 60 seconds and loses all group-derived access. Given a group is renamed in Azure AD, When the next delta occurs, Then the TourEcho group displayName updates without dropping existing memberships.
Google Workspace Directory API — Read-Only Import via Service Account
Given a service account with domain‑wide delegation and scopes admin.directory.user.readonly and admin.directory.group.readonly is configured, When a scheduled sync runs every 15 minutes, Then all active users and groups are ingested and suspended users are marked inactive in TourEcho. Given a user has multiple email aliases, When the sync runs, Then TourEcho stores the primary email and retains aliases without creating duplicates. Rule: API queries respect OU/group filters; rate limits are honored with exponential backoff and no data loss.
Event/Webhook Handling with Polling Fallback
Given the IdP supports change notifications (e.g., Okta Event Hooks, Microsoft Graph change notifications, Google Pub/Sub), When a user or group membership changes, Then TourEcho reflects the change within 120 seconds of event receipt. Given events fail or are unavailable, When the connector detects 3 consecutive failures, Then it automatically falls back to polling every 15 minutes until events recover, with no duplicate updates (idempotent processing). Rule: Event delivery is verified with signed callbacks/validation tokens; retries use exponential backoff up to 6 attempts.
OAuth and Secrets — Least‑Privilege and Security
Rule: Only read‑only scopes necessary to read users and groups are requested; configuration validation fails if required scopes are missing. Rule: Access/refresh tokens and service account keys are stored encrypted at rest, rotated on demand, masked in UI/logs, and never exported via API. Given a token is revoked/expired, When the next sync or health probe runs, Then the connection status turns Red with a specific remediation error and no partial writes occur.
Tenant Configuration and Connection Health Checks
Given a tenant admin creates an IdP connection via UI/API, When Save is clicked, Then a live connectivity and scope probe runs and the connection is marked Green only if credentials, scopes, and reachability are valid; otherwise Amber/Red with error codes. Rule: Tenants can configure multiple isolated connections (e.g., Okta + Google) with independent scope filters; cross‑tenant data access is impossible (verified by multi‑tenant isolation tests). Given a connection is active, Then the health panel shows last sync time, last change applied, records scanned/changed, 7‑day error rate, and last error; these metrics are retrievable via API.
Test Sync (Dry‑Run) and Preview
Given an admin clicks Test Sync, When a dry‑run executes, Then a report is generated with counts to create/update/deactivate by entity type and top 50 diffs, with zero changes persisted to TourEcho. Rule: Dry‑run respects all configured filters and mapping rules and produces results consistent (±1%) with the subsequent real sync for the same snapshot. Given a dry‑run completes, Then the admin can download the report as CSV/JSON and a shareable link remains valid for 24 hours.
Mapping Rules Engine
"As a brokerage admin, I want configurable rules that map IdP groups to offices, teams, and listing scopes so that access is consistently derived from our directory."
Description

Implement a declarative rules engine to map IdP groups to TourEcho entities: offices, teams, roles, and listing visibility scopes. Support many-to-many mappings, pattern-based group matching (e.g., name prefixes), attribute-based filters, and default fallbacks. Allow versioned rule sets with preview and rollback, ensuring clean separation between offices and consistent permission derivation from IdP-managed sources.

Acceptance Criteria
Many-to-Many Group-to-Entity Mapping
Given an active rule set that maps IdP group "nyc-agents" => Office: NYC; Teams: [Uptown, Downtown]; Roles: [Agent]; ListingScopes: [Manhattan] And maps IdP group "vip" => Roles: [SeniorAgent]; ListingScopes: [Citywide] When a user is a member of ["nyc-agents","vip"] Then the engine assigns Office: {NYC}; Teams: {Uptown, Downtown}; Roles: {Agent, SeniorAgent}; ListingScopes: {Manhattan, Citywide} And the result contains no duplicates and is deterministic across repeated evaluations
Pattern-Based Group Matching with Token Capture
Given a rule with pattern "office-{city}" that maps to Office: {city} And IdP groups ["office-denver","office-austin","marketing"] When evaluation runs Then users in "office-denver" map to Office: Denver and users in "office-austin" map to Office: Austin And "marketing" does not match and receives no office from this rule And pattern matching is case-insensitive unless caseSensitive=true is set on the rule
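Token-capture matching of the `office-{city}` form can be implemented by compiling the pattern into a regex with named groups. A sketch under the assumptions above (case-insensitive by default, opt-in `case_sensitive`); the function names are illustrative:

```python
import re

def compile_pattern(pattern, case_sensitive=False):
    """Turn a pattern like 'office-{city}' into an anchored regex
    where each {token} becomes a named capture group."""
    parts = re.split(r"\{(\w+)\}", pattern)  # odd indices are token names
    regex = ""
    for i, part in enumerate(parts):
        regex += f"(?P<{part}>.+)" if i % 2 else re.escape(part)
    flags = 0 if case_sensitive else re.IGNORECASE
    return re.compile(f"^{regex}$", flags)

def match_group(pattern, group_name, case_sensitive=False):
    """Return captured tokens as a dict, or None when the group doesn't match."""
    m = compile_pattern(pattern, case_sensitive).match(group_name)
    return m.groupdict() if m else None
```

With this, `office-denver` yields `{"city": "denver"}`, `marketing` yields no match, and `Office-Denver` matches unless `case_sensitive=True` is set.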
Attribute-Based Rule Filters on User Claims
Given a rule requiring user.attribute.employment_status == "active" and user.attribute.department in ["Sales","Leasing"] And the rule maps Roles: [Agent] When a user meets both attribute conditions and belongs to the target IdP group Then the mapping applies and assigns Role: Agent When a user fails either attribute condition Then the mapping does not apply
Default Fallback Applied When No Rules Match
Given an active default rule with outputs: Offices: [], Teams: [], Roles: [], ListingScopes: [], Access: Denied When a user matches no non-default rules Then the engine assigns exactly the default outputs And the evaluation audit record flags default_applied=true
Versioned Rule Sets: Preview, Activate, and Rollback
Given rule set v1 is active and v2 is drafted When a preview is run for a selected cohort of users (N >= 100) Then the system returns a per-user diff (added/removed/unchanged assignments) without persisting changes When v2 is activated Then all impacted users have assignments recalculated within 2 minutes (p95) and an activation audit event is recorded When a rollback to v1 is initiated Then assignments revert to v1 within 2 minutes (p95) and a rollback audit event with actor and timestamp is recorded
Conflict Resolution and Cross-Office Separation
Given a user matches rules that would assign Offices {A,B} with different rule.priority values When evaluation runs Then the engine assigns exactly one Office based on highest rule.priority and drops lower-priority Offices And no Teams or ListingScopes from Office B are applied to a user in Office A And the conflict resolution decision is recorded in the audit with the winning rule id
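One way to realize "exactly one office wins by highest priority, with an auditable decision" is the sketch below. The match shape and the `rule_id` tie-break are assumptions for illustration, not the product's defined behavior:

```python
def resolve_office(matches):
    """Pick exactly one office: highest rule priority wins; ties break
    deterministically on rule_id. `matches` is a list of dicts like
    {"priority": int, "rule_id": str, "office": str} (hypothetical shape)."""
    winner = sorted(matches, key=lambda m: (-m["priority"], m["rule_id"]))[0]
    audit = {
        "winning_rule_id": winner["rule_id"],
        "dropped_offices": [m["office"] for m in matches if m is not winner],
    }
    return winner["office"], audit
```

The audit record carries the winning rule id so the conflict-resolution decision can be reconstructed later, as the criterion requires.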
Deterministic, Idempotent Evaluation with Audit Logging
Given the same rule set version and identical user inputs (group ids and attributes) When evaluation is executed multiple times Then outputs are identical and no duplicate assignments are created And an audit record is stored with: rule_set_version, input group ids, input user attributes hash, output assignments hash, evaluator id, and timestamp
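The input/output hashes in the audit record can be computed over canonical JSON so that identical inputs always produce identical records. A sketch with hypothetical field names matching the criterion above:

```python
import hashlib
import json

def audit_record(rule_set_version, group_ids, attributes, assignments,
                 evaluator_id, timestamp):
    """Build an evaluation audit record; canonical JSON (sorted keys)
    guarantees the same inputs hash to the same value every time."""
    def h(obj):
        canon = json.dumps(obj, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canon.encode()).hexdigest()
    return {
        "rule_set_version": rule_set_version,
        "input_group_ids": sorted(group_ids),  # order-insensitive
        "input_attributes_hash": h(attributes),
        "output_assignments_hash": h(assignments),
        "evaluator_id": evaluator_id,
        "timestamp": timestamp,
    }
```

Because group ids are sorted and hashing is canonical, re-running the same evaluation produces a byte-identical record, which is exactly the idempotency property the criterion asserts.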
Auto-Provisioning & Deprovisioning
"As an operations manager, I want user access and routing to update immediately when someone changes groups so that offices stay isolated and assignments stay accurate."
Description

Automatically create, update, and deactivate TourEcho user accounts, office/team memberships, roles, notification settings, and assignment queues based on IdP group deltas. Ensure idempotent, transactional updates with rollback on failure and a propagation latency target of under five minutes. Prevent cross-office leakage by enforcing scope boundaries at update time.

Acceptance Criteria
Provision new user from mapped IdP group
Given an active IdP user with a valid email is added to a mapped group for Office A When the Group Mirror processes the delta Then a TourEcho user account is created if none exists with profile fields (name, email, externalId) from IdP And the user is added to Office A scope and default teams per mapping And the user's role(s), notification settings, and assignment queues are set per mapping And the operation completes with end-to-end propagation latency ≤ 5 minutes from IdP change time And no access is granted to any office/team/listing outside Office A
Atomic office move via group change
Given a mapped user currently assigned to Office A and not to Office B When the user is removed from Office A mapped groups and added to Office B mapped groups within the same IdP change-set Then the user's memberships, scopes, roles, notification settings, and assignment queues are atomically switched to Office B And all Office A access, notifications, and queue routes are removed in the same transaction And there is no interval where the user has access to both Office A and Office B And the propagation latency from IdP change to final state is ≤ 5 minutes
Deprovision on removal from all mapped groups
Given a mapped TourEcho user When the user is removed from all TourEcho-mapped IdP groups or marked inactive in IdP Then the TourEcho account is set to inactive and sign-in is blocked And all office/team memberships, roles, notification settings, and assignment queue memberships are removed And no new assignments are routed to the user after deactivation And the change propagates end-to-end within ≤ 5 minutes And if the user is later re-added to mapped groups, a subsequent sync reactivates the account and restores memberships per current mappings
Idempotency and no-op behavior
Given a mapped user whose TourEcho state already matches current IdP groups When the same IdP delta is replayed or a sync runs with no effective changes Then no duplicate users, memberships, roles, notification settings, or queue assignments are created And no updates are written (record version/updatedAt remain unchanged) And the operation returns success with a no-op result
Transactional rollback on partial failure
Given a group change requiring multiple updates (user, memberships, roles, notifications, queues) When any step of the update fails Then all changes from this operation are rolled back, restoring the prior consistent state And the user's access and routing remain exactly as before the attempted change And the failure is logged with correlationId and marked for retry with exponential backoff And upon successful retry, the full change set is applied exactly once without duplicates
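The rollback-on-partial-failure behavior can be sketched as a compensating-transaction pattern: each applied step registers an undo, and a failure unwinds the stack in reverse order. The class and step shapes are illustrative assumptions, not TourEcho internals:

```python
class SyncTransaction:
    """Apply a multi-step update atomically via compensating undos.
    `steps` is a list of (apply_fn, undo_fn) pairs (hypothetical shape)."""

    def __init__(self):
        self.undo_stack = []

    def run(self, steps):
        try:
            for apply_fn, undo_fn in steps:
                apply_fn()
                self.undo_stack.append(undo_fn)  # only record undo after success
        except Exception:
            # Roll back in reverse order to restore the prior consistent state.
            while self.undo_stack:
                self.undo_stack.pop()()
            raise
```

If the third step fails, the first two are undone and the exception propagates, leaving the user's access and routing exactly as before the attempted change.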
Enforce cross-office scope boundaries
Given office memberships are mutually exclusive by configuration When a user is simultaneously present in groups mapped to two different offices Then the system rejects or resolves the conflict per precedence rules and preserves a single office scope And the user cannot view, receive notifications for, or be assigned listings outside the resolved office scope And the conflict is written to the audit log with user, conflicting groups, decision, and timestamp And no data from the excluded office is exposed at any time
Propagation latency and observability
Given normal operating conditions When 100 consecutive add, update, or remove events are processed from IdP Then the 95th percentile end-to-end latency from IdP change timestamp to reflected state in TourEcho is ≤ 5 minutes And each event emits an audit log with actor, change-set, result (success/failure), startedAt, completedAt, and correlationId And latency and outcome metrics are exposed for monitoring and alerting
Multi-Group Conflict Resolution
"As a security-conscious admin, I want clear precedence rules for users in multiple groups so that access and routing remain predictable and least-privileged."
Description

Define deterministic precedence rules when a user belongs to multiple IdP groups that map to different offices, teams, roles, or listing scopes. Support admin-configurable priority ordering, per-dimension tie-breakers, and safe defaults (e.g., most restrictive scope). Apply the same precedence to notification templates and assignment queues to avoid ambiguous routing.

Acceptance Criteria
Deterministic Resolution Using Admin Priority and Tie-Breakers
Given admin configures group priority: G1 > G2 > G3 And per-dimension tie-breakers: Office = Highest Priority, Team = Highest Priority, Role = Least Privilege And mappings: G1 -> Office: North, Team: Alpha, Role: Manager; G2 -> Office: South, Team: Beta, Role: Admin; G3 -> Office: East, Team: Gamma, Role: Agent When a user belongs to G1, G2, and G3 Then resolved Office is North And resolved Team is Alpha And resolved Role is Agent
Default Most-Restrictive Listing Scope
Given no tie-breaker is configured for Listing Scope And listing scopes are defined as sets of listing IDs per group And G1 grants listings {L1, L2, L3} And G2 grants listings {L2, L3, L4} When a user belongs to G1 and G2 Then the resolved Listing Scope equals the intersection {L2, L3} And if the intersection is empty, the resolved scope is No Access
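The most-restrictive default reduces to a set intersection over the per-group listing scopes, with the empty set meaning No Access. A minimal sketch (function name is illustrative):

```python
def resolve_listing_scope(scopes_by_group):
    """Most-restrictive default: intersect the listing-ID sets granted by
    each group. An empty result means No Access."""
    sets = [set(s) for s in scopes_by_group.values()]
    if not sets:
        return set()
    out = sets[0]
    for s in sets[1:]:
        out &= s
    return out
```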
Notification Template Precedence Mirrors Group Priority
Given admin configures group priority: G1 > G2 And G1 maps to Notification Template T_A And G2 maps to Notification Template T_B And per-dimension tie-breaker for Notifications is Highest Priority When a user belongs to G1 and G2 Then the selected Notification Template is T_A And all triggered notifications for the user use T_A
Assignment Queue Routing Is Deterministic
Given admin configures group priority: G1 > G2 And G1 maps to Assignment Queue Q_A And G2 maps to Assignment Queue Q_B And per-dimension tie-breaker for Assignment Queues is Highest Priority When a lead or showing request is routed for a user belonging to G1 and G2 Then exactly one queue is selected: Q_A And no duplicate or conflicting queue assignments occur
Fast Recalculation on Group Membership Change
Given a user currently belongs to G1 and resolved Office is North And an IdP update removes the user from G1 and adds them to G2 mapping to Office South When the change is received by TourEcho Then the resolved Office updates to South within 60 seconds And notification template and assignment queue selections re-evaluate using the new membership And access previously granted by G1 is revoked within the same SLA
Safe Defaults When Config Is Missing
Given no group priority is configured And no per-dimension tie-breakers are configured When a user belongs to multiple groups with conflicting mappings Then the system defaults to least privilege: Role = least privileged available, Listing Scope = intersection, Office/Team = none selected And notifications and assignments default to no send and no route respectively And an admin-visible warning is logged indicating defaults were applied
Drift Detection & Reconciliation
"As a compliance officer, I want drift alerts and reconciliation policies so that our TourEcho permissions cannot silently diverge from the IdP."
Description

Continuously detect and report manual changes in TourEcho that diverge from IdP-derived state. Provide policy options to block edits to IdP-managed fields, allow with warnings, or auto-revert during scheduled reconciliation. Generate actionable alerts for detected drift, including what changed, who changed it, and the recommended fix or auto-correct action.
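Detection itself amounts to a per-field diff between the IdP-derived state and the stored TourEcho state for each managed field. A sketch under loose assumptions (the key names `idp_value`/`stored_value` and the function are hypothetical; the real drift item carries the richer payload listed in the criteria below):

```python
def detect_drift(idp_state, stored_state, managed_fields):
    """Emit one drift item per managed field whose stored value has
    diverged from the IdP-derived value."""
    drift = []
    for field in managed_fields:
        if idp_state.get(field) != stored_state.get(field):
            drift.append({
                "field": field,
                "idp_value": idp_state.get(field),
                "stored_value": stored_state.get(field),
            })
    return drift
```

A reconciliation sweep with policy Auto-Revert would then set each drifted field back to its `idp_value`; policy Allow-with-Warning would only record the item and alert.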

Acceptance Criteria
Real-Time Drift Detection on IdP-Managed Memberships and Scopes
Given a principal whose office/team/listing scopes and related settings are derived from IdP groups When a manual change to any IdP-managed field (office, team membership, listing scope, notification settings, assignment queues) is saved via UI or API Then a drift item is created within 60 seconds and marked Open And the drift item contains: resourceType, resourceId, field, previousValue, newValue, actorId, actorType (user/api), channel (UI/API), timestamp (UTC ISO 8601), policyAtTimeOfChange, correlationId And the drift item is retrievable via Admin > Group Mirror > Drift and via GET /group-mirror/drift?status=open and filterable by resourceId and time range And no automatic correction is applied until the configured reconciliation schedule runs or a permitted manual admin action is taken
Policy Enforcement: Block Edits to IdP-Managed Fields
Given policy=Block for IdP-managed fields When an edit to an IdP-managed field is attempted in the UI Then the save is prevented and the user sees "Field managed by IdP. Edit blocked." And API requests receive HTTP 409 with errorCode=GM_BLOCKED_FIELD and no changes are persisted And no drift item is created; an audit event is recorded with outcome=blocked, attempted previous/new values, actorId, and channel And the rule applies consistently across UI, single-record API, and bulk API endpoints
Policy Enforcement: Allow with Warning on IdP-Managed Fields
Given policy=AllowWithWarning for IdP-managed fields When an edit to an IdP-managed field is saved via UI or API Then the change is persisted and a warning banner displays immediately: "This field is managed by your IdP and may be reverted during reconciliation." And a drift item is created within 60 seconds with status=Open and severity determined by impact (e.g., cross-office access=Critical) And alerts are sent within 2 minutes to configured channels with recommendedFix (e.g., "Update IdP group membership for <principal>" or "Revert to IdP value") And the affected record shows a visible Drift badge in list and detail views until resolved
Scheduled Reconciliation: Auto-Revert to IdP State with Summary Reporting
Given policy=AutoRevert and a reconciliation schedule is configured When the scheduled job starts Then all Open drift items on auto-revert fields created before the start time are evaluated And for each item, the TourEcho value is set to the current IdP-derived value atomically per record, cascading updates to notification settings and assignment queues to match IdP And a per-item audit event records previousValue, newValue, resolver=system, and result (success/failure) with failureReason when applicable And upon completion, an admin-visible summary shows counts (evaluated, corrected, no-op, failed) and provides downloadable CSV/JSON of failures with recommended next steps
Drift Alerts: Payload Completeness, Delivery, and De-duplication
Given a drift item is created When alerts are emitted Then messages are delivered to each configured channel (email, Slack, webhook) once per drift eventId within 2 minutes And each alert includes: eventId, resourceType, resourceId, field, previousValue, newValue, actor, policy, detectedAt, severity, recommendedFix, and deep links to view item and run reconciliation (if permitted) And duplicate alerts for the same eventId are suppressed across channels for 24 hours unless the drift changes again (new revisionId) And webhook deliveries implement exponential backoff with retries for up to 24 hours; failed deliveries are surfaced in the Admin alert center and reconciliation summary
Auditability: Provenance, Search, and Export of Drift & Reconciliation Events
Given drift detection, policy enforcement, and reconciliation activities occur When an admin queries audit logs via UI or API Then events are searchable by time range, resourceId, actor, policy, outcome, and severity And each record is immutable, time-stamped (UTC), includes who/what/when/where (IP/userAgent or clientId), and is retained for at least 365 days And audit logs and drift items are exportable as CSV/JSON and exports match the UI for the same filters
Admin Setup & Preview UI
"As an admin, I want an intuitive setup and preview experience so that I can safely configure mappings and verify outcomes before impacting production users."
Description

Deliver an admin UI to configure IdP connections and mapping rules, with a guided setup wizard, credential validation, and least-privilege scope guidance. Include a dry-run preview that simulates the impact of rules on sample users before applying, plus controls to trigger manual re-sync, view last sync time, and inspect recent changes for troubleshooting.

Acceptance Criteria
Complete IdP Connection via Guided Setup Wizard
Given I am an Org Admin starting the Group Mirror setup wizard When I progress through each step Then the Next button remains disabled until all required fields on the step are valid And Back preserves previously entered values without loss And Save Draft persists the wizard state and can be resumed from the same step And Cancel prompts a confirmation and discards only unsaved changes And contextual least‑privilege scope guidance is displayed on the credentials step
Credential Validation and Secret Handling
Given I have entered IdP credentials and selected the IdP type When I click Validate Connection Then the system attempts a connection and returns a clear Success/Failure outcome within 10 seconds And on Success, Save becomes enabled and the connection status shows Connected And on Failure, an actionable error is displayed including the IdP error code or reason without exposing secrets And secrets are masked in the UI at all times and stored encrypted at rest And no credentials are persisted until a successful validation occurs in the current session
Define and Validate Group-to-Resource Mapping Rules
Given a connected IdP When I create mapping rules from IdP group identifiers to TourEcho teams, offices, and listing scopes Then each rule requires a unique name, a valid group identifier, and at least one target (team/office/scope) And the UI prevents overlapping rules for the same group+target combination or requires explicit priority to resolve conflicts And invalid or unknown group identifiers surface a validation error before save And a Test Match tool evaluates a provided user/email and shows the rule matched and the resulting access And saved rules are versioned with timestamp, editor, and change notes
Dry-Run Preview of Mapping Impact on Sample Users
Given I have saved mapping rules and a valid IdP connection When I run a Dry-Run on up to 100 sample users selected by search or CSV upload Then the preview shows, per user, current vs proposed teams, offices, listing scopes, notification settings, and assignment queues And shows adds, removals, and unchanged counts with totals at the top And indicates conflict resolutions (e.g., multiple matching rules) and which priority applied And no changes are applied during Dry-Run and Apply Changes remains disabled until I confirm And the preview can be exported to CSV including per-user deltas and the rule ids that caused them
Manual Re-sync Control and Last Sync Status
Given mapping rules are saved and the connection is valid When I click Run Re-sync Then a sync job starts, shows an in-UI progress indicator, and prevents re-triggering for 5 minutes or until completion And upon completion, Last Sync displays ISO 8601 timestamp, duration, outcome (Success/Partial/Failed), and counts of users added/removed/updated And on failure, a View Details link navigates to the audit view with the job id pre-filtered And the Run Re-sync control is disabled when the connection is not validated
Recent Changes Audit and Troubleshooting View
Given sync jobs or rule changes have occurred When I open Recent Changes Then I see a paginated list for the last 30 days with columns: timestamp, actor (job id/user), affected user, change type (add/remove/update), source rule id, and before/after access summary And I can filter by user, change type, rule id, and date range, with results updating in under 2 seconds for datasets under 10k rows And I can export the filtered results to CSV And PII fields are masked/unmasked according to an Admin-controlled toggle with default to masked
Admin-Only Access and Least-Privilege Scope Guidance
Given a non-admin user navigates to the Group Mirror Admin UI Then access is denied with a 403 message and no configuration data is returned by the API Given an Org Admin opens Scope Guidance When an IdP type is selected Then recommended minimal scopes/policies are displayed with copy-to-clipboard and downloadable JSON/YAML samples And if configured scopes exceed recommendations, a non-blocking warning is shown before save
Audit Logs & Sync Monitoring
"As a platform owner, I want detailed audits and monitoring so that I can troubleshoot sync issues and prove compliance during reviews."
Description

Record every sync event and resulting change with timestamps, source identifiers, before/after values, and actor attribution for compliance. Provide dashboards and webhooks for failure alerts, retry with exponential backoff, dead-letter queues for unrecoverable records, and exportable reports to CSV/JSON. Surface key metrics such as sync latency, error rates, and drift incidents.

Acceptance Criteria
Audit Log Completeness for Group Sync Events
Given an IdP group membership change is received When the Group Mirror applies the change Then exactly one audit log record is written capturing: event_id (UUID), occurred_at and processed_at (UTC ISO-8601), source_system, source_identifier, target_entity_type and id, operation (create|update|delete), before_values, after_values, actor_type and actor_id, outcome (success|failure), and correlation_id And the record is available via UI and API within 10 seconds of processing And the record is immutable and cannot be altered by any role And operation latency and outcome are linked to the record for metrics aggregation
Actor and Source Attribution in Audit Logs
Given a sync is triggered by an IdP system or by a human admin override When attribution is recorded Then actor_type reflects system|user and actor_id resolves to the IdP app client_id or user_id And source_system and source_identifier correspond to the upstream IdP and its event id And attempted writes without actor attribution are rejected and logged as failures with reason=missing_actor
Sync Monitoring Dashboard for Latency, Error Rate, and Drift
Given the last 60 minutes of sync activity When a user opens the Sync Monitoring dashboard Then the dashboard displays p50, p95, and p99 sync latency, rolling error rate (%), and drift incident counts by tenant, office, and team And charts auto-refresh at least every 60 seconds And clicking a metric bucket opens a list of constituent events with links to their audit records And drift incidents are defined as TourEcho state deviating from IdP-derived state for more than 60 seconds and are deduplicated per entity And thresholds for warning/critical states are configurable per tenant
Sync Retry with Exponential Backoff and Dead-Letter Queue
Given a transient error occurs during a sync operation When retries are executed Then the system retries with exponential backoff (1m, 2m, 4m, 8m, 16m) up to 5 attempts or until success And a permanent error class or max attempts exceeded routes the record to the Dead-Letter Queue with failure_reason and last_error_at recorded And DLQ items are visible in the dashboard, exportable, and support manual replay that creates a new audit record linked via parent_event_id And idempotency keys prevent duplicate application of the same upstream event across retries and replays
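The 1m/2m/4m/8m/16m schedule and DLQ routing can be sketched as below. `TransientError`, the handler signature, and the in-memory DLQ are assumptions for illustration; a real worker would sleep between attempts and persist the DLQ:

```python
class TransientError(Exception):
    """Retryable failure class (hypothetical)."""

def backoff_schedule(base_minutes=1, max_attempts=5):
    """Doubling delays: [1, 2, 4, 8, 16] minutes for 5 attempts."""
    return [base_minutes * 2 ** i for i in range(max_attempts)]

def process_with_retry(event, handler, dlq, schedule=None):
    """Retry transient failures per the schedule; after the final attempt,
    route the record to the dead-letter queue with its failure reason."""
    schedule = schedule or backoff_schedule()
    last = None
    for delay in schedule:
        try:
            return handler(event)
        except TransientError as e:
            last = e  # a real worker would sleep `delay` minutes here
    dlq.append({"event": event, "failure_reason": str(last),
                "attempts": len(schedule)})
```

Pairing this with an idempotency key on the upstream event id prevents a replayed DLQ item from being applied twice, as the criterion requires.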
Failure Alert Webhooks Delivery and Security
Given a sync failure or DLQ placement occurs When alerting is configured for the tenant Then the system sends an HTTPS POST to registered webhook endpoints within 15 seconds containing event_type, tenant_id, entity_ref, failure_reason, severity, and correlation_id And requests are signed with HMAC-SHA256 using a tenant-specific secret and include a timestamp header to prevent replay for more than 5 minutes And delivery attempts, statuses, and latencies are recorded and viewable per endpoint And disabled or failing endpoints do not block core sync processing
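The HMAC-SHA256 signing with a timestamp-bound replay window can be sketched as follows. The header names and the `timestamp.body` message layout are illustrative assumptions, not a defined TourEcho wire format:

```python
import hashlib
import hmac
import json
import time

def sign_webhook(payload, secret, ts=None):
    """Sign the canonical body plus a timestamp with a tenant-specific secret."""
    ts = ts or int(time.time())
    body = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    sig = hmac.new(secret, f"{ts}.{body}".encode(), hashlib.sha256).hexdigest()
    return body, {"X-Timestamp": str(ts), "X-Signature": sig}

def verify_webhook(body, headers, secret, now=None, max_skew=300):
    """Reject stale timestamps (replay window, 5 minutes here) and
    compare signatures in constant time."""
    now = now or int(time.time())
    ts = int(headers["X-Timestamp"])
    if abs(now - ts) > max_skew:
        return False
    expected = hmac.new(secret, f"{ts}.{body}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])
```

Binding the timestamp into the signed message means an attacker cannot reuse a captured signature with a fresh timestamp, which is what limits replay to the configured window.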
Export Audit and Metrics Reports to CSV/JSON with Filters
Given a user selects a time range, scope, and filters (source_system, actor, outcome, operation) When they request an export Then the system produces downloadable CSV and NDJSON with a documented header/field schema and UTC timestamps And exports larger than 100,000 rows are delivered asynchronously with a signed URL that expires after 24 hours And exported row counts exactly match the number of records matching the filters And exports complete within 5 minutes for up to 1,000,000 rows or provide progress with chunked delivery

Audit Ledger

Tamper‑evident logs for sign‑ins, role grants, SCIM events, and admin actions, with exports to SIEM/CSV and retention policies. Broker‑owners and Compliance Admins get clear, defensible audit trails and alerting on risky changes, helping them pass audits and resolve disputes quickly.

Requirements

Immutable Event Logging Core
"As a Compliance Admin, I want every security‑relevant action recorded immutably so that I can produce a defensible audit trail during audits and disputes."
Description

Implement an append-only, tamper‑evident event ledger that records sign‑ins, role grants/revocations, SCIM provisioning/deprovisioning, admin configuration changes, API token lifecycle, and export activity. Each entry includes tenant/org ID, actor ID and type (user, SCIM, API), target entity, event type and outcome, RFC3339 UTC timestamp, monotonic sequence, source IP, user agent, correlation/request ID, and structured before/after diffs where applicable. Events are hash‑chained and sealed using per‑tenant keys via KMS; writes are idempotent and durable with at‑least‑once guarantees. Enforce multi‑tenant isolation and NTP drift protections, provide backfill/replay tooling, and expose cursor/pagination APIs supporting up to 10k events/min per tenant.

Acceptance Criteria
Idempotent Append and Durability Guarantees
- Given an event with idempotency key X for tenant T, When the event is submitted multiple times concurrently or sequentially, Then exactly one ledger entry is stored and all duplicates are acknowledged without creating additional entries. - Given a write acknowledgement for event E, When the write node crashes and recovers, Then E remains present and readable in the ledger (durable at-least-once). - Given any attempt to update or delete an existing ledger entry, When invoked through internal APIs or direct storage paths, Then the operation is rejected and an auditable error is emitted.
Per‑Tenant Hash Chain and KMS Sealing
- Given events with sequences S..S+k for tenant T, When verifying the ledger, Then each record contains prev_hash and hash and the chain validates end-to-end for T. - Given a single-byte modification to any stored record, When verification runs, Then hash-chain validation fails and identifies the first corrupted record. - Given per-tenant KMS keys with recorded key_id/version, When keys rotate, Then new events are sealed with the new version and historical records remain verifiable with their recorded key_id/version. - Given an offline verifier and exported records, When verification is run, Then signatures/hashes validate without requiring access to live infrastructure.
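The `prev_hash`/`hash` chain and its verification can be sketched as plain SHA-256 chaining over canonical JSON. This illustrates only the chain property; KMS sealing, key rotation, and per-tenant sequence handling are omitted:

```python
import hashlib
import json

GENESIS = "0" * 64  # chain anchor for an empty ledger

def entry_hash(prev_hash, record):
    """Hash the previous hash plus the canonical JSON of the record."""
    canon = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + canon).encode()).hexdigest()

def append(ledger, record):
    prev = ledger[-1]["hash"] if ledger else GENESIS
    ledger.append({"record": record, "prev_hash": prev,
                   "hash": entry_hash(prev, record)})

def verify(ledger):
    """Return (True, None) if the chain validates end-to-end, otherwise
    (False, index) identifying the first corrupted record."""
    prev = GENESIS
    for i, e in enumerate(ledger):
        if e["prev_hash"] != prev or e["hash"] != entry_hash(prev, e["record"]):
            return False, i
        prev = e["hash"]
    return True, None
```

Because each hash covers the previous one, a single-byte change to any stored record breaks validation at that record and every record after it, and `verify` reports the first.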
Field Completeness and Structured Diffs
- For each event type (sign-in, role grant, role revoke, SCIM provision, SCIM deprovision, admin config change, API token create/revoke/rotate, export activity), Then the record includes: tenant_id, actor_id, actor_type ∈ {user, SCIM, API}, target_entity {type,id}, event_type, outcome ∈ {success, failure}, RFC3339 UTC timestamp, per-tenant monotonic sequence, source_ip, user_agent (if available), correlation_id, and before/after diff JSON for mutating actions. - Given missing required fields on ingestion, When validation runs, Then the event is rejected with a precise validation error and no ledger entry is created. - Given concurrent writers for tenant T, When N events are ingested, Then sequences are strictly increasing by 1 with no duplicates or reversals. - Given client-provided timestamps, When recorded, Then server assigns authoritative RFC3339 UTC timestamp; client_timestamp is stored separately without affecting ordering.
Multi‑Tenant Isolation and Access Control
- Given an API client scoped to tenant T, When listing or exporting events, Then only events where tenant_id==T are returned; cross-tenant queries return 403 and are logged. - Given a cursor issued for tenant T, When it is presented by tenant U, Then the request is rejected and no events are leaked. - Given internal backfill/replay jobs, When executed, Then they require explicit tenant scope and cannot read or write across tenants in a single run. - Given per-tenant sequence spaces, When sequences are inspected across tenants, Then values are independent with no collisions or interleaving.
High‑Throughput Ingestion and Cursor Pagination API
- Given sustained load of 10,000 events/min for a single tenant over 15 minutes, When measured, Then p95 write latency ≤ 250 ms and error rate < 0.1%. - Given the List Events API, When requesting pages of size up to 1,000, Then items are strictly ordered by sequence, pages are consistent, and an opaque cursor is returned for the next page. - Given concurrent writes during pagination, When the client paginates using returned cursors, Then no events are skipped or duplicated; cursors remain valid for ≥24 hours. - Given filters (time range, event_type, actor_type, outcome), When applied, Then results respect filters without breaking ordering guarantees. - Given backpressure, When rate limits are exceeded, Then the API returns 429 with retry-after and no partial pages.
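An opaque, tenant-bound cursor can be sketched as an encoded `(tenant, last sequence)` pair that is rejected when presented by the wrong tenant. This is a simplified illustration; a production cursor should also be signed (e.g., HMAC) so it cannot be forged or tampered with:

```python
import base64
import json

def encode_cursor(tenant_id, last_sequence):
    """Produce an opaque pagination cursor bound to the issuing tenant."""
    raw = json.dumps({"t": tenant_id, "seq": last_sequence}).encode()
    return base64.urlsafe_b64encode(raw).decode()

def decode_cursor(cursor, expected_tenant):
    """Recover the resume position; reject cursors issued to another tenant."""
    data = json.loads(base64.urlsafe_b64decode(cursor))
    if data["t"] != expected_tenant:
        raise PermissionError("cursor issued for a different tenant")
    return data["seq"]
```

Resuming strictly after the stored sequence is what keeps pagination skip-free and duplicate-free even while concurrent writes append new events.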
Backfill and Replay Tooling
- Given a source event dump containing overlaps and duplicates, When backfill runs, Then exactly one ledger entry is created per unique idempotency key and original relative order is preserved within a tenant. - Given a dry-run flag, When executed, Then the tool outputs a summary (by event_type) of would-insert, would-skip, would-fail counts without writing to the ledger. - Given a failed run due to transient errors, When rerun, Then the tool resumes from the last committed cursor without duplicating entries. - Given completion, When the reconciliation report is generated, Then it lists inserted/skipped/failed counts and IDs and hash-chain verification status. - Given backfill for tenant T, When run, Then it cannot read or write other tenants’ data.
NTP Drift Protections and Clock Health
- Given NTP-synchronized write nodes, When measuring clock drift, Then drift remains ≤ 100 ms; if drift exceeds threshold, writes are refused and a health alert is emitted. - Given client timestamps that differ from server time by > 5 minutes, When recorded, Then the event is accepted with server timestamp, client_timestamp stored, and drift_flag=true for analytics/alerts. - Given a mixed-node cluster, When a node with unhealthy clock attempts to participate, Then it is quarantined from write quorum until healthy.
Risk Rule Engine & Alerting
"As a Broker‑Owner, I want real‑time alerts on risky changes so that I can intervene quickly and prevent unauthorized access or policy violations."
Description

Provide configurable rules to detect risky changes (e.g., owner role granted, MFA disabled, failed‑login spikes, SCIM deprovision failures, audit chain verification errors). Support severity levels, thresholds, time windows, suppression, deduplication, and routing to email, Slack/Teams, webhooks, and PagerDuty. Alerts include rich context with deep links to the Audit Explorer; P95 alert latency <60s. Maintain an alert audit log, acknowledgment workflow, per‑tenant rule templates, and maintenance windows.

Acceptance Criteria
Owner Role Granted Alert with Rich Context
Given tenant A enables rule "Owner Role Granted" with severity "Critical" and routing to Email and Slack When any user grants the Owner role to another user via UI or API Then exactly 1 alert is created for that event And the alert payload includes: alert_id, tenant_id, rule_id, rule_version, severity, occurred_at, actor_user_id, target_user_id, actor_ip, channel (UI|API|SCIM), correlation_id, and a deep_link to the Audit Explorer filtered by correlation_id And the alert is recorded in the alert audit log with status "Active"
Failed-Login Spike Rule with Thresholds, Time Window, and Dedup
Given rule "Failed-Login Spike" is configured with threshold=20, window=5m, scope=per user, severity=High, dedup_window=10m, suppression=10m When 19 failed login events for user U occur within 5m Then no alert is emitted When 25 failed login events for user U occur within 4m Then 1 alert is emitted and marked Active And no additional alerts for the same fingerprint (tenant+rule+user) are emitted during the 10m dedup/suppression window And after suppression expires, if the condition still holds, a new alert is emitted with occurrence_count incremented
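The threshold/window/suppression mechanics in this criterion can be sketched as a sliding-window counter. Times are epoch seconds and the class shape is illustrative; the defaults mirror the configured values above (threshold=20, window=5m, suppression=10m):

```python
from collections import deque

class SpikeRule:
    """Sliding-window threshold rule with an alert-suppression window."""

    def __init__(self, threshold=20, window=300, suppression=600):
        self.threshold = threshold
        self.window = window
        self.suppression = suppression
        self.events = deque()
        self.last_alert = None

    def record(self, ts):
        """Record one failed login at `ts`; return True when an alert fires."""
        self.events.append(ts)
        while self.events and self.events[0] <= ts - self.window:
            self.events.popleft()  # drop events outside the window
        if len(self.events) >= self.threshold:
            if self.last_alert is None or ts - self.last_alert >= self.suppression:
                self.last_alert = ts
                return True
        return False
```

Nineteen events in the window fire nothing; the twentieth fires exactly one alert, and further events are suppressed until the dedup window expires and the condition holds again.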
Per-Tenant Rule Templates and Overrides
Given default rule templates exist for Owner Role Granted, MFA Disabled, Failed-Login Spike, SCIM Deprovision Failure, and Audit Chain Verification Error at template_version=v1 When tenant A creates a rule from template "MFA Disabled" Then a tenant-scoped rule is created referencing template_id and template_version And tenant can override severity, thresholds, time window, and routing without modifying the global template And updating the global template to v2 does not change tenant A's existing rule until tenant explicitly applies the latest version And the API/UI exposes a diff between the tenant rule and its template
Multi-Channel Routing: Email, Slack/Teams, Webhook, PagerDuty
Given tenant A has configured routes: Email(security@broker.com), Slack(webhook URL), Teams(connector URL), Webhook(https://example.com/hook), PagerDuty(routing_key) When an alert is created Then all enabled destinations receive a delivery attempt within 5s of alert creation And Email subject includes "[Tenant A][<Severity>][<Rule Name>]" and body includes the deep_link And Slack/Teams messages include rule name, severity, and deep_link And Webhook receives a POST with JSON {alert_id, tenant_id, rule_id, severity, occurred_at, deep_link} and gets a 2xx response And PagerDuty event is triggered with correct routing key and severity mapping And failures on any destination are retried with exponential backoff up to 3 times without blocking other destinations And each destination delivery outcome (success/failure/retry_count) is written to the alert audit log
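The per-destination retry behavior above could look like the sketch below, assuming each channel is wrapped in a send callable; retries on one destination never block the others, and each outcome is written to the audit log (all names here are hypothetical):

```python
import time

# Hypothetical sketch: fan an alert out to every destination independently,
# retrying each with exponential backoff up to 3 times.
def deliver(alert: dict, destinations: dict, audit_log: list,
            max_retries: int = 3, base_delay: float = 0.0) -> None:
    for name, send in destinations.items():
        retries = 0
        while True:
            try:
                send(alert)
                audit_log.append({"dest": name, "ok": True, "retries": retries})
                break
            except Exception:
                if retries >= max_retries:
                    audit_log.append({"dest": name, "ok": False, "retries": retries})
                    break
                time.sleep(base_delay * (2 ** retries))  # exponential backoff
                retries += 1

calls = {"n": 0}
def flaky(alert):            # stand-in webhook: fails twice, then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError
log = []
deliver({"alert_id": 1}, {"email": lambda a: None, "webhook": flaky}, log)
```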
Alert Latency SLO: P95 Under 60 Seconds
Given a load test generates 1000 alertable events evenly over 10 minutes across 10 tenants with Email and Webhook routing enabled When measuring end-to-end latency from event_ingested_at to first destination accepted_at Then the latency P95 is <= 60s And the test fails if P95 > 60s And latency metrics are exported with labels {tenant_id, rule_id}
Acknowledgment Workflow and Alert Audit Log
Given an alert is Active When a Compliance Admin acknowledges the alert via UI or API with assignee and note Then the alert status changes to Acknowledged and displays assignee and acked_at And the alert audit log records the acknowledgment with actor, timestamp, method, and note And duplicate alerts with the same fingerprint are suppressed while the alert remains Acknowledged When the alert is Unacknowledged Then the status change is recorded in the alert audit log and suppression is lifted
Maintenance Windows Suppression and Post-Window Handling
Given tenant A defines a maintenance window Saturdays 01:00-03:00 UTC with reason "IdP updates" When events matching any risk rule occur during the window Then no alerts are delivered to any destinations And each suppressed candidate is recorded with status "Suppressed by maintenance window" in the alert audit log When the maintenance window ends and the risky condition persists Then a single summary alert is emitted containing the number of suppressed candidates and a deep_link to the Audit Explorer pre-filtered to the window timeframe
SIEM & CSV Export Pipelines
"As a Security Engineer, I want to export normalized audit logs to our SIEM so that our central monitoring and incident response stay comprehensive and automated."
Description

Enable on‑demand and scheduled exports of audit events to CSV download and to external SIEMs (Splunk HEC, Datadog Logs, Azure Log Analytics, Google Chronicle, Sumo Logic) via HTTPS, Syslog over TLS, and cloud storage drops (S3/GCS/Azure Blob). Support incremental exports with resumable cursors, schema versioning, gzip compression, checksums, optional PGP encryption, exponential backoff/retries, and signed webhook deliveries. Provide per‑tenant field mapping/normalization, export health dashboards, and admin notifications on failures.

Acceptance Criteria
On‑Demand CSV Export with Compression & Integrity
Given an authorized Compliance Admin selects a date/time range, event types, and schema version V, and opts for gzip compression, SHA-256 checksum, and optional PGP encryption with a provided public key When they request an on-demand CSV export Then the generated file contains only events within the selected scope and tenant And columns and header exactly match schema version V with correct ordering and formats (timestamps ISO-8601 UTC, booleans true/false) And the file is CSV RFC4180 compliant with proper quoting and escaping And if gzip was selected the artifact is compressed (.csv.gz); otherwise .csv And a SHA-256 checksum file is produced and matches the artifact And if a PGP key was provided, the artifact is PGP-encrypted and decrypts successfully with the corresponding private key And the export action is recorded in the audit log with requester, scope, schema version, and artifact identifiers
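The compression-and-integrity steps above can be sketched with the standard library (a simplified sketch: the csv module handles RFC4180-style quoting, and the SHA-256 checksum is computed over the compressed artifact; PGP encryption is omitted here):

```python
import csv, gzip, hashlib, io

# Hypothetical sketch of the on-demand CSV artifact: write the rows with
# RFC4180-compliant quoting, gzip-compress, and checksum the result.
def build_artifact(rows, fieldnames):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    compressed = gzip.compress(buf.getvalue().encode("utf-8"))
    checksum = hashlib.sha256(compressed).hexdigest()
    return compressed, checksum

artifact, digest = build_artifact(
    [{"event_id": "e1", "occurred_at": "2024-01-01T00:00:00Z"}],
    ["event_id", "occurred_at"],
)
```

A verifier recomputes the SHA-256 over the downloaded bytes and compares it to the published checksum before decompressing.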
Scheduled HTTPS SIEM Exports (Splunk/Datadog/Azure/Chronicle/Sumo)
Given a destination is configured with endpoint URL, credentials (e.g., HEC token, API key, workspace/shared key), TLS verification enabled, schedule (e.g., every 5 minutes), and incremental mode When the schedule runs Then only new events since the last successful cursor are transformed using the tenant’s field mappings and sent in batches no larger than 500 events or 5 MB, whichever comes first And each request is transmitted over TLS 1.2+ with required destination-specific auth headers; HTTP 2xx responses mark the batch as delivered per destination semantics And on 429/5xx/network errors, the export retries with exponential backoff up to 5 attempts and then surfaces a failure without advancing the cursor And on non-retryable 4xx errors (except 429), the destination is marked failed and alerts are generated And successfully delivered events are not duplicated across runs and the cursor advances exactly to the last acknowledged event
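The dual batch cap (500 events or 5 MB, whichever comes first) could be implemented as a simple generator (a sketch; serialized size is approximated with the JSON encoding of each event):

```python
import json

# Hypothetical sketch of the batch cap: flush when either the event count
# or the serialized payload size limit would be exceeded.
MAX_EVENTS, MAX_BYTES = 500, 5 * 1024 * 1024

def batches(events):
    batch, size = [], 0
    for ev in events:
        ev_bytes = len(json.dumps(ev).encode("utf-8"))
        if batch and (len(batch) >= MAX_EVENTS or size + ev_bytes > MAX_BYTES):
            yield batch
            batch, size = [], 0
        batch.append(ev)
        size += ev_bytes
    if batch:
        yield batch  # flush the final partial batch

out = list(batches({"id": i} for i in range(1200)))
```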
Syslog over TLS Export to External SIEM
Given a destination is configured with host, port, TLS CA/cert, protocol (TCP), and RFC5424 formatting with JSON payload normalization When an export run occurs Then messages are emitted as RFC5424 structured syslog over TLS 1.2+ with verified certificates And each message contains normalized fields per tenant mapping and includes schema version and event id for de-duplication And if the connection drops mid-run, the system reconnects with exponential backoff and resumes from the last unacknowledged cursor without losing or duplicating events
Cloud Storage Drop to S3/GCS/Azure Blob
Given a destination is configured with bucket/container, path template including {tenant_id}/{yyyy}/{MM}/{dd}/, and options for gzip, checksum, and optional PGP encryption When the scheduled export runs Then one or more files are written for the time window containing only the scoped events and conforming to the selected schema version And if gzip is enabled the objects use .csv.gz; otherwise .csv And a corresponding .sha256 checksum object is written and matches the content And if PGP is enabled, the objects are PGP-encrypted and decrypt successfully with the configured private key And on partial failures, reruns do not duplicate previously successful objects and resume from the last cursor
Incremental Exports with Resumable Cursors
Given exports are configured in incremental mode with per-destination cursors persisted When a run is interrupted before completion Then a subsequent run resumes from the last acknowledged cursor without gaps and without re-sending acknowledged events And operators can manually reset the cursor to a specific ISO-8601 timestamp T; the next run exports events with created_at >= T And cursor movements and resets are recorded in the audit log with actor, old value, new value, and reason
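The resume-without-gaps property above follows from advancing the cursor only past acknowledged events, as in this sketch (names are hypothetical; `send` stands in for a destination client returning an acknowledgment):

```python
# Hypothetical sketch of a resumable per-destination cursor: events are
# exported in created_at order and the cursor advances only after each
# event is acknowledged, so an interrupted run resumes without gaps.
def run_export(events, cursor, send):
    """events: dicts sorted by created_at; returns the new cursor."""
    pending = [e for e in events if e["created_at"] > cursor]
    for ev in pending:
        if not send(ev):           # destination did not acknowledge
            return cursor          # unchanged: this event will be re-sent
        cursor = ev["created_at"]  # advance only past acknowledged events
    return cursor

events = [{"created_at": f"2024-01-01T00:00:0{i}Z", "id": i} for i in range(5)]
sent = []
def ack_first_three(ev):
    sent.append(ev["id"])
    return ev["id"] < 3            # simulate a failure on the 4th event

cur = run_export(events, "", ack_first_three)       # interrupted run
cur2 = run_export(events, cur, lambda e: True)      # resumed run
```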
Per-Tenant Field Mapping & Normalization
Given a tenant has defined field mappings (renames, drops, constant injection, type coercions) associated to a target schema version When an export executes Then exported records strictly reflect the tenant mapping, with unmapped required fields populated per default rules and disallowed mappings rejected with clear validation errors And numeric, boolean, and timestamp fields are correctly typed/formatted; nullability rules are respected And schema version mismatches cause the export to fail fast with an actionable error that references the missing/extra fields
Export Health Dashboard & Failure Notifications
Given a Broker-Owner or Compliance Admin When they open the Export Health dashboard Then they can see, per destination, last success time, current lag in minutes, last error message (if any), last run status, and retry counts for the past 24 hours And they can filter by destination type, status, and time range; access is restricted to authorized roles When a destination exceeds a lag threshold of 15 minutes or a job exhausts retries and fails Then an email and in-app notification is generated within 2 minutes including destination, error code/message, last cursor, and suggested remediation And a webhook notification is delivered to configured endpoints, signed with HMAC-SHA256 and including a timestamp; on failure it retries with exponential backoff up to 5 times and is de-duplicated for identical incidents within 1 hour
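The HMAC-SHA256 signing scheme above could work as in this sketch: the signature covers `"<timestamp>.<payload>"` so receivers can check both authenticity and freshness (header names and the tolerance window are illustrative assumptions):

```python
import hashlib, hmac, json

# Hypothetical sketch of the signed failure-notification webhook.
def sign(secret: bytes, payload: dict, ts: int) -> dict:
    body = json.dumps(payload, separators=(",", ":"))
    mac = hmac.new(secret, f"{ts}.{body}".encode(), hashlib.sha256).hexdigest()
    return {"X-Signature": mac, "X-Timestamp": str(ts), "body": body}

def verify(secret: bytes, msg: dict, now: int, tolerance: int = 300) -> bool:
    ts = int(msg["X-Timestamp"])
    if abs(now - ts) > tolerance:   # reject stale or replayed deliveries
        return False
    expected = hmac.new(secret, f"{ts}.{msg['body']}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["X-Signature"])  # constant-time

secret = b"webhook-secret"
msg = sign(secret, {"destination": "splunk", "error": "timeout"}, 1_700_000_000)
```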
Retention & Legal Hold Policies
"As a Compliance Admin, I want configurable retention and legal holds so that we meet regulatory obligations without retaining data longer than necessary."
Description

Allow broker‑owners and compliance admins to configure retention by event class with sensible defaults and enforced minimums. Implement scheduled purge jobs that respect legal holds and produce deletion receipts recorded in the ledger. Support organization‑wide legal holds with reasons and expiry. Surface storage usage and projected costs. Ensure policies honor privacy laws (e.g., GDPR/CCPA) via data minimization/redaction rules while preserving evidentiary integrity.

Acceptance Criteria
Configure Retention by Event Class
Given I am a Broker-Owner or Compliance Admin with org-level permissions When I open the Retention Policies settings Then I see all audit event classes with their default retention values and the enforced minimums displayed And if I enter a value below the enforced minimum, the save action is blocked and I see a validation message specifying the minimum And if I enter a valid value, the policy saves successfully (HTTP 200), a new policy version is created, and the change takes effect within 5 minutes And the change is recorded in the Audit Ledger with actor_id, org_id, event_class, old_value, new_value, reason (optional), timestamp, ip_address, and signature/hash
Scheduled Purge Jobs Respect Holds and Log Receipts
Given retention policy values exist and at least one item exceeds its retention, and at least one item is under a legal hold When the scheduled purge job runs at the configured window (default 02:00 org local time) Then items exceeding retention and not on legal hold are deleted from primary storage and any replicas/indexes And items on legal hold are skipped And a deletion receipt is appended to the Audit Ledger containing batch_id, event_classes, count_deleted, count_skipped_legal_hold, started_at, completed_at, actor=system, and a cryptographic digest of deleted record identifiers And the deletion receipt is exportable to CSV and forwarded to the configured SIEM within 5 minutes And job metrics (duration, success/failure) are visible in the admin UI and via API And failures are retried up to 3 times with exponential backoff and surfaced as alerts
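The "cryptographic digest of deleted record identifiers" in the receipt above could be computed as in this sketch (field names are hypothetical; sorting makes the digest independent of deletion order):

```python
import hashlib

# Hypothetical sketch of the deletion receipt: a SHA-256 digest over the
# sorted identifiers of deleted records, committing to exactly which
# records were purged without retaining their contents.
def deletion_receipt(batch_id, deleted_ids, skipped_legal_hold):
    digest = hashlib.sha256("\n".join(sorted(deleted_ids)).encode()).hexdigest()
    return {
        "batch_id": batch_id,
        "count_deleted": len(deleted_ids),
        "count_skipped_legal_hold": skipped_legal_hold,
        "deleted_ids_sha256": digest,
    }

r1 = deletion_receipt("b-001", ["ev-2", "ev-1"], 3)
r2 = deletion_receipt("b-001", ["ev-1", "ev-2"], 3)  # order-independent
```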
Organization-wide Legal Hold with Reason and Expiry
Given I create a legal hold with a required reason and optional expiry date/time When I apply the hold at organization scope (all or specified event classes) Then all covered items are marked non-deletable regardless of retention And the hold is visible in the UI with id, reason, creator, scope, created_at, expires_at (nullable), and status (Active/Expired) And attempting to delete or purge a held item results in a logged skip with reason=legal_hold And upon hold expiry, the status changes to Expired and eligible items are deleted on the next purge run And creating, modifying, or releasing a hold writes an Audit Ledger entry and notifies Compliance Admins by email/in-app within 2 minutes And only Broker-Owners or Compliance Admins can create/modify/release holds; other roles receive HTTP 403
Storage Usage and Cost Projection Visibility
Given current ingestion rates and retention policies When a Compliance Admin views Storage & Costs Then storage used is displayed by event class and time window (7/30/90 days) with totals and last updated timestamp And projected storage and monthly cost for 30/90/365 days are calculated using current rates and configured retention And a what-if panel allows adjusting retention values and shows projected storage/cost deltas before saving And data can be exported to CSV; export includes org_id, event_class, bytes_used, projected_cost_30_90_365, generated_at And figures refresh at least hourly; if stale (>60 minutes), a warning badge is shown And access is restricted to Broker-Owner and Compliance Admin roles
Privacy Compliance via Data Minimization/Redaction
Given the org has GDPR/CCPA compliance enabled and PII fields are tagged in the schema When purge or export operations process audit records beyond their retention window Then raw PII fields are redacted/minimized per policy while preserving timestamps, event class, role, and cryptographic chain integrity And a Redaction event is written to the Audit Ledger with record_id, fields_redacted, method, timestamp, actor=system And exports contain no raw PII fields for records older than the configured minimization threshold And automated tests verify that all tagged PII fields are excluded from exports and that ledger hash continuity remains valid before and after redaction
Risky Policy Change Alerts
Given a retention policy change reduces retention by ≥30 days or removes a legal hold When the change is saved Then the user must enter a required justification (minimum 10 characters) And an alert is sent to Compliance Admins via email/in-app within 2 minutes containing actor, change details, justification, and a deep link And the change is flagged as Risky in the Audit Ledger And an API endpoint /alerts returns the alert within 2 minutes of the change
Admin Audit Explorer & Evidence Export
"As a Compliance Admin, I want to search and package audit evidence easily so that I can answer auditor requests in minutes instead of days."
Description

Deliver a web UI to search and review audit events with fast filters (date range, actor, event type, outcome, IP, listing/property, correlation ID). Provide column chooser, saved views, and side‑by‑side diffs for configuration changes. Offer timeline visualization and one‑click Evidence Package export (PDF + CSV + hash manifest) for a selected window, including chain proof and integrity attestations. Enforce RBAC so only Broker‑Owners and Compliance Admins can access PII‑bearing fields, and audit all access.

Acceptance Criteria
RBAC and PII Visibility Enforcement
Given a signed-in user without the Broker-Owner or Compliance Admin role When they open Admin Audit Explorer or query its API Then PII-bearing fields (email, phone, IP last octet, device fingerprints, notes) are redacted or omitted And any attempt to reveal PII (column chooser, API fields) is blocked with 403 and a denial event is logged Given a signed-in user with Broker-Owner or Compliance Admin role When they open Admin Audit Explorer or query its API Then PII-bearing fields are visible And a confidentiality banner is shown and acknowledged And all access is logged with actor, role, timestamp, filter set, fields selected, and record count Given any role When they attempt to access Admin Audit Explorer without authentication Then access is denied with 401 and an audit denial event is recorded
Multi-filter Search with Performance SLA
Given the audit store contains at least 5,000,000 events over the last 180 days When a user applies filters for date range, actor, event type, outcome, IP (supports CIDR and prefix match), listing/property ID, and correlation ID (combined with AND semantics) Then the first page of results returns within 2 seconds at p95 and 4 seconds at p99 And total matched count is displayed And correlation IDs in rows are clickable to auto-apply the correlation filter And clearing all filters resets the result set and performance remains within the SLA
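The IP filter's CIDR and prefix modes above could be built on the standard `ipaddress` module, as in this sketch (the dispatch on `/` is an assumption about how the two modes are distinguished):

```python
import ipaddress

# Hypothetical sketch of the IP filter: accept either a CIDR block or a
# dotted prefix and return a predicate usable in the search pipeline.
def ip_filter(expr: str):
    if "/" in expr:                           # CIDR match, e.g. 10.0.0.0/24
        net = ipaddress.ip_network(expr, strict=False)
        return lambda ip: ipaddress.ip_address(ip) in net
    return lambda ip: ip.startswith(expr)     # prefix match, e.g. "192.168."

cidr = ip_filter("10.0.0.0/24")
prefix = ip_filter("192.168.")
```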
Column Chooser and Saved Views Persistence
Given a user customizes visible columns (including reorder, show/hide) in the Audit Explorer table When they apply the column chooser Then the table updates immediately to reflect the selection And the selection persists for that user across sessions for the active saved view Given a user saves a view with a unique name, filters, sort, and column configuration When they load the saved view later Then the exact filters, sort, and columns are restored And they can update or delete the saved view And saving a view that includes PII columns requires an eligible role and records an audit event
Side-by-Side Diffs for Configuration Changes
Given a configuration-change audit event that contains before and after payloads When a user opens the event details Then a side-by-side diff is shown highlighting added, removed, and modified fields And large objects are collapsible and searchable within the diff And copy-to-clipboard for changed paths and values is available And fields designated as PII are masked according to RBAC rules in both panes
Timeline Visualization and Interaction
Given a result set produced by the current filters When the user opens the Timeline view Then a histogram of event counts is displayed with dynamic bucket sizes to keep <= 500 bars And brushing/zooming the timeline updates the table filters to the selected time window And panning/zooming interactions render within 300 ms at p95 for up to 1,000,000 matching events And hovering a bar shows the time range and count tooltips
One-Click Evidence Package Export with Integrity Attestations
Given a user has applied filters defining a time window and scope When they click Export Evidence Package Then a ZIP is generated within 60 seconds for up to 250,000 events and made available for download And the ZIP contains: (1) EvidenceSummary.pdf (filters, window, actor, counts, event-type breakdown), (2) Events.csv (raw events with selected and mandatory fields), (3) manifest.json (file names, sizes, SHA-256 hashes, total event count), (4) chain-proof.txt (ledger checkpoint/sequence, Merkle root or equivalent, signing certificate chain, timestamp authority attestation) And the UI displays the manifest SHA-256 of the ZIP and logs the export action And attempting export without eligible role for PII yields a redacted CSV/PDF and logs a PII-redaction notice
Access to Explorer and Exports is Fully Audited
Given any user views the Audit Explorer, changes filters, loads/saves views, or initiates an export When the action completes or fails Then an audit event is recorded including actor, role, IP, user agent, action type, parameters (filters, view name, selected columns), result counts or file hashes, and outcome (success/failure) with error code if applicable And these meta-events are queryable in the Explorer under event category "admin.audit.access"
Chain Integrity Verification & Tamper Detection
"As a Security Auditor, I want cryptographic proofs of log integrity so that I can verify no tampering occurred across the retention window."
Description

Continuously verify hash chains and sealed segments; expose verification status in UI and via API. Generate periodic Merkle roots per tenant and anchor proofs to an external timestamping service for independent verification. Detect and alert on gaps, out‑of‑order writes, or checksum mismatches; support re‑sealing after migrations/compaction. Manage cryptographic materials with cloud KMS, per‑tenant key separation, rotation schedules, and documented disaster‑recovery procedures validated quarterly.
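The hash-chain continuity check described above can be sketched as follows (a minimal model: each entry stores the hash of its predecessor plus its payload, and verification recomputes the chain from genesis; names are illustrative):

```python
import hashlib

# Hypothetical sketch of hash-chain verification and tamper detection.
def entry_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def append(chain: list, payload: str) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"payload": payload, "prev": prev,
                  "hash": entry_hash(prev, payload)})

def verify(chain: list):
    """Return the index of the first suspect entry, or None if intact."""
    prev = "genesis"
    for i, entry in enumerate(chain):
        if entry["prev"] != prev or entry["hash"] != entry_hash(prev, entry["payload"]):
            return i
        prev = entry["hash"]
    return None

chain = []
for p in ["login", "role_granted", "mfa_disabled"]:
    append(chain, p)
ok = verify(chain)                  # intact chain
chain[1]["payload"] = "tampered"    # simulate tampering
bad = verify(chain)                 # first suspect index
```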

Acceptance Criteria
Real-time Ledger Verification Status (UI & API)
Given a tenant ledger with new entries, When the verifier runs on a 60-second interval, Then hash continuity from genesis to current head and all sealed segments modified in the last 24 hours are validated and last_verified_at is updated within 120 seconds of the latest append. Given an authorized Broker-Owner or Compliance Admin, When they open Audit > Integrity, Then they see per-tenant status (Healthy/Degraded/Failed), head_height, last_verified_at, last_error_code (nullable), and last_anchor_id. Given an API client with a tenant-scoped token, When it GETs /api/v1/audit/integrity, Then it receives 200 with JSON including fields: status, head_height, last_verified_at, last_error_code, last_anchor_id; And the response is cacheable for up to 30 seconds. Given an API call without valid auth or with a wrong-tenant token, When it requests the endpoint, Then it receives 401 (unauthenticated) or 403 (forbidden) respectively. Given a verification failure, When the UI loads, Then the status is Failed and a human-readable cause and correlation_id are displayed and retrievable via API.
Hourly Merkle Root Generation and External Timestamp Anchoring
Given the top of each hour (UTC), When the interval closes, Then a Merkle root is computed for all tenant ledger entries with timestamps in that hour and persisted with root_id and root_hash. Given a valid root, When anchoring is attempted, Then a timestamping receipt (anchor_id, anchored_at, authority) is obtained from an external timestamping service within 15 minutes and stored. Given anchoring succeeds, When a user requests the proof via GET /api/v1/audit/anchors/{root_id}, Then the response includes root_hash, interval_start, interval_end, receipt, and a verification_instructions URI that enables independent validation against the authority. Given anchoring fails transiently, When retries occur, Then the system retries with exponential backoff for up to 24 hours and emits a Warning alert; If still failing after 24 hours, a Critical alert is emitted. Given an anchored root, When independently verified using the published receipt and authority, Then verification succeeds and matches the stored root_hash.
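The hourly root computation above could follow the classic Merkle construction (a sketch; duplicating the last node on odd-sized levels is one common convention, assumed here rather than specified by the source):

```python
import hashlib

# Hypothetical sketch of the hourly Merkle root: leaf hashes are paired and
# re-hashed level by level until a single root remains, which is what gets
# anchored with the external timestamping authority.
def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves) -> bytes:
    level = [sha256(leaf) for leaf in leaves]
    if not level:
        return sha256(b"")
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd count
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

entries = [b"event-1", b"event-2", b"event-3"]
root = merkle_root(entries)
same = merkle_root(list(entries))                        # deterministic
changed = merkle_root([b"event-1", b"event-X", b"event-3"])
```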
Tamper Detection for Gaps, Out-of-Order Writes, and Checksum Mismatches
Given a ledger segment with an injected gap, out-of-order index, or checksum mismatch, When the verifier scans, Then it flags the segment as suspect within 2 minutes and sets tenant integrity status to Failed. Given a tamper event is detected, When alerting triggers, Then a Critical alert is sent to configured channels (UI banner, email, webhook) and an AuditIncident record is created with type and first_detected_at. Given a tamper event is open, When additional writes are attempted to the affected segment, Then the system prevents sealing further blocks in that segment and routes new writes to a quarantine segment or pauses appends per policy, logging the action. Given a SIEM sink is configured, When a tamper event occurs, Then a normalized event with correlation_id and details is exported within 60 seconds.
Re-sealing After Log Migration/Compaction
Given a planned compaction that rewrites segments, When compaction completes, Then each new compacted segment is sealed with a seal that includes the prior segment’s terminal hash and a compaction_manifest_id. Given pre-compaction proofs, When validated post-compaction, Then they continue to verify against their original anchored Merkle roots (backward-compatibility preserved). Given the post-compaction chain, When end-to-end verification runs, Then continuity across pre- and post-compaction segments passes with no missing indices and matching aggregate checksums. Given API consumers, When they query segment metadata, Then they can retrieve compaction_manifest_id and prev_terminal_hash for auditability.
Per-Tenant KMS Key Management and Rotation
Given a new tenant is provisioned, When the first seal operation occurs, Then a dedicated KMS key (HSM-backed) is created or assigned with an IAM policy scoped to that tenant and key material never leaves KMS. Given routine key rotation scheduled every 90 days (or manual trigger), When rotation executes, Then subsequent seal operations use the new key version while historical segments remain verifiable with prior versions; rotation_start and rotation_end are logged. Given insufficient KMS permissions or key disablement, When a seal is attempted, Then the write fails closed, a Critical alert is emitted, and no unsealed segment is left in an indeterminate state. Given a Compliance Admin, When they request /api/v1/audit/keys/metadata, Then they can view per-tenant key_id (redacted), key_state, last_rotated_at, next_rotation_due, and recent key events without exposing private material.
Disaster Recovery and Quarterly Validation Evidence
Given a documented DR runbook, When the quarterly DR exercise is executed, Then the team restores the audit ledger and keys in a recovery environment and validates chain continuity end-to-end against the latest anchored roots. Given the DR test, When measured, Then RTO is ≤ 4 hours and RPO is ≤ 15 minutes for the audit ledger, and actuals are recorded. Given completion of the DR exercise, When evidence is published, Then a signed attestation, test logs, and verification report are stored and accessible in the UI and via /api/v1/audit/dr/evidence for Compliance Admins. Given failure to meet targets, When results are recorded, Then a Fail status is logged with corrective actions tracked and a follow-up test is scheduled within 30 days.

Breakglass Keys

Time‑boxed emergency access when the IdP is down. Requires approver sign‑off, auto‑expires, and logs every action for review. Keeps showings and seller visibility running during outages without weakening long‑term security posture.

Requirements

Dual-Approver Breakglass Workflow
"As a listing agent, I want to request emergency access that requires approver sign-off so that I can keep showings moving during an IdP outage without bypassing our controls."
Description

Provide an out-of-band emergency access request-and-approval flow that functions when the identity provider (IdP) is unavailable. A requester initiates a breakglass request specifying reason and desired duration; the system requires two distinct approvers (e.g., broker-owner and security admin) or a configurable approver quorum. Approvals can be granted via secure channels (email/SMS/voice with OTP) independent of IdP, and are time-bound with automatic expiration if not approved within a configured window. All approvals include captured metadata (approver identity, channel, time, IP/device), prevent self-approval, and block approval if the requester is an approver. The workflow integrates with TourEcho’s org/office hierarchy and respects tenant-level policies.

Acceptance Criteria
Initiate Breakglass Request During IdP Outage
Given the IdP is unavailable and the requester is an active user within a tenant When the requester submits a breakglass request with a reason and desired duration Then the system validates required fields, enforces tenant min/max duration, and creates the request in Pending status with an approval window timestamp And the request is scoped to the requester’s org/office per tenant policy And no access elevation is granted until the approver quorum is met When required fields are missing or duration exceeds policy limits Then the submission is rejected with actionable validation errors and no request is created
Dual-Approver Quorum Enforcement
Given the tenant policy requires a quorum of two approvers from eligible roles When two distinct approvers (not the requester) approve the request within the approval window Then the request transitions to Approved and breakglass access is activated When only one approval is recorded or duplicate approvals come from the same identity across any channel Then the request remains Pending and no access is granted When the tenant configures a quorum N of eligible approvers Then access is granted only after N distinct eligible approvers have approved, and attempts beyond N are recorded but not required
Out-of-Band Approval via Email/SMS/Voice with OTP
Given an eligible approver receives a notification via email, SMS, or voice independent of the IdP When the approver follows the link or IVR prompt and enters a valid OTP within the OTP validity window Then the approval is recorded with the verified channel, and the approver identity is bound to that channel When the OTP is invalid, expired, or retry limits are exceeded Then the approval attempt is rejected and logged, and the link/IVR session cannot be reused And approval links/tokens are single-use and expire per tenant policy
Time-Bound Access and Automatic Expiration
Given a request is Pending with an approval window When the approval window elapses without meeting the quorum Then the request auto-expires to Expired and cannot be approved thereafter Given a request is Approved with an access duration When the access duration elapses Then all breakglass tokens/sessions are revoked within the revocation SLA, and access is removed When a requester attempts to extend a live breakglass session Then extension is disallowed and requires a new request and approvals subject to policy limits
Approval Metadata Capture and Audit Logging
Given any action occurs on a breakglass request (creation, notification, approval, denial, expiration, revocation) When the action is processed Then the system records an immutable audit event with actor identity, role, channel, timestamp (UTC), IP, device/user-agent, request scope, and outcome And audit logs are queryable by tenant admins by time range, request ID, requester, and approver And logs are exportable in structured format and include tamper-evident checksums or signatures
Conflict-of-Interest and Policy Eligibility Checks
Given the requester belongs to an approver-eligible role When they attempt to approve their own request via any channel Then the system blocks the approval with a policy error and logs the attempt When an approver attempts to approve a request scoped outside their org/office eligibility Then the approval is rejected per tenant hierarchy policy And approver uniqueness is enforced across channels to prevent the same identity counting twice toward quorum
Least-Privilege Access Scope During Breakglass
Given a request is Approved When breakglass access is activated Then the requester receives only the minimum permissions defined by tenant policy to keep showings and seller visibility operational, scoped to the specified org/office And administrative and security configuration capabilities remain inaccessible And all UI/API responses include a breakglass context flag, and a visible UI banner indicates emergency access is active
Time-Boxed Ephemeral Sessions
"As a listing agent, I want a temporary session that auto-expires so that I can schedule showings and capture feedback safely during an outage."
Description

Issue device-bound, non-renewable emergency sessions with a configurable TTL (e.g., 30–180 minutes) that operate without live IdP calls. Sessions are represented by KMS-signed tokens (e.g., JWT) validated offline at the API gateway and stored server-side for revocation. Bind tokens to device fingerprint and originating IP range, display a countdown timer, and warn users before expiry. Prevent privilege escalation, disable refresh, and enforce automatic logout at expiration. Provide safe degradation for scheduling and feedback capture while preventing access to administrative endpoints.
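The offline validation path above could look like this sketch. HMAC-SHA256 stands in for the KMS signature so the example is self-contained; the claim names (exp, jti, device_fp) mirror the description but the token format itself is illustrative:

```python
import base64, hashlib, hmac, json

# Hypothetical sketch of offline breakglass-token validation: signature
# check, expiry check, server-side revocation check, and device binding.
def mint(key: bytes, claims: dict) -> str:
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def validate(key: bytes, token: str, now: int, device_fp: str,
             revoked: set) -> bool:
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False                            # bad signature
    claims = json.loads(base64.urlsafe_b64decode(body))
    return (claims["exp"] > now                 # not expired
            and claims["jti"] not in revoked    # not revoked server-side
            and claims["device_fp"] == device_fp)  # device-bound

key = b"kms-stand-in"
tok = mint(key, {"exp": 1000, "jti": "j1", "device_fp": "fp-abc"})
```

Note the validation needs no network call at all, matching the zero-IdP-calls requirement; only revocation relies on server-side state.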

Acceptance Criteria
Issue Ephemeral Session Within Configured TTL Without IdP
Given IdP is unreachable and a breakglass session issuance is requested with TTL X When X is between 30 and 180 minutes (inclusive) Then the session is issued with exp = now + X, a KMS-signed token (e.g., JWT) containing exp, jti, and binding claims And the API gateway validates signature and exp offline with zero IdP calls during validation (verified via telemetry) And if X is outside [30,180], issuance is rejected with 422 and error "ttl_out_of_range" And the server persists a revocation record keyed by jti
Enforce Device and IP Binding
Given a valid breakglass token bound to device fingerprint F and IP range R at issuance When requests originate with matching F and source IP within R Then requests are authorized When F does not match or source IP is outside R Then responses are 401 Unauthorized with reason "device_or_ip_mismatch" and the event is audited And attempts from a different browser profile or device result in the same denial
Non-Renewable Session and Automatic Logout on Expiry
Given an active breakglass session with expiry time T Then no refresh token is issued and any renewal/refresh request returns 400 with error "refresh_disabled" When server time reaches T Then subsequent API calls return 401 Unauthorized within 10 seconds and the client is logged out And the expired token cannot be used to obtain a new session
Safe Degradation: Permit Scheduling/Feedback; Block Administrative Endpoints
Given a user is authenticated via a breakglass session When calling scheduling endpoints (e.g., create/list/update/cancel showings) and feedback capture endpoints Then requests succeed and meet p95 latency ≤ 500 ms under normal load When calling administrative or privileged endpoints (e.g., user/role management, org settings, billing, exports, webhooks) Then responses are 403 Forbidden and no side effects occur And privilege elevation APIs are not discoverable to breakglass sessions (e.g., hidden from OpenAPI/links)
Server-Side Revocation Takes Effect Within 60 Seconds
Given a breakglass token with jti J is revoked server-side at time R Then within 60 seconds of R, all requests using J receive 401 Unauthorized with reason "revoked" And J cannot be reactivated; a new session must be issued And revocation persists across service restarts and cache flushes
User Countdown Timer and Pre-Expiry Warnings
Given an active breakglass session with TTL X Then the UI displays a persistent countdown of remaining time with accuracy ±5 seconds When remaining time reaches 5 minutes and 1 minute thresholds Then the user receives an in-app warning and the event is logged When the countdown reaches 0 Then the user is force-logged out and shown an expiration message with next-step guidance
Comprehensive Audit Logging for Breakglass Sessions
Given any breakglass session lifecycle event (issuance, API request, warning, revocation, expiration, logout) Then an immutable audit log entry is recorded within 60 seconds including timestamp, user ID, jti, device fingerprint hash, originating IP, endpoint, outcome, and reason And logs are queryable within 1 minute by jti and user ID and persist across service restarts And logs do not include token secrets beyond non-sensitive identifiers (e.g., jti)
Least-Privilege Breakglass Scopes
"As a security admin, I want to restrict breakglass access to essential scopes so that emergency use cannot be abused to perform high-risk actions."
Description

Define and enforce a minimal permission set available under breakglass, limited to core operational actions (e.g., create/modify showings, view showing calendar, capture QR feedback, view seller visibility). Explicitly exclude high-risk capabilities (role/admin changes, billing, data export, API keys). Policies are configurable per tenant and per office, mapped to existing TourEcho roles, and enforced consistently at the API and UI layers. Include scope-based feature flags to hide restricted UI and return permission-aware errors from APIs.

Acceptance Criteria
Core Actions Allowed, High-Risk Actions Denied Under Breakglass
Given an active breakglass session for user U with the tenant-approved breakglass scope When U performs core actions: create showing, modify showing, view showing calendar, submit QR feedback, and view seller visibility Then each allowed action succeeds with HTTP 2xx (API) and visible success state (UI), and no additional permissions are granted And when U attempts high-risk actions: role/admin changes, billing access, data export, API key generation, or organization settings Then related UI controls are not rendered and direct navigation shows a permission-aware screen And API calls to corresponding endpoints return 403 Forbidden with error="insufficient_scope" and include a request_id; no state changes occur
API-Level Enforcement and Error Contract for Breakglass Tokens
Given an access token marked breakglass=true carrying scopes [bg.showings.read, bg.showings.write, bg.feedback.write, bg.seller.read] When calling allowed endpoints (/showings, /calendar, /feedback, /seller-visibility) with the token Then responses return HTTP 2xx and actions complete as requested And when calling restricted endpoints (/admin/*, /billing/*, /exports/*, /api-keys/*, /roles/*) Then responses return HTTP 403 with JSON body containing fields: error="insufficient_scope", required_scope (string), and request_id (string) And response bodies do not leak restricted data and no side effects are applied
Scope-Based Feature Flags Hide and Disable Restricted UI
Given a user in a breakglass session When the web app renders the navigation and detail pages Then restricted modules (Admin, Billing, Data Export, API Keys) are not rendered in the DOM and are absent from navigation And attempting to deep-link to a restricted route displays a permission-aware message and does not render underlying components And restricted actions triggered via keyboard shortcuts or context menus are disabled and show a non-blocking permission tooltip
Tenant and Office Policy Configuration with Effective Scope Intersection
Given tenant T defines a baseline breakglass policy allowing [bg.showings.read, bg.showings.write, bg.feedback.write, bg.seller.read] And office O under tenant T defines an office policy allowing [bg.showings.read, bg.feedback.write, bg.seller.read] When an approver grants breakglass to a user in office O Then the user's effective scopes equal the intersection: [bg.showings.read, bg.feedback.write, bg.seller.read] And attempting an action that requires bg.showings.write results in HTTP 403 (API) and a permission-aware UI block (UI) And after the user's token is refreshed, any subsequent policy change is reflected in effective scopes for new requests
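The intersection rule in the criterion above is small enough to state directly. A minimal sketch, assuming set-valued scope policies and a permission-aware error contract matching the spec's fields (`effective_scopes` and `authorize` are hypothetical names):

```python
TENANT_POLICY = {"bg.showings.read", "bg.showings.write", "bg.feedback.write", "bg.seller.read"}
OFFICE_POLICY = {"bg.showings.read", "bg.feedback.write", "bg.seller.read"}

def effective_scopes(tenant_policy, office_policy=None):
    # An office policy narrows, never widens: effective = tenant ∩ office.
    if office_policy is None:
        return set(tenant_policy)
    return set(tenant_policy) & set(office_policy)

def authorize(required_scope, granted_scopes):
    # Permission-aware error contract: 403 with error + required_scope on denial.
    if required_scope in granted_scopes:
        return {"status": 200}
    return {"status": 403, "error": "insufficient_scope", "required_scope": required_scope}
```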
Mapping Breakglass Scopes to Existing TourEcho Roles
Given users A (Agent), M (Office Manager), and B (Broker Owner) in the same office with identical breakglass approval And tenant policy allows [bg.showings.read, bg.showings.write, bg.feedback.write, bg.seller.read] When breakglass is activated for A, M, and B Then each user's effective scopes equal (policy ∩ that role's normal capabilities), never exceeding the policy And A can modify showings only for listings permitted by the Agent role constraints; M and B cannot perform role/admin changes And any attempt by any role to use a capability outside the effective scopes results in HTTP 403 (API) and hidden/disabled UI (UI)
Consistent Enforcement Across Web and Mobile Clients
Given a user in a breakglass session using both web and mobile clients When attempting restricted actions (admin, billing, export, API keys) via deep links or UI controls Then both clients hide the controls by default and display a permission-aware message on access attempts And API calls originating from both clients for restricted actions return 403 with error="insufficient_scope" And allowed core actions behave identically and succeed on both clients
Immutable Breakglass Audit Trail
"As a compliance officer, I want an immutable log of all breakglass activity so that I can verify appropriate use and satisfy audit requirements."
Description

Capture a tamper-evident, end-to-end audit of breakglass usage: requests, approver decisions, session issuance, revocations, and every API/UI action taken during emergency sessions. Store logs in append-only, write-once storage (e.g., WORM/Object Lock) with hash-chaining and clock synchronization. Provide searchable dashboards, per-session summaries, and export to SIEM/webhooks with configurable redaction for PII. Generate a post-incident report including actors, timeline, actions taken, data touched, and policy variances to support compliance review.
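The hash-chaining mentioned above can be illustrated with a small in-memory chain; in production the records would land in WORM/Object Lock storage, but the tamper-evidence property is the same. This is a sketch, not the TourEcho implementation:

```python
import hashlib
import json

class AuditChain:
    """Append-only, hash-chained log: each record commits to its predecessor's hash,
    so any in-place edit breaks verification from that point forward."""

    GENESIS = "0" * 64

    def __init__(self):
        self.events = []

    def append(self, event: dict) -> dict:
        prev = self.events[-1]["event_hash"] if self.events else self.GENESIS
        record = {**event, "prev_event_hash": prev}
        canonical = json.dumps(record, sort_keys=True).encode()
        record["event_hash"] = hashlib.sha256(canonical).hexdigest()
        self.events.append(record)
        return record

    def verify(self) -> bool:
        prev = self.GENESIS
        for rec in self.events:
            if rec["prev_event_hash"] != prev:
                return False
            body = {k: v for k, v in rec.items() if k != "event_hash"}
            canonical = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(canonical).hexdigest() != rec["event_hash"]:
                return False
            prev = rec["event_hash"]
        return True
```

An integrity verification job would run `verify()` over stored ranges and emit an `integrity_alert` event on the first mismatch.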

Acceptance Criteria
Append-Only WORM Enforcement for Breakglass Logs
Given Object Lock Compliance mode is enabled on the log bucket and a retention policy of at least 365 days is configured When a log event is written Then the object is created with Object Lock retention and legal hold disabled, and the write includes event_hash and prev_event_hash metadata Given any principal attempts to modify or delete a log object before retention expiry When the operation is executed Then the storage API denies the request and the denial is recorded as a tamper_attempt event
Hash-Chaining and Clock Synchronization Integrity Checks
Given a contiguous sequence of log events linked by prev_event_hash When integrity verification is run Then recomputed hashes match stored hashes end-to-end and any mismatch fails verification and emits an integrity_alert event Given services are synchronized via NTP When timestamp drift is measured across components Then maximum drift between services is 2 seconds or less, and event timestamps are UTC and non-decreasing within a session
Complete Coverage of Breakglass Lifecycle Events
Given a breakglass session flows from request to approval/denial, issuance, usage, and revocation When the flow is executed end-to-end Then 100% of the following events are recorded with required fields: request_created, approval_granted/denied, session_issued, session_revoked, and every API/UI action during the session, each including actor_id, role, timestamp, ip, user_agent, action, resource, outcome, reason (where applicable), session_id, and correlation_id Given an event is missing a required field When validation runs Then the write is rejected and an error is logged without dropping previously accepted events
Searchable Dashboards and Per-Session Summary
Given 30 days of logs are indexed When an AuditViewer queries by session_id, actor_id, action type, or time range Then results return within 3 seconds for queries matching 10,000 events or fewer, and counts/facets match the underlying log store Given a session is selected When viewing its summary Then the view shows approvers, start/end/duration, integrity status, action count, top resources, anomalies, and export options (JSON, CSV, PDF), and no edit controls are present Given a user without AuditViewer role When accessing dashboards Then access is denied and the attempt is logged
Configurable PII Redaction on SIEM/Webhook Export
Given a redaction policy that masks [email, phone, name] and hashes [actor_id] When exporting events to SIEM and webhooks Then those fields are redacted or hashed per policy and non-PII fields remain unchanged Given network instability When delivery to a destination fails Then the system retries up to 5 times with exponential backoff and includes an idempotency key to prevent duplicates Given a webhook delivery When the receiver validates authenticity Then the payload includes an HMAC-SHA256 signature header derived from a shared secret
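The redaction-then-sign export path in this criterion could look like the following sketch, using stdlib HMAC-SHA256; the header name and secret provisioning are assumptions, not a defined TourEcho contract:

```python
import hashlib
import hmac
import json

SHARED_SECRET = b"per-destination-webhook-secret"  # illustrative; provisioned per destination

def redact(event: dict, mask_fields, hash_fields) -> dict:
    # Apply the tenant's redaction policy: mask some fields, hash others, pass the rest.
    out = {}
    for key, value in event.items():
        if key in mask_fields:
            out[key] = "***"
        elif key in hash_fields:
            out[key] = hashlib.sha256(str(value).encode()).hexdigest()
        else:
            out[key] = value
    return out

def sign_payload(payload: dict):
    # Canonical JSON body plus an HMAC-SHA256 signature (sent in a header, e.g. X-Signature).
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return body, sig

def receiver_verifies(body: bytes, sig: str) -> bool:
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```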
Post-Incident Report Generation and Integrity
Given a breakglass session ends or is revoked When report generation is triggered Then a report is produced within 5 minutes including actors, timeline, actions taken, data touched (resource identifiers), policy variances, and integrity verification result Given the report is generated When it is stored Then the report and its content hash are written to WORM storage and linked to the session_id, and the report is available for download as JSON and PDF
Real-Time Breakglass Alerts & Controls
"As an operations lead, I want real-time alerts and guardrails around breakglass usage so that I can detect and stop suspicious activity quickly."
Description

Send immediate notifications on request, approval, issuance, and revocation via email, SMS, and Slack. Provide an operations dashboard showing active sessions, countdowns, and requester/approver details. Enforce anomaly controls: per-tenant rate limiting, concurrent session caps, geo/IP restrictions, cooldown periods, and automatic escalation if thresholds are exceeded. Surface in-app banners indicating emergency mode and provide one-click terminate-all for designated responders.

Acceptance Criteria
Multi-channel Breakglass Notification Flow
Given a breakglass request is submitted, when the system records the request, then send email, SMS, and Slack notifications to designated responders within 15 seconds (95th percentile) including tenant, requester, reason, requested scope, requested duration, unique event ID, and dashboard link. Given a breakglass request is approved, when approval is recorded, then send email/SMS/Slack to requester and responders within 15 seconds including approver, effective start, expiry, and event ID. Given a breakglass session becomes active, when issuance occurs, then notify the security channel within 15 seconds with session details (tenant, requester, scope, expiry). Given a breakglass session is revoked or auto-expires, when revocation/expiry is recorded, then notify requester and responders within 15 seconds with reason and final timestamp. Given any notification attempt fails, when a channel returns an error, then retry up to 3 times with exponential backoff, log the error with code and correlation ID, and mark channel status in the audit log; after retries, create a 'Notification Degraded' event.
Operations Dashboard for Active Sessions
Given an authorized user (roles: Ops Viewer, Incident Responder, Security Admin) opens the dashboard, when data loads, then display active sessions with columns: tenant, requester, approver, start (UTC), expires at (UTC), remaining countdown (mm:ss), scope, status, escalation state, originating IP, and geo. Given the dashboard is open, when 5 seconds elapse, then refresh data without full page reload and update countdown timers every second. Given the user applies filters, when filter values change, then update results client-side within 500 ms and persist filters in the URL. Given a session row is clicked, when the user selects details, then open the session detail with full audit trail within 2 seconds. Given an unauthorized user attempts access, when the route is hit, then return 403 and do not leak any session metadata. Given visible rows exist, when the user clicks Export, then download a CSV of the filtered set within 3 seconds.
Per-Tenant Breakglass Rate Limiting
Rule: Default per-tenant limit is max 3 breakglass requests created within any rolling 60-minute window; limit is tenant-configurable between 1 and 10 via admin settings. Given a tenant exceeds its configured limit, when a new request is submitted, then reject with HTTP 429 BG_RATE_LIMIT_EXCEEDED, include remaining wait time, do not create a record, and write an anomaly log entry. Given a limit breach occurs, when the rejection is issued, then send an escalation notification to the tenant’s security channel within 15 seconds. Rule: Rate limiting metrics (accepted, rejected, p95 latencies) are emitted to monitoring with tenant and environment labels.
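The rolling 60-minute window in this rule implies a sliding-window counter rather than fixed buckets. A minimal per-tenant sketch (class and field names are illustrative):

```python
import time
from collections import defaultdict, deque

class TenantRateLimiter:
    """At most `limit` breakglass requests per tenant within any rolling
    `window_s`-second window; rejections carry the remaining wait time."""

    def __init__(self, limit: int = 3, window_s: int = 3600):
        assert 1 <= limit <= 10  # tenant-configurable bounds from the spec
        self.limit = limit
        self.window_s = window_s
        self.hits = defaultdict(deque)  # tenant -> timestamps of accepted requests

    def try_acquire(self, tenant: str, now=None):
        now = time.time() if now is None else now
        q = self.hits[tenant]
        while q and now - q[0] >= self.window_s:  # evict entries outside the window
            q.popleft()
        if len(q) >= self.limit:
            wait = self.window_s - (now - q[0])
            return {"status": 429, "error": "BG_RATE_LIMIT_EXCEEDED",
                    "retry_after_s": round(wait)}
        q.append(now)
        return {"status": 201}
```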
Concurrent Breakglass Session Cap
Rule: Default maximum concurrent active sessions per tenant is 2; cap is configurable between 1 and 5. Given a tenant is at its cap, when an approver attempts to approve or issue another session, then block with HTTP 409 BG_CONCURRENCY_CAP_REACHED, keep the request in Pending, and display which sessions count toward the cap. Given a cap block occurs, when the block is recorded, then send an escalation notification with active-session list and a terminate-all link to responders within 15 seconds. Given active sessions change, when one ends, then automatically reevaluate pending requests and notify requester that approval may proceed.
Geo/IP and Cooldown Enforcement
Rule: Tenant administrators can configure allowed countries and CIDR ranges for breakglass requests and usage; entries support temporary allows with TTL up to 60 minutes. Given a request or session activity originates from a disallowed geo or IP, when validation runs, then block the action with HTTP 403 BG_GEO_IP_BLOCKED, log source IP, geo, ASN, and trigger a security notification. Rule: After any breakglass session for a requester ends (revocation or expiry), a cooldown prevents that requester from creating a new request for 30 minutes; the window is configurable 5–60 minutes. Given a requester is in cooldown, when a new request is submitted, then reject with HTTP 429 BG_COOLDOWN_ACTIVE and display remaining cooldown time. Given a Security Admin overrides cooldown, when override is granted, then require justification text, log the override, and notify the security channel within 15 seconds.
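The cooldown rule above reduces to a simple window check against the requester's last session end. A sketch under the spec's 5-60 minute configurable range (function names are hypothetical):

```python
def cooldown_remaining(last_session_end: float, now: float, cooldown_s: int = 1800) -> float:
    """Seconds before the requester may open a new breakglass request (0 if none)."""
    assert 300 <= cooldown_s <= 3600  # configurable 5-60 minutes per the spec
    return max(0.0, last_session_end + cooldown_s - now)

def check_new_request(last_session_end: float, now: float, cooldown_s: int = 1800) -> dict:
    remaining = cooldown_remaining(last_session_end, now, cooldown_s)
    if remaining > 0:
        return {"status": 429, "error": "BG_COOLDOWN_ACTIVE", "remaining_s": round(remaining)}
    return {"status": 201}
```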
In-App Emergency Mode Banner & Terminate-All Control
Given a tenant has at least one active breakglass session, when any authenticated user from that tenant loads the app, then display a persistent in-app banner within 3 seconds stating "Emergency access active", showing remaining time and a link to the dashboard; banner meets WCAG AA contrast and is not dismissible while sessions remain active. Given no active sessions exist, when the state changes to zero, then remove the banner within 3 seconds across the app. Given a user has Incident Responder or Security Admin role, when viewing the dashboard, then show a "Terminate all active sessions" control. Given the terminate-all control is used, when the user confirms with a reason, then revoke all active sessions for the tenant within 10 seconds (95th percentile), emit revocation notifications, and record per-session results in the audit log. Given any session fails to revoke, when retries are attempted, then retry up to 3 times and surface failures on the dashboard with remediation guidance.
Automatic Escalation on Anomaly Thresholds
Rule: Anomalies include rate-limit breaches, concurrency-cap blocks, geo/IP blocks, notification dead-letters, and cooldown overrides. Given an anomaly occurs, when it is detected, then create an escalation event with severity High, send Slack and PagerDuty notifications within 15 seconds including tenant, event type, counts, and dashboard links. Given an escalation is created, when responders view the dashboard, then display an "Escalated" badge on the tenant and require acknowledgment; record who acknowledged and when. Given an escalation is unacknowledged, when 5 minutes elapse, then auto-page the secondary on-call and elevate severity to Critical. Rule: All escalation lifecycle transitions are captured in the immutable audit log with timestamps, actor, and correlation IDs.
Automatic Revocation on IdP Recovery
"As a platform admin, I want emergency sessions to end automatically when IdP service returns so that we minimize exposure and return to standard controls quickly."
Description

Continuously probe IdP health and automatically revoke all active breakglass sessions once normal authentication is restored. Require fresh IdP sign-in on next action, terminate tokens server-side, and provide a controlled grace period to let in-flight writes complete safely. Include a manual killswitch for security admins, and backfill audit entries with revocation reasons and recovery timestamps.

Acceptance Criteria
IdP Recovery Detection and Debounce
Given the IdP is marked Unhealthy and health probes run every 5 seconds When the system receives 3 consecutive successful health checks (HTTP 2xx) with latency <= 500 ms Then the IdP state is set to Healthy and an idp_recovered event is emitted within 2 seconds of the third success And no recovery event is emitted if fewer than 3 consecutive successes occur
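The debounce in this criterion (three consecutive sub-500 ms successes before flipping to Healthy) is a small state machine. A sketch, with the class name and event list as illustrative stand-ins for the real probe service:

```python
class IdpHealthMonitor:
    """Marks the IdP Healthy only after N consecutive successful probes;
    any failure or slow probe resets the streak."""

    def __init__(self, required_successes: int = 3, max_latency_ms: int = 500):
        self.required = required_successes
        self.max_latency_ms = max_latency_ms
        self.streak = 0
        self.state = "Unhealthy"
        self.events = []

    def record_probe(self, ok: bool, latency_ms: int) -> str:
        success = ok and latency_ms <= self.max_latency_ms
        self.streak = self.streak + 1 if success else 0
        if self.state == "Unhealthy" and self.streak >= self.required:
            self.state = "Healthy"
            # downstream, this event triggers revocation of all breakglass sessions
            self.events.append("idp_recovered")
        return self.state
```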
Automatic Revocation of Active Breakglass Sessions
Given one or more active breakglass sessions exist And an idp_recovered event is emitted When revocation is triggered Then all server-side breakglass access and refresh tokens are invalidated within 5 seconds And any subsequent API call using a revoked breakglass token returns HTTP 401 with error_code=breakglass_revoked and retry_hint=signin_with_idp
Forced IdP Re-authentication on Next Action
Given a user previously authenticated via breakglass And the IdP is marked Healthy When the user initiates their next action that requires authentication Then the user is redirected to the IdP sign-in flow and must successfully authenticate to proceed And upon successful sign-in, a new standard session is issued and the requested action completes And if IdP sign-in fails, no side effects from the requested action are committed
Controlled Grace Period for In‑flight Writes
Given one or more write operations authenticated via breakglass started before the idp_recovered event timestamp When revocation is triggered Then those in-flight write requests are allowed to complete for up to 30 seconds after the event And any new request started after the event using breakglass tokens is rejected with HTTP 401 breakglass_revoked And any in-flight operation exceeding the 30-second grace is aborted and fully rolled back
Security Admin Manual Revocation Killswitch
Given a user with the Security Admin role and MFA verified When the user activates the Breakglass Revocation killswitch in the admin console or via POST /admin/breakglass/revoke with a reason Then all active breakglass tokens are invalidated within 5 seconds And an audit entry is recorded with actor_id, reason, scope=all_breakglass, and timestamp And non-admin callers receive HTTP 403 when attempting to access the killswitch
Audit Backfill with Revocation Reason and Recovery Timestamp
Given active breakglass sessions existed within the last 24 hours When a revocation occurs with reason ∈ {idp_recovered, manual} Then an immutable audit entry is written per affected session within 10 seconds including session_id, user_id, revocation_reason, recovery_timestamp, initiator (system|actor_id), and token_terminated=true And an aggregate audit record summarizes total_revoked and outage_duration
Block New Breakglass Sessions After Recovery
Given the IdP is marked Healthy When a user attempts to request a new breakglass session Then the request is denied with HTTP 409 and error_code=breakglass_unavailable with instruction to use IdP sign-in And no breakglass token is issued
Breakglass Policy Console & Runbooks
"As a broker-owner, I want to configure breakglass policies and share clear runbooks with my team so that emergency access aligns with our risk tolerance and operating procedures."
Description

Provide an admin console to configure breakglass policies: approver quorum and roles, allowed channels, default TTL, permitted scopes, office-level overrides, allowed hours, geo restrictions, and alert thresholds. Include test-mode simulations and health checks to validate readiness without granting real access. Generate printable runbooks and request QR/call flows for field agents, with clear risk warnings and step-by-step procedures. Localize policy text and end-user prompts for supported regions.

Acceptance Criteria
Approver Quorum and Roles Configuration
Given I am an admin with policy edit rights When I configure an approver quorum N (N >= 1) and specify required approver roles and cross-role constraints Then the console validates inputs, prevents duplicates/empty values, and saves a versioned policy with editor, timestamp, and change summary Given a breakglass request is initiated under this policy When approvals are collected Then access is granted only if at least N approvals are received and role constraints are satisfied; otherwise the request is denied with a clear reason and audit entry Given I enter invalid values (e.g., N = 0, unknown role) When I attempt to save Then the save is blocked and inline validation messages explain each error
Default TTL, Permitted Scopes, and Auto-Expiry Enforcement
Given I set a default TTL and a maximum TTL cap, and define permitted access scopes When I save the policy Then the settings persist and are applied to all new breakglass requests Given a breakglass access is granted When the TTL elapses Then access auto-expires, all active sessions/tokens are revoked within 60 seconds, and an audit event is recorded Given a requester asks for a TTL above the cap or a scope not permitted When the request is submitted Then it is rejected with a specific error and no access is granted
Office-Level Overrides and Policy Inheritance
Given a global breakglass policy exists and I open an office policy When I view the office configuration Then the UI clearly indicates inherited values versus overridden values and shows the effective policy Given I create or update an office-level override (e.g., different quorum, hours) When I save Then the office’s effective policy updates immediately without affecting other offices, and the change is audit-logged Given I remove an office override When I save Then the office reverts to inherited global values and the effective policy reflects the change Given both global and office values exist for the same setting When determining enforcement Then the office override takes precedence over global and this precedence is explained in the UI help text
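The inheritance semantics above (office override wins per setting, unset keys inherit global values) can be sketched as a shallow merge; the policy keys shown are examples, not a fixed schema:

```python
GLOBAL_POLICY = {"quorum": 2, "default_ttl_min": 60, "allowed_hours": "08:00-18:00"}

def effective_policy(global_policy: dict, office_override=None) -> dict:
    # Per-setting precedence: an office override wins; every unset key inherits global.
    merged = dict(global_policy)
    if office_override:
        merged.update(office_override)
    return merged
```

Removing an office override is then just recomputing with `office_override=None`, which restores the inherited global values.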
Allowed Hours and Geo Restriction Enforcement
Given I configure allowed hours (with explicit timezone) and a geofence (radius or polygon) for an office When I save the policy Then the settings are persisted and shown in the effective policy summary Given a breakglass request is initiated When the request occurs outside the allowed hours or the user’s location is outside the configured geofence Then the request is blocked with a clear denial reason and an audit event referencing the violated constraint Given a breakglass request is initiated around daylight saving transitions When evaluating allowed hours Then the office’s configured timezone is used and the outcome is consistent with wall-clock hours Given the requester’s location cannot be determined When geo restriction is required Then the request is denied by default with guidance to contact an approver via approved channels
Allowed Channels, Alert Thresholds, and Notification Routing
Given I enable or disable request and approval channels (e.g., QR web form, phone/IVR, SMS link, email) When I save the policy Then only enabled channels appear in runbooks and live flows; disabled channels are hidden and return a 403 with rationale if accessed directly Given alert thresholds (e.g., requests per time window, consecutive denials, pending approvals age) and destinations (e.g., email, webhook) are configured When thresholds are exceeded Then an alert is sent to each destination within 60 seconds including office, metric, threshold, and recent event IDs, and delivery status is recorded Given I click “Send test alert” for a destination When the test runs Then the destination receives a clearly marked test alert and the console reports success or a specific error
Test-Mode Simulations and Readiness Health Checks
Given I start a test-mode simulation for an office When the flow runs end-to-end Then no real access is granted, the UI is clearly labeled as Simulation, synthetic audit events are generated, and the simulation can be reset Given I run readiness health checks When checks execute Then the console validates approver contactability, notification endpoints, time sync, and geo services, showing pass/fail per check with remediation guidance Given one or more checks fail When viewing readiness Then the office readiness status is marked Not Ready and the failing checks are listed with timestamps and next steps
Printable Runbooks and Field Agent QR/Call Flows with Localization
Given I generate field runbooks for a selected office and locale When generation completes Then a downloadable, print-optimized PDF is produced containing risk warnings, step-by-step procedures, approver contact paths, enabled channels only, and policy metadata (policy ID/version, last updated), along with QR codes and call scripts Given a field agent scans the runbook QR or dials the call flow When the flow loads Then it is mobile-friendly, reflects the office’s effective policy (quorum, hours, geo, channels), displays clear risk warnings, and in simulation mode indicates no real access will be granted Given the office locale is set When viewing policy text, runbooks, and end-user prompts Then content is localized with correct date/time/number formats; where a translation is missing, English fallback is used and missing keys are reported in an admin diagnostics view

Elasticity Curve

Interactive graph that shows projected showings lift, days‑on‑market reduction, and offer likelihood at each 0.5–5% price change. Pin the “sweet spot” where impact peaks without over‑cutting, so you move with confidence instead of guesswork.

Requirements

Dynamic Price Range Controls
"As a listing agent, I want to adjust hypothetical price changes in 0.5–5% increments so that I can see how small cuts affect showings, DOM, and offer odds without manual recalculation."
Description

Provide UI controls to adjust hypothetical price changes from 0.5% to 5.0% in 0.5% increments against the current list price, with both slider and type-in support. Display both percentage and absolute dollar deltas, honoring local MLS rounding rules and price floors/thresholds. Validate inputs, guard against negative or out-of-range values, and default to the active list price pulled from the listing record. Ensure keyboard accessibility, responsive behavior on mobile, and analytics events for adjustments. Persist the last-used range per listing to streamline repeated analysis.
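The clamping, 0.5% snapping, and MLS increment rounding described above can be sketched as two pure functions; the $100 increment is one of the MLS rule patterns named in the criteria, used here for illustration:

```python
def snap_percent(pct: float) -> float:
    """Clamp to the inclusive 0.5-5.0 range and snap to the nearest 0.5% step."""
    pct = min(5.0, max(0.5, pct))
    return round(pct / 0.5) * 0.5

def dollar_delta(list_price: int, pct: float, mls_increment: int = 100) -> int:
    """Absolute dollar delta for a pct change, rounded half-up to the MLS increment."""
    raw = list_price * pct / 100.0
    return int((raw + mls_increment / 2) // mls_increment) * mls_increment
```

With the spec's example inputs, P=$475,000 at 2.2% yields a raw delta of $10,450, which rounds half-up to $10,500 under a $100 increment rule.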

Acceptance Criteria
Slider Adjustments Within 0.5–5.0% Range
Given an active listing with list price P loaded from the listing record And the price-change slider is visible When the user drags the slider thumb Then the selected percentage is constrained to the inclusive range 0.5%–5.0% And it snaps to exact 0.5% increments on all input methods And values outside the range are clamped to the nearest bound And a formatted percentage label (e.g., “2.0%”) is displayed And a formatted absolute dollar delta based on P is displayed concurrently And negative values cannot be selected via the slider
Type-in Percentage and Dollar Inputs Sync and Validate
Given percent and dollar type-in fields are visible alongside the slider When the user enters a percentage value Then the dollar field updates in real time using the current list price and MLS rounding rules And the slider position updates to the nearest 0.5% increment And if the entered percent is <0.5 or >5.0, an inline error shows “Allowed range is 0.5%–5.0%” and the value is not accepted on blur When the user enters a dollar value Then the percent field updates in real time using MLS rounding rules And the slider position updates to the nearest 0.5% equivalent And non-numeric characters (except decimal and thousands separators) are rejected And negative or zero values are rejected with an inline error
MLS Rounding Rules and Price Floors/Thresholds Honored
Given the listing’s MLS configuration defines rounding/increment rules and any price floors or thresholds When a hypothetical price change is calculated Then the absolute dollar delta is rounded to the nearest permitted increment per MLS rules before display and use And the resulting hypothetical list price respects any MLS floors/thresholds; if not, it is adjusted to the nearest allowed price And the UI displays an unobtrusive note (e.g., “Adjusted per MLS rules”) when an adjustment is applied And an example test: Given P=$475,000 and a 2.2% change under a $100 increment rule, Then the raw delta of $10,450 rounds half-up to a displayed $10,500 And unit tests cover at least three MLS rule patterns (fixed increment, tiered increment, floor threshold)
Default Base Price and Initial State
Given the control initializes for a listing Then it uses the active list price from the listing record as the calculation base And if no prior user selection is persisted for this listing, the initial selection is set to the minimum allowed change (0.5%) And the percent and dollar deltas display correctly from that base without requiring user interaction And if the listing’s active list price changes while the UI is open, the base updates and all derived values recalculate within 200ms
Keyboard Accessibility and Screen Reader Semantics
Given a keyboard-only user focuses the slider When ArrowLeft/ArrowRight are pressed Then the value decreases/increases by exactly 0.5% When PageUp/PageDown are pressed Then the value increases/decreases by exactly 1.0% When Home/End are pressed Then the value jumps to 0.5%/5.0% respectively And the slider exposes role=slider with aria-valuemin=0.5, aria-valuemax=5.0, and an aria-valuetext announcing both percent and dollar delta And focus order is logical and trap-free; labels are programmatically associated And the implementation meets WCAG 2.2 AA for the component
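The key-to-step mapping and the announced value text can be sketched as below; key names follow DOM `KeyboardEvent.key` values, everything else is an illustrative assumption:

```python
# Keyboard step handling for the slider: arrows move 0.5%, Page keys 1.0%,
# Home/End jump to the range ends, and results are clamped to 0.5%-5.0%.
KEY_STEPS = {"ArrowRight": 0.5, "ArrowLeft": -0.5, "PageUp": 1.0, "PageDown": -1.0}
LO, HI = 0.5, 5.0

def next_slider_value(current: float, key: str) -> float:
    if key == "Home":
        return LO
    if key == "End":
        return HI
    return min(HI, max(LO, current + KEY_STEPS.get(key, 0.0)))

def aria_valuetext(pct: float, dollar_delta: int) -> str:
    """Announce both the percent and the dollar delta, per the criteria."""
    return f"{pct:.1f} percent, ${dollar_delta:,} price change"
```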
Mobile Responsive Behavior
Given a viewport width ≤ 414px When the control renders Then the slider and inputs fit within the container without horizontal scrolling And primary touch targets (slider thumb, increment controls) are ≥ 44×44 px And value changes via touch are precise to 0.5% increments And orientation changes preserve the current selection and displayed values And performance remains responsive (input-to-display latency ≤ 100ms)
Analytics and Persistence of Last-Used Range Per Listing
Given a signed-in user adjusts the control When the value changes Then an analytics event “price_adjustment_changed” is emitted with listing_id, user_id, base_list_price, source (slider|percent_input|dollar_input), previous_percent, new_percent, previous_dollar, new_dollar, and ISO-8601 timestamp And events are throttled to at most one per 150ms and contain no PII beyond IDs And events retry on transient failures and are verifiable in the analytics QA environment And the last selected percent for this listing and user is persisted server-side And on subsequent visits (and across devices), the persisted value restores the control state And if the active list price has changed since persistence, the percent is restored and the dollar delta recalculates from the new base And users can reset to default (0.5%) via a clear action
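The 150 ms throttle can be sketched as a small stateful emitter; the class and field names are hypothetical, and real implementations would typically also flush the trailing value:

```python
# Emit at most one analytics event per throttle window (150 ms in the spec).
class ThrottledEmitter:
    def __init__(self, window_ms: int = 150):
        self.window_ms = window_ms
        self._last_emit_ms: int | None = None
        self.sent: list[dict] = []

    def emit(self, event: dict, now_ms: int) -> bool:
        """Emit the event unless another was sent within the window."""
        if self._last_emit_ms is not None and now_ms - self._last_emit_ms < self.window_ms:
            return False  # suppressed by the throttle
        self._last_emit_ms = now_ms
        self.sent.append(event)
        return True
```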
Predictive Elasticity Model
"As a broker-owner, I want reliable projections with confidence bands so that I can advise agents and sellers with quantified risk and transparency."
Description

Deliver a production inference service that projects showings lift (%), days-on-market reduction (days), and offer likelihood (%) for each candidate price delta. Ingest listing features (beds, baths, SQFT, price band, photos), TourEcho showing/feedback signals (volume, sentiment, objections), local market comps, and seasonality/zip-level demand indices. Output central estimates with 80–95% confidence bands and a model confidence score; fall back to market-level priors when listing-specific data is sparse. Enforce model versioning, feature governance, and explainability (top drivers at each delta). Meet P95 latency < 2s per curve and support batch precomputation for common deltas. Instrument for drift monitoring and calibration checks.

Acceptance Criteria
Real-time Elasticity Curve Inference (API)
Given a valid request containing listing_id or full feature payload and requested price deltas in 0.5% increments from −5% to +5% When POST /v1/elasticity/infer is invoked Then the service returns HTTP 200 with a JSON payload that includes, for each delta: showings_lift_pct, dom_reduction_days, offer_likelihood_pct, conf_band_80 [lo, hi], conf_band_95 [lo, hi], model_confidence_score, model_version, request_id And all numeric fields are finite; offer_likelihood_pct ∈ [0,100]; confidence bands satisfy lo <= central <= hi for both 80% and 95% And if listing_id is provided, the service resolves required features, comps, and demand indices with data_freshness_hours <= 48; if not met, data_freshness_ok=false is returned and the last available snapshot is used without 5xx
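The numeric invariants in the contract (likelihood bounded to [0,100], both bands bracketing the central estimate) can be checked with a small validator; the function name is illustrative:

```python
# Validate per-delta invariants from the inference contract; returns a list
# of violation messages (empty list means the point is well-formed).
def curve_point_violations(point: dict) -> list[str]:
    errors = []
    central = point["offer_likelihood_pct"]
    if not (0 <= central <= 100):
        errors.append("offer_likelihood_pct out of [0, 100]")
    for band in ("conf_band_80", "conf_band_95"):
        lo, hi = point[band]
        if not (lo <= central <= hi):
            errors.append(f"{band} does not bracket the central estimate")
    return errors
```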
P95 Latency and Throughput SLOs
Given a mixed workload of 100 RPS with representative payload sizes and delta counts producing a full curve When the service runs in the production environment for 10 consecutive minutes Then P95 end-to-end latency per curve is < 2.0 seconds and P99 < 3.0 seconds And successful response rate is >= 99.9% with no timeouts at the application layer And resource autoscaling maintains SLOs without manual intervention
Sparse Data Fallback to Market Priors
Given a listing with sparse signals (e.g., <3 showings in past 14 days or missing photos_count) or missing key features When inference is requested Then the response includes fallback=true and prior_source identifying the market-level prior used (e.g., zip, price_band) And model_confidence_score <= 0.4 and confidence bands widen (95% band width >= 1.5x median width of non-fallback cases) And no 5xx is returned; HTTP 200 with warnings[] including "sparse_signals"
Explainability: Top Drivers per Delta
Given an inference that is not a pure fallback When returning the curve Then each delta includes top_drivers (for offer_likelihood_pct) with at least 3 entries: feature_name, contribution, direction And the signed sum of contributions for a given delta approximates (offer_likelihood_pct at delta − offer_likelihood_pct at 0%) within ±5% relative error And top_drivers are deterministic for identical inputs (same order and values within ±1e−6)
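The additivity check described above amounts to comparing the signed contribution sum against the likelihood shift at the delta; a minimal sketch (names assumed):

```python
# top_drivers consistency: the signed sum of contributions should match the
# offer-likelihood shift at a delta within a +/-5% relative error budget.
def drivers_consistent(drivers: list[dict], likelihood_shift: float,
                       rel_tol: float = 0.05) -> bool:
    total = sum(d["contribution"] for d in drivers)
    if likelihood_shift == 0:
        return abs(total) <= rel_tol
    return abs(total - likelihood_shift) / abs(likelihood_shift) <= rel_tol
```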
Model Versioning, Feature Governance, and Schema Enforcement
Given any request to the inference endpoint When the payload is validated Then unknown or out-of-range features are rejected with HTTP 422 and a structured error listing field, issue, expected_range/type And accepted requests are logged with feature_version, model_version, and transformation_version; responses always include model_version And deploying a new model version supports canary routing (≥5% traffic) and rollback within 5 minutes; only one active production version is default-routed at any time
Batch Precomputation and Caching for Common Deltas
Given the common delta set S = {−5.0, −4.5, …, 0.0, …, +4.5, +5.0} percent for all active listings When the nightly batch job runs Then 99% of listings have precomputed curves in cache by 06:00 local market time with TTL=24h And cache hit rate during 08:00–20:00 local time is >= 80% for requests matching S And when a listing’s price changes or new feedback arrives, its cache is invalidated and recomputed within 5 minutes
Drift Monitoring and Calibration
Given daily production data and weekly labeled outcomes where available When monitoring jobs execute Then population/data drift metrics (e.g., PSI >= 0.2 or KS p-value < 0.01) raise a pager alert within 15 minutes of detection And 80% and 95% confidence bands achieve empirical coverage within ±5% of nominal on the latest monthly backtest And a retraining recommendation is emitted when drift persists for 7 consecutive days or calibration error exceeds threshold; reports are stored and accessible via /metrics
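The PSI metric referenced above can be computed over pre-binned population shares; a standard sketch (the epsilon guard against empty bins is an implementation assumption, the 0.2 alert threshold comes from the criteria):

```python
import math

def psi(expected: list[float], actual: list[float], eps: float = 1e-6) -> float:
    """Population Stability Index over binned proportions (each list sums to 1)."""
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))
```

A uniform baseline drifting toward a skewed distribution crosses the 0.2 pager threshold, while identical distributions score zero.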
Interactive Multi-Metric Graph
"As a listing agent, I want an interactive graph that clearly shows how each metric changes with price so that I can quickly compare trade-offs and explain them to my seller."
Description

Render an interactive curve with price change (%) on the x-axis and three toggleable series—showings lift, DOM reduction, and offer likelihood—on the y-axis using dual-axis scaling when needed. Provide hover/click tooltips with point-in-time metrics, absolute dollar equivalents, and confidence intervals. Support zoom/reset, series on/off toggles, and colorblind-safe palettes. Ensure responsive performance on desktop and mobile, with P95 render < 300ms for a 10–20 point curve. Integrate with the listing detail page, receiving data from the Predictive Elasticity Model via a typed API contract.

Acceptance Criteria
Render Multi-Metric Dual-Axis Curve
Given a listing with elasticity data for 10–20 price deltas between 0.5% and 5.0% sourced from the Predictive Elasticity Model When the Interactive Multi-Metric Graph renders on the listing detail page Then the x-axis displays "Price Change (%)" spanning the min/max deltas present without extrapolation And three series are plotted: Showings Lift (%), Days on Market Reduction (days), Offer Likelihood (%) And percentage series (Showings Lift, Offer Likelihood) share the left y-axis (%) and DOM Reduction uses the right y-axis (days) And both y-axes auto-scale to visible data with 10% visual headroom And data points are connected with a smooth curve without interpolating across missing points And axis titles and tick labels do not collide with the legend at viewport widths ≥ 1024px
Series Toggle Controls
Given the graph is rendered with a legend containing three toggle controls for each series When a user toggles a series off via the legend Then the series line, points, and CI band are hidden and the relevant y-axis rescales to remaining visible series And at least one series remains enabled (prevent all-off state with a disabled toggle if only one is left) And toggling the last percentage series hides the left y-axis; toggling off DOM Reduction hides the right y-axis And the toggle state persists until page unload and survives zoom/reset actions within the session And the legend reflects current visibility state with clear on/off indicators
Point Tooltips with Dollar Equivalents and CIs
Given the base list price and currency code are available from page context and the graph has at least one visible series When a user hovers a point (desktop) or taps a point (mobile) Then a tooltip appears showing: price change %; absolute dollar change computed from base price (rounded to nearest $100, localized); values for each visible series at that point; and confidence interval bounds (low/high) for each visible series when provided by the API And CI is additionally visualized as a semi-transparent band around each series when CI is present and that series is enabled And the tooltip remains within the viewport, follows the pointer/focus, and dismisses on pointer out or second tap And keyboard focus on a data point (via arrow keys/Tab) displays the same tooltip content
Zoom and Reset Interactions
Given the graph is in the default view with the full x-axis range visible When the user drag-selects an x-range (desktop) or pinch-zooms horizontally (mobile) Then the x-axis zooms to the selected range and both y-axes rescale to the visible data only And double-click (desktop) or double-tap (mobile) on the plot area resets to the full range And a visible Reset Zoom control resets to the full range when activated via click or keyboard And zoom and reset operations complete within 150ms at P95 for a 10–20 point dataset
Responsive Performance and P95 Render Target
Given a 10–20 point elasticity dataset and cold cache When the graph component mounts on desktop (≥1024px wide) or mobile (≤768px wide) Then the graph renders to first complete frame within 300ms at P95 over 200 measured loads per form factor And no single main-thread task exceeds 100ms during initial render at P95 And the plot and legend remain readable and usable at breakpoints 320px, 375px, 768px, and 1024px without horizontal scrolling
Colorblind-Safe Palette and A11y
Given default and high-contrast themes are enabled When the graph renders with any combination of the three series Then each series uses a colorblind-safe palette and distinct line style (solid/dashed/dotted) and/or marker shape And the contrast ratio between series lines/markers and background is ≥ 3:1 And legend items, toggles, and Reset Zoom are keyboard focusable, operable via Enter/Space, and expose accessible names/roles And a screen-reader-friendly tabular summary of the current viewport data is available via an "Accessibility Data" control
Typed API Integration and Error Handling
Given a typed API contract for the Predictive Elasticity Model response is implemented When data is fetched and passes schema validation Then the graph renders using the provided points, units, confidence interval bounds, and baseListPrice/currencyCode from page context or API per contract When schema validation fails or the request errors Then a non-blocking error state "Elasticity data unavailable" with a Retry action is shown and the page remains usable When the dataset is empty or has fewer than 2 points Then show an "Insufficient data" state in place of the graph And while fetching, a skeleton placeholder is displayed; on retry success, the graph appears without a full page reload And all error outcomes are logged with listingId and response status for diagnostics
Sweet Spot Recommendation & Pinning
"As a listing agent, I want the system to recommend and let me pin the best price adjustment so that I can move forward confidently and justify the change to my seller."
Description

Compute and highlight an optimal price adjustment that maximizes a configurable utility function combining offer likelihood, showings lift, and DOM reduction, with sensible defaults and guardrails to avoid over-cutting (e.g., minimum net proceeds threshold). Allow one-click pinning of the recommended point, manual override with notes, and storage of the pinned state on the listing record. Generate an agent-facing and seller-facing rationale that summarizes key drivers and confidence. Surface warnings when the curve is flat/low-confidence. Expose a lightweight API to retrieve the current pinned recommendation for downstream reporting.

Acceptance Criteria
Configurable Utility Function & Defaults
Given no explicit weight configuration at account or listing level When computing the sweet spot utility Then platform default non-negative weights that sum to 1.0 are applied Given account- or listing-level weights are configured When computing the sweet spot utility Then those weights are used and recorded on the recommendation object (weights_source, weights_values) Given invalid weights are supplied (negative values or sum != 1.0) When computing the sweet spot utility Then the system falls back to platform defaults and records weights_source=default and a weights_error flag Given the utility combines offer likelihood, showings lift, and DOM reduction When evaluating candidate price cuts Then increasing any of these inputs with other inputs constant must not decrease utility
Optimal Price Recommendation Within Bounds & Guardrails
Given an elasticity curve is available for price cuts from 0.5% to 5.0% (inclusive) When optimizing utility across candidate cuts Then the recommended cut lies within this range Given a minimum net proceeds threshold is configured When evaluating candidate cuts Then any candidate that violates the threshold is excluded from consideration Given all candidates violate the minimum net proceeds threshold When optimizing utility Then the system recommends No change and surfaces a Guardrail: min proceeds warning Given multiple candidate cuts are within 0.25 utility points of the top utility When selecting the final recommendation Then choose the smallest absolute cut among those candidates Given the current price lies within 0.25% of the computed sweet spot When finalizing the recommendation Then set recommendation to No change and include rationale indicating proximity
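The selection logic above (guardrail exclusion, the 0.25-utility-point near-tie rule preferring the smallest cut, and the 0.25% no-change band) can be sketched as one function; candidate tuples and names are illustrative:

```python
# candidates: list of (cut_pct, utility, net_proceeds).
# Returns the recommended cut, or None to signal "No change".
def pick_sweet_spot(candidates, min_net_proceeds, current_cut_pct=0.0):
    eligible = [c for c in candidates if c[2] >= min_net_proceeds]
    if not eligible:
        return None  # all candidates violate the guardrail -> "No change"
    best_utility = max(u for _, u, _ in eligible)
    near_best = [cut for cut, u, _ in eligible if best_utility - u <= 0.25]
    choice = min(near_best, key=abs)  # smallest absolute cut among near-ties
    if abs(choice - current_cut_pct) <= 0.25:
        return None  # already within 0.25% of the sweet spot -> "No change"
    return choice
```

With three cuts whose utilities sit within 0.25 points of each other, the smallest cut wins; with an unreachable proceeds threshold, the function signals "No change".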
One-Click Pinning & Persistence
Given a computed recommendation is displayed When the user clicks Pin Sweet Spot Then the system saves a pinned record on the listing including price, delta_pct, basis=auto, weights, confidence, created_by, created_at and sets pinned=true Given a pinned recommendation exists When the page is refreshed or the user signs in from another device Then the pinned state and metadata persist and the pinned point is visually indicated on the curve Given a pinned recommendation exists When a user attempts to pin a different point Then the user is prompted to Confirm override or Cancel; on confirm the prior pin is superseded and an audit entry is recorded
Manual Override With Required Note & Audit Trail
Given a listing with a displayed curve When the user selects Override & Pin Custom Then a note field is required and Save remains disabled until a non-empty note is entered When the manual pin is saved Then the system stores basis=manual, chosen price and delta_pct, note text, user id, timestamp, and the previously computed auto recommendation (price, delta_pct) Then an audit event is created capturing before/after values, note, user, and timestamp and is retrievable in the activity log and via API Then all views of the recommendation label the state as Manual and show the note
Dual Rationale Generation (Agent-Facing and Seller-Facing)
Given a pinned or current (unpinned) recommendation exists When generating rationale Then two variants are produced: agent_facing and seller_facing Agent-facing rationale includes numeric projections for showings lift, DOM reduction, and offer likelihood, the weights used, confidence score/band, and any guardrail exclusions count Seller-facing rationale presents the same drivers in plain language without internal jargon or weight values and includes a clear confidence label (e.g., High/Med/Low) Both variants include the recommended price, delta_pct, one-sentence summary of why now, and are generated in under 200 ms p95
Flat Curve and Low-Confidence Warnings
Given the maximum utility improvement versus current price is below the configured flat_threshold (default 1.0%) When rendering the curve and recommendation Then a Flat curve warning is displayed with tooltip explaining low expected benefit Given the confidence score is below the configured confidence_threshold (default 0.60) When rendering the recommendation and rationale Then a Low confidence warning is displayed with guidance to gather more data Then both warning flags (flat_curve, low_confidence) are stored on the recommendation object and shown consistently in agent- and seller-facing views and via API
Lightweight API: Retrieve Current Pinned Recommendation
Given a listing has a pinned recommendation When a client GETs /api/v1/listings/{listing_id}/sweet-spot Then respond 200 with JSON including price, delta_pct, basis (auto|manual), pinned=true, pinned_by, pinned_at, confidence, weights, guardrail_flags, rationale_summary Given a listing has no pinned recommendation When a client GETs the same endpoint Then respond 200 with the latest computed recommendation and pinned=false When the request lacks a valid OAuth2 bearer token Then respond 401; when the caller lacks access to the listing, respond 403; when the listing id is not found, respond 404 Then the API meets p95 latency <= 300 ms under 10 concurrent requests and includes a schema_version in the response
Scenario Save & Share
"As an agent, I want to save and share a specific elasticity curve with my seller so that we can align on a pricing move asynchronously and keep a record of what we reviewed."
Description

Enable users to save named curve snapshots including model version, weights, selected range, pinned point, and timestamp. Provide side-by-side comparison of saved scenarios and a shareable, read-only web link for sellers with optional PDF export. Respect RBAC: agents can share their listings; broker-owners can view team listings; sellers only see shared snapshots without underlying comps. Track opens and comments, and write an audit trail to the listing activity feed. Integrate into the TourEcho listing dashboard for quick retrieval.

Acceptance Criteria
Save Snapshot with Full Metadata
Given I am an authenticated agent on a listing with an active Elasticity Curve When I click “Save Snapshot”, enter a unique name (1–80 chars), and confirm Then the system persists an immutable snapshot with: listingId, snapshotId, name, modelVersion, modelWeightsHash, selectedRange(min,max), pinnedPoint(price, key metrics), and timestamp (UTC ISO‑8601), plus actorId And the snapshot appears in the listing’s Scenarios panel within 2 seconds And attempting to reuse an existing name on the same listing shows an inline validation error and prevents save And subsequent changes to the curve do not alter previously saved snapshots
Side-by-Side Comparison of Saved Scenarios
Given a listing has at least two saved snapshots When I select up to four snapshots and click “Compare” Then a comparison view renders each snapshot’s name, model version, selected range, pinned point, and key metrics aligned on the same axes And visual deltas between snapshots are highlighted without page reload And toggling snapshots in or out updates the comparison within 500 ms And closing the comparison returns me to the listing dashboard state I came from
Generate Read‑Only Seller Share Link with PDF
Given I have a saved snapshot and share permission on the listing When I click “Share”, choose options, and generate a link Then a tokenized, read‑only URL is created and copied to clipboard on request And visiting the URL shows only the snapshot view (no edit controls, no underlying comps) And, if “Include PDF export” is enabled at share creation, the share view displays a PDF export button that produces a PDF matching on‑screen values and excluding comps And visiting the URL with an invalid token returns HTTP 403 And the share view is responsive and loads in ≤3 seconds on a standard broadband connection
RBAC Enforcement for Agents, Broker‑Owners, and Sellers
Given platform roles are configured When an Agent accesses their own listing’s Scenarios panel Then they can create, view, compare, and share snapshots for that listing When a Broker‑Owner accesses a team listing Then they can view and compare snapshots across their team’s listings And share controls are disabled unless they are also the assigned listing agent When a Seller accesses a valid share link Then they can view the shared snapshot and export PDF (if enabled) but cannot edit, compare, or view underlying comps And any attempt to access snapshots without appropriate permissions returns HTTP 403
Open and Comment Tracking on Share Link
Given a seller opens a valid share link When the snapshot view loads Then an open event is recorded with snapshotId, linkId, timestamp, and userAgent And unique open count increments once per device/browser When the seller submits a comment between 1 and 1000 characters Then the comment is stored with snapshotId, linkId, timestamp, and comment text And the comment becomes visible in the listing dashboard’s Scenarios comments thread
Audit Trail Entries in Listing Activity Feed
Given audit logging is enabled When a snapshot is saved, a share link is created, a share link is opened, a comment is added, or a PDF is exported Then an entry is appended to the listing’s activity feed with action type, snapshot name/id, actor (if available), and timestamp And feed entries are ordered newest‑first and filterable by “Scenarios” And clicking a feed entry navigates to the relevant snapshot or share analytics view
Dashboard Retrieval and Quick Access
Given I am on the TourEcho listing dashboard When I open the Scenarios panel Then I see a sortable, searchable list of saved snapshots showing name, date saved, model version, pinned price, and share status And selecting a snapshot loads it within 2 seconds and in no more than 2 clicks from the dashboard And multi‑select enables the Compare action And all fields (name, model version, selected range, pinned point, timestamp) display exactly as saved
Real-Time Recalculation & Data Freshness
"As a listing agent, I want the curve to refresh when new data arrives so that my pricing advice always reflects the latest market signals."
Description

Automatically trigger curve recalculation when new showings, feedback sentiment shifts, MLS status/price updates, or market index changes are ingested. Provide a manual refresh action and display last-updated timestamp and data sources used. Implement caching with intelligent invalidation and a freshness SLA (e.g., < 24h for market indices, < 1h for TourEcho showings). Precompute curves for active listings nightly and notify the agent when the sweet spot meaningfully changes. Log all recalculation events for observability.

Acceptance Criteria
Auto Recalc on New Showings/Sentiment Shift
Given an active listing with an existing elasticity curve And the system is connected to TourEcho showing and feedback streams When a new showing event for that listing is ingested Or the aggregated feedback sentiment for that listing changes by >= 0.2 on the -1..+1 scale Then a recalculation job is enqueued within 30 seconds And p95 recalculation completion time is <= 3 minutes And the curve version increments and the last-updated timestamp equals the job completion time And the curve reflects the new showing count and sentiment inputs (verifiable via the Data Sources panel) And any cached curve for that listing is invalidated and viewers see the updated curve within 15 seconds of job completion
Auto Recalc on MLS Price/Status Update
Given an active listing synced to an MLS feed When the MLS feed ingests a price change for that listing Then a recalculation job is enqueued within 30 seconds and completes p95 <= 5 minutes And the curve baseline price equals the new MLS list price And the last-updated timestamp and Data Sources indicate MLS as a source with the new price and timestamp When the MLS status changes for that listing (e.g., Active → Pending/Sold) Then a recalculation event is executed and logged with trigger_type = "mls_status_update" And the curve’s last-updated timestamp reflects the status-change recalculation
Freshness SLA and Cache Invalidation
Given a user views the elasticity curve for a listing Then TourEcho showings and sentiment data used are no older than 60 minutes at render time And market index data used are no older than 24 hours at render time And if either data domain exceeds its SLA, an automatic refresh is triggered and a Stale badge with the data timestamp is displayed until completion And for any new event affecting the listing or market indices, cache entries for that listing are invalidated within 10 seconds; otherwise they expire by TTL and never exceed the SLA window
Manual Refresh, Timestamp, and Data Sources
Given a user is viewing the elasticity curve When the user clicks Refresh Then a recalculation starts immediately and the Refresh control shows a loading state And on success, the Last Updated timestamp reflects the job completion time in the user’s timezone and is strictly greater than or equal to the previous value And a Data Sources panel lists each source with its data timestamp: TourEcho Showings, Feedback Sentiment, MLS, Market Index, and cache status (hit/miss) And if no newer data are available, the UI displays Up to date and the Last Updated timestamp remains unchanged And on error, an error message is shown and no partial updates are applied
Nightly Precompute and Sweet-Spot Change Notification
Given the nightly window 02:00–05:00 in the account’s timezone When the nightly precompute job runs Then 99% of active listings complete recalculation by 05:00 with p95 per-listing compute time <= 30 seconds And for any listing where the sweet-spot price shifts by >= 0.25% of current list price OR the predicted offer likelihood at the sweet spot changes by >= 5 percentage points, the assigned agent receives a single notification within 10 minutes including before/after values And duplicate notifications for the same curve version are suppressed for 24 hours And failed computations are retried up to 3 times with exponential backoff and surfaced in logs
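The notification gate above reduces to two thresholds; a minimal sketch with illustrative names (the 0.25% shift and 5-percentage-point thresholds come from the criteria):

```python
# Notify the assigned agent only when the sweet spot meaningfully moves:
# a shift >= 0.25% of the current list price, or an offer-likelihood change
# of >= 5 percentage points at the sweet spot.
def should_notify(prev_sweet_spot: float, new_sweet_spot: float,
                  list_price: float,
                  prev_likelihood_pct: float, new_likelihood_pct: float) -> bool:
    shift_pct = abs(new_sweet_spot - prev_sweet_spot) / list_price * 100.0
    return shift_pct >= 0.25 or abs(new_likelihood_pct - prev_likelihood_pct) >= 5.0
```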
Recalculation Event Logging and Auditability
Given any recalculation is executed (auto, manual, or nightly) Then an event record is written with fields: event_id, listing_id, trigger_type, curve_version_before, curve_version_after, enqueue_time, start_time, end_time, duration_ms, cache_hit, data_versions (showings_timestamp, sentiment_version, mls_version, market_index_version), sweet_spot_before, sweet_spot_after, status (success|error), error_message (optional) And events are queryable by listing_id and time range via an internal endpoint with p95 query latency <= 5 seconds for up to 10,000 events And logs are retained for at least 30 days And 99.9% of recalculation attempts produce a corresponding event record
Market Index Update Batched Recalculation
Given a new market index snapshot is ingested When the snapshot is validated and published Then impacted active listings have recalculation jobs enqueued in batches within 5 minutes And batching respects rate limits of <= 100 jobs/min per worker to protect system stability And 95% of impacted listings complete recalculation within 3 hours of index publish time And no elasticity curve rendered after publish time uses market index data older than 24 hours And a global event is logged with the market_index_version and affected_count
Outcome Tracking & Model Feedback Loop
"As a broker-owner, I want to see how accurate the projections were after a price change so that we can refine the model and build trust with sellers over time."
Description

After a pinned price change is enacted, track realized outcomes (showings delta, DOM actual, offers received) versus projections and visualize variance at the listing and portfolio levels. Attribute causality windows to avoid confounding events (e.g., staging done, photo updates). Feed labeled outcomes back into training/evaluation to improve calibration and segment performance (price band, neighborhood). Provide accuracy dashboards and alerts for drift. Respect privacy by aggregating where necessary and honoring data retention policies.

Acceptance Criteria
Listing-level Projection vs Actual Variance Visualization
Given an agent pins a new price on the Elasticity Curve and confirms the price change is enacted (via MLS sync or manual confirmation) When the default outcome window of 14 days post-enactment completes or the listing closes earlier Then the system computes and displays on the listing analytics page:
- Projected vs actual for: showings delta (post-window total minus baseline total, normalized per day), DOM actual (upon close/off-market), and offers count
- Absolute and percentage variance for each metric, with 95% confidence interval around the projection
- The price-change timestamp, baseline window (default 7 days pre), and outcome window (default 14 days post) used
- A data sufficiency badge (Sufficient if >=5 measured days in each window; else Insufficient)
And the variance visualization renders without error within 2 seconds on broadband. And if the user adjusts baseline (3–14 days) or outcome (7–28 days) windows, all values recompute within 1 second.
Portfolio-level Accuracy and Variance Rollup
Given a broker/owner opens the portfolio accuracy dashboard for a selected date range When at least 20 listings have completed outcome windows in that range Then the dashboard shows:
- Coverage: count of price changes with outcomes / total price changes enacted
- Aggregated accuracy metrics: MAPE for showings delta, MAE for DOM, and Brier score for offer likelihood
- Filters: office, agent, neighborhood, price band, and date range
- Outlier table: top 5 overestimates and top 5 underestimates by absolute percentage error
And all aggregates recompute within 5 seconds upon any filter change. And the user can export the current view as CSV with the same filters applied.
Causality Window Attribution and Confounder Handling
Given a price change enactment event has occurred When any confounding event (staging completion, new media upload, open house, status change, or bulk marketing campaign) is detected within ±3 days of enactment Then the system flags the window as Confounded, excludes it from model training by default, and prompts the user to either:
- Adjust the outcome window (shift start by +3 days or extend to 21 days), or
- Confirm inclusion with a required reason
And the final window definition, confounder type(s), decision, timestamp, and user id are stored in an audit log. And if no confounders are detected, users may adjust baseline (3–14 days) and outcome (7–28 days) windows; recalculations update within 1 second.
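The ±3-day confounder detection reduces to an interval check around the enactment timestamp; a minimal sketch with illustrative names:

```python
from datetime import datetime, timedelta

def is_confounded(enacted_at: datetime, confounder_times: list[datetime],
                  window_days: int = 3) -> bool:
    """Flag the outcome window when any confounding event falls within +/-3 days."""
    window = timedelta(days=window_days)
    return any(abs(t - enacted_at) <= window for t in confounder_times)
```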
Outcome Labeling and Model Feedback Ingestion
Given an outcome window is completed and marked Eligible (not Confounded or overridden to include with reason) When the nightly pipeline runs Then a single labeled record is written with keys {listing_id, price_change_at}, containing: model_version, segment tags (price band, neighborhood), projection values, actual outcomes (showings delta, DOM, offers), window metadata, and quality flags. And duplicates on {listing_id, price_change_at} are rejected; missing required fields cause the record to be quarantined with an error logged. And the write success rate is ≥99.9% over a rolling 30-day period, with pipeline latency < 2 hours from window completion.
Segmented Calibration Metrics (Price Band & Neighborhood)
Given a user opens the calibration dashboard and selects a segment (price band or neighborhood) and date range When the dashboard loads Then it displays for the selected segment:
- Calibration curve for offer likelihood with Brier score and Expected Calibration Error (ECE)
- MAPE for showings delta and MAE for DOM
- A minimum cohort size badge; if n < 30, show 'Insufficient data' and suppress curves and summary metrics
And metrics reflect only listings within the selected filters and the latest model version unless the user chooses a different version from a version selector.
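ECE is the bin-size-weighted gap between mean predicted probability and observed rate. A minimal sketch with equal-width bins (an assumption; the production binning strategy may differ):

```python
def expected_calibration_error(probs, outcomes, n_bins=10):
    """ECE: sum over bins of (bin weight) * |mean predicted prob - observed rate|."""
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(probs, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)   # p == 1.0 falls in the last bin
        bins[idx].append((p, o))
    n = len(probs)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_p = sum(p for p, _ in b) / len(b)    # mean predicted probability
        rate = sum(o for _, o in b) / len(b)     # observed offer rate
        ece += (len(b) / n) * abs(avg_p - rate)
    return ece
```

The n < 30 cohort rule would be enforced before calling this: segments below the threshold show 'Insufficient data' rather than a (noisy) ECE value.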
Drift Detection Alerts and Dashboard
Given weekly production monitoring is enabled When any of the following are observed in the last 7 days relative to the prior 8-week baseline:
- Population Stability Index (PSI) > 0.3 on any top-10 feature, or
- KS test p-value < 0.01 on prediction distributions, or
- Rolling 2-week MAPE for showings delta increases by ≥30% in any segment with n ≥ 50
Then an alert is sent to the ML owner and product channel with a link to the drift report, and the dashboard highlights affected features/segments. And an incident ticket is opened automatically with the metrics, affected segments, model version, and start time attached.
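The PSI trigger can be sketched as follows, assuming features are pre-binned and counts are supplied per bin for the baseline (8 weeks) and current (7 days) windows; the epsilon guard for empty bins is an implementation assumption:

```python
import math

def psi(baseline_counts, current_counts, eps=1e-6):
    """Population Stability Index over pre-binned per-feature counts.

    Values > 0.3 on any monitored feature trip the drift alert.
    """
    b_total, c_total = sum(baseline_counts), sum(current_counts)
    total = 0.0
    for b, c in zip(baseline_counts, current_counts):
        b_pct = max(b / b_total, eps)   # guard against empty baseline bins
        c_pct = max(c / c_total, eps)   # guard against empty current bins
        total += (c_pct - b_pct) * math.log(c_pct / b_pct)
    return total
```

An identical distribution yields PSI 0; a sharp shift (e.g., 50/50 baseline vs. 90/10 current) lands well above the 0.3 threshold. The KS test on prediction distributions could be taken from an off-the-shelf statistics library rather than hand-rolled.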
Privacy, Aggregation, and Data Retention Compliance
Rule-oriented:
- Portfolio metrics are displayed only for cohorts with k ≥ 10 listings; otherwise show 'Insufficient cohort size' and suppress values
- Users may view listing-level outcomes only for listings they own or are granted via brokerage role-based access
- Training/evaluation datasets store only pseudonymous listing identifiers; no consumer PII is persisted
- Raw event data older than 180 days is purged; labeled outcomes older than 24 months are aggregated to monthly and detached from listing_id
- Purge jobs log deletions with counts and timestamps; audit logs retained for 24 months
- Account-level deletion requests are honored within 7 days and propagate to training/eval stores; a verification export is available on request
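The k ≥ 10 suppression rule is simplest to enforce in one place, before any metric is computed or rendered. A minimal sketch (names are illustrative):

```python
K_MIN = 10  # minimum cohort size for displaying portfolio metrics

def cohort_metric(listings, compute):
    """Apply the k >= 10 rule: return the computed metric or a suppression marker."""
    if len(listings) < K_MIN:
        return {"suppressed": True, "reason": "Insufficient cohort size"}
    return {"suppressed": False, "value": compute(listings)}
```

Centralizing the check means every surface (dashboard, CSV export, API) suppresses small cohorts consistently instead of re-implementing the threshold.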

Scenario Compare

Save multiple price‑move scenarios (e.g., −1%, −2.5%, −5%) and compare side‑by‑side KPIs. Generate a seller‑ready link or PDF with plain‑language rationale, speeding alignment and approvals while reducing back‑and‑forth.

Requirements

Scenario Creation & Management
"As a listing agent, I want to create, label, and save multiple price‑move scenarios so that I can quickly iterate and revisit options without rebuilding them."
Description

Enable agents to create, label, and manage multiple price‑move scenarios per listing. Each scenario supports absolute ($) and percentage (%) price adjustments with automatic recalculation of target list price, custom labels, notes, and objectives. Provide baseline metric snapshots at creation time to freeze comparisons, with optional per‑scenario assumptions (e.g., staging change, open house planned). Include autosave, duplicate, rename, and delete, plus version history and team collaboration permissions aligned to TourEcho roles. Integrate with the listing record so new showings/feedback trigger non‑destructive recalculation with an "as‑of" timestamp to preserve prior analyses.

Acceptance Criteria
Create Scenario with $ and % Adjustments
Given an active listing with a current list price, When the agent selects Dollar adjustment and enters a value (e.g., -5000), Then the target list price recalculates immediately using base price + adjustment and displays in currency format. Given an active listing with a current list price, When the agent selects Percentage adjustment and enters a value (e.g., -2.5%), Then the target list price recalculates immediately using base price × (1 + percent) and displays in currency format with 2 decimal places. Given a scenario in edit mode, When the agent switches between Dollar and Percentage modes, Then the equivalent value is converted and synced without loss of precision and the target list price remains consistent. Given the agent enters a target list price that is less than or equal to 0, When attempting to save, Then the system blocks the save and shows a validation message indicating the price must be greater than 0. Given the agent enters scenario metadata, When saving, Then Label (1–60 chars, unique per listing) is required and Notes (≤1000 chars) and Objectives (≤300 chars) are optional and persisted. Given the agent saves a valid scenario, When returning to the listing’s Scenario list, Then the new scenario appears with label, adjustment type/value, and target list price.
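The two adjustment modes and the lossless mode-switch conversion above can be sketched with exact decimal arithmetic (an implementation assumption; function names are illustrative):

```python
from decimal import Decimal, ROUND_HALF_UP

def target_price(base, *, dollars=None, percent=None):
    """Recalculate target list price from a $ or % adjustment (exactly one given).

    `percent` is e.g. Decimal('-2.5') for -2.5%. Result is rounded to cents;
    non-positive results are rejected per the validation rule.
    """
    base = Decimal(base)
    if dollars is not None:
        result = base + Decimal(dollars)              # base price + adjustment
    else:
        result = base * (1 + Decimal(percent) / 100)  # base price x (1 + percent)
    if result <= 0:
        raise ValueError("target list price must be greater than 0")
    return result.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def to_percent(base, dollars):
    """Convert a $ adjustment to the equivalent % when switching modes."""
    return Decimal(dollars) / Decimal(base) * 100
```

Using Decimal rather than floats is what makes the $/% round-trip "without loss of precision" achievable (e.g., −$5,000 on a $400,000 base is exactly −1.25%).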
Baseline Metrics Snapshot Freeze at Creation
Given a new scenario is saved, When the save completes, Then the system captures and stores a read-only baseline snapshot including showings-to-date, average feedback sentiment, room-level objections distribution, days on market, and list-to-showing conversion, with a Baseline as-of timestamp. Given listing KPIs change after scenario creation, When viewing the scenario’s baseline values, Then the baseline values remain unchanged and continue to reflect the original Baseline as-of timestamp. Given the scenario is displayed in Compare view, When baseline metrics are rendered, Then the values used are the stored snapshot values and not recalculated live values. Given an API consumer requests the scenario, When fetching scenario details, Then the payload includes the baseline snapshot block and its as-of timestamp.
Per-Scenario Assumptions Capture
Given a scenario is in edit mode, When the agent adds an assumption (e.g., Staging change on 2025-09-15), Then the assumption is saved with type, value, and optional date and is displayed on the scenario detail. Given a scenario has assumptions, When the agent saves and reloads the page, Then all assumptions persist and appear in the same order. Given a scenario has multiple assumptions, When the agent adds assumptions beyond the limit, Then the system blocks the addition and shows a validation message (max 10 assumptions; each 1–100 chars). Given assumptions exist, When generating a compare view, Then each scenario shows its assumptions badge/count and a tooltip or panel lists the assumptions.
Autosave and Version History
Given a scenario is being edited, When the agent modifies any field (adjustment, label, notes, objectives, assumptions), Then an autosave occurs within 2 seconds of inactivity or on blur/navigation and no data is lost on page reload. Given autosave occurs, When viewing version history, Then a new version entry is recorded with timestamp, editor identity, and a summary of changed fields. Given version history is available, When the agent selects a prior version and clicks Restore, Then the system creates a new version with the current state and sets the restored version as the active state. Given version history retention is enforced, When more than 20 versions exist, Then the system retains at least the last 20 versions and prunes older ones beyond the retention policy.
Duplicate, Rename, and Delete Scenario
Given a scenario exists, When the agent clicks Duplicate, Then a new scenario is created with identical fields (including adjustments, notes, objectives, and assumptions) and a fresh baseline snapshot captured at duplication time with label defaulting to "Copy of <Original>" (or "Copy (n) of <Original>" if a conflict exists). Given a scenario is renamed, When the agent enters a label that duplicates another scenario label for the same listing, Then the system blocks the rename and shows a uniqueness validation error. Given a scenario exists, When the agent clicks Delete and confirms, Then the scenario is removed from the listing’s Scenario list and compare views and cannot be accessed via direct URL or API. Given a scenario is duplicated or deleted, When viewing audit logs, Then the action is recorded with timestamp and actor.
Team Collaboration Permissions
Given TourEcho role permissions are enforced, When a user with Edit permission on the listing attempts to create, modify, duplicate, or delete a scenario, Then the action succeeds and is attributed to that user. Given TourEcho role permissions are enforced, When a user with View-only permission attempts to create, modify, duplicate, or delete a scenario, Then the action is blocked with a 403/permission error and no data changes occur. Given multiple editors open the same scenario, When User B saves changes after User A has saved newer changes, Then the system applies last-write-wins at field level and notifies User B of a conflict with an option to reload the latest state. Given audit requirements, When any scenario change occurs, Then an audit entry is written containing user, action, timestamp, and affected fields.
Non-Destructive Recalculation with As-Of Timestamp
Given a scenario with a baseline snapshot exists, When new showings or feedback are ingested for the listing, Then the system recalculates scenario-derived KPIs without altering the stored baseline and records a Latest as-of timestamp for the recalculation. Given both baseline and latest values exist, When the agent views the scenario, Then the UI allows toggling between Baseline (as of <timestamp>) and Latest (as of <timestamp>) metrics and both timestamps are visible. Given recalculation is event-driven, When new data arrives, Then updated metrics become available within 1 minute and a non-destructive recalculation event is logged in version history. Given recalculation occurs, When exporting or sharing scenario data via API, Then both baseline snapshot and latest recalculated metrics are available with their respective as-of timestamps.
KPI Modeling Engine
"As a listing agent, I want modeled KPIs for each price scenario so that I can understand the likely impact on showings, offers, and time on market."
Description

Compute per‑scenario projections using TourEcho data (showings volume, engagement cadence, feedback sentiment, room‑level objections) and market signals (recent comps, segment velocity, seasonality). Model price elasticity to estimate changes to showings/week, time‑to‑first‑offer, DOM percentile, offer probability within 7/14/30 days, and expected list‑to‑sale spread. Provide confidence bands and sensitivity toggles for key assumptions (marketing push, weekend open house). Expose a deterministic API for UI consumption, cache results, and mark outputs with data currency and inputs used to ensure traceability.

Acceptance Criteria
Per-Scenario KPI Projection Computation
Given a listing_id, baseline list_price, and a price_delta for a scenario When the engine computes projections Then it returns for the scenario: showings_per_week, time_to_first_offer_days, dom_percentile, offer_prob_7d, offer_prob_14d, offer_prob_30d, expected_list_to_sale_spread_percent And showings_per_week >= 0 and time_to_first_offer_days >= 0 And 0 <= dom_percentile <= 100 And 0 <= offer_prob_7d <= 1 and 0 <= offer_prob_14d <= 1 and 0 <= offer_prob_30d <= 1 And expected_list_to_sale_spread_percent is between -50 and 50 And p95 compute latency per scenario is <= 800 ms
Price Elasticity Behavior and KPI Constraints
Given a set of scenarios with progressively lower price_delta values (for example 0%, -1%, -2.5%, -5%) When projections are generated Then showings_per_week is non-decreasing across scenarios within a tolerance of 1% And offer_prob_7d, offer_prob_14d, and offer_prob_30d are each non-decreasing across scenarios within a tolerance of 0.5 percentage points And time_to_first_offer_days is non-increasing across scenarios within a tolerance of 1 day Given scenarios with higher price_delta values (price increases) When projections are generated Then the above monotonicities invert under the same tolerances
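These monotonicity checks can serve directly as automated tests against the engine's output. A minimal sketch, assuming scenarios arrive ordered from 0% down to the largest price cut; the 1% showings tolerance is interpreted here as relative to the first scenario's value (an assumption, since the criterion leaves the reference point open):

```python
def non_decreasing(values, tolerance):
    """True if each value is >= the previous one, allowing a small tolerance dip."""
    return all(b >= a - tolerance for a, b in zip(values, values[1:]))

def check_elasticity(scenarios):
    """Validate elasticity behavior across price-cut scenarios (0%, -1%, -2.5%, -5%):
    showings and offer probability must not fall, and time-to-first-offer must not
    rise, beyond the stated tolerances."""
    showings = [s["showings_per_week"] for s in scenarios]
    return (
        non_decreasing(showings, tolerance=0.01 * showings[0])
        and non_decreasing([s["offer_prob_14d"] for s in scenarios], tolerance=0.005)
        # negate days so "non-increasing within 1 day" becomes a non-decreasing check
        and non_decreasing([-s["time_to_first_offer_days"] for s in scenarios], tolerance=1)
    )
```

For price increases, the same helper applies with the lists reversed.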
Confidence Bands and Sensitivity Toggles
Given a scenario compute request When projections are returned Then each KPI includes confidence intervals at 80% and 95% levels (low and high bounds) And the request may include marketing_push and weekend_open_house toggles as booleans When marketing_push or weekend_open_house is true Then deterministic uplift coefficients are applied to affected KPIs and the applied coefficients are returned in assumptions.uplifts And if toggles are omitted they default to false and are consistent across all scenarios in a batch
Deterministic Modeling API Contract
Given identical inputs (listing_id, price_delta set, toggles, model_version) in the same environment When the API is called multiple times Then all numeric outputs are identical within 1e-9 and the order of scenarios in the response matches the request Given malformed inputs When the API is called Then it returns HTTP 400 with a JSON error detailing invalid fields Given a price_delta outside the supported range [-0.20, 0.10] When the API is called Then it returns HTTP 422 with a validation error Given required upstream data sources are unavailable When the API is called Then it returns HTTP 503 with a machine-readable retryability indicator And all successful responses use Content-Type application/json and UTC ISO-8601 datetimes
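Determinism and caching both hinge on a canonical input signature: identical requests must hash identically regardless of field order or scenario order. A minimal sketch (the field set and defaults are assumptions drawn from the criteria above):

```python
import hashlib
import json

def input_signature(listing_id, price_deltas, toggles, model_version):
    """Canonical hash of a compute request; doubles as the cache key.

    Scenario order and absent-vs-false toggles are normalized so that
    semantically identical requests always produce the same signature.
    """
    payload = {
        "listing_id": listing_id,
        "price_deltas": sorted(price_deltas),  # order-insensitive
        "toggles": {
            "marketing_push": bool(toggles.get("marketing_push", False)),
            "weekend_open_house": bool(toggles.get("weekend_open_house", False)),
        },
        "model_version": model_version,
    }
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

The signature is also what the response echoes back (see the traceability criteria), so a consumer can verify which inputs produced a given projection.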
Result Caching and Invalidation
Given a request that matches a previous compute exactly (same input signature) When called within TTL Then the response is served from cache with cache_hit true, computed_at_utc unchanged, and p95 latency <= 150 ms Given the TTL of 24 hours elapses without data changes When called Then the engine recomputes and sets cache_hit false and updates computed_at_utc Given upstream data currency advances for the listing (new comps or new TourEcho feedback) When detected Then cached entries for the affected listing are invalidated within 5 minutes and the next request recomputes
Traceability and Data Currency Tagging
Given projections are returned When inspecting the response Then it includes: model_version, model_git_sha, run_id, input_signature, computed_at_utc And a data_currency object with per-source timestamps for tourecho_showings, engagement_cadence, feedback_sentiment, room_level_objections, comps, segment_velocity, seasonality And an inputs_used object enumerating sources and sample sizes (for example showings_count_last_14d, comps_count) And an assumptions object with toggle values and uplift coefficients And all timestamps are UTC ISO-8601
Multi-Scenario Comparison Export Readiness
Given multiple scenarios are submitted in one request including a baseline When projections are returned Then scenarios are sorted by price_delta ascending and all share the same assumptions and model_version And for each scenario the response includes deltas_vs_baseline for each KPI And numerical rounding is applied consistently: showings_per_week 1 decimal, time_to_first_offer_days integer days, dom_percentile 1 decimal, offer_prob_7d/14d/30d as percent with 0.1% precision, expected_list_to_sale_spread_percent 0.1% precision And a compare_ready structure provides labels and units suitable for PDF or link generation
Side‑by‑Side Comparison View
"As a broker‑owner, I want to compare scenarios side‑by‑side so that I can quickly see which option best meets our goals."
Description

Present scenarios as columns with a configurable KPI row set (e.g., showings/week, offer probability, projected DOM, objection reduction index, expected spread). Pin the current price as baseline and highlight deltas with color/arrow indicators and tooltips defining each metric. Support 3–6 scenarios, responsive horizontal scrolling on mobile, column reordering, and KPI show/hide. Provide export‑consistent layout that maps 1:1 to PDF. Ensure keyboard navigation and screen reader labels for accessibility.

Acceptance Criteria
Columnar Comparison Grid with Configurable KPI Rows
Given 3–6 saved scenarios and default KPIs (showings/week, offer probability, projected DOM, objection reduction index, expected spread) When the comparison view loads Then each scenario renders as a distinct column with equal width and aligned KPI rows across columns And all KPI values display using standardized formats (percentages to 0.1%, integers with thousands separators, days as whole numbers) And the grid renders in under 1500 ms with up to 6 scenarios and 5 KPIs on a median device
Baseline Pin and Delta Visualization with Metric Tooltips
Given the current price scenario exists When the view renders Then that scenario is pinned as the first column labeled "Baseline" and cannot be reordered And for each non-baseline column, each KPI shows a delta vs baseline with arrow indicator (up=improvement, down=decline, flat=no change) and color (green=improve, red=decline, gray=flat) with contrast ≥ 4.5:1 And focusing or hovering the KPI info icon shows a tooltip within 300 ms that defines the metric and delta calculation And delta values are consistent and rounded (percent to 0.1, counts/days to whole numbers)
Scenario Count Limits and Mobile Horizontal Scrolling
Given the user selects scenarios When fewer than 3 or more than 6 scenarios are active Then an inline notice instructs to select 3–6 scenarios and the Export action is disabled until within range Given a viewport width ≤ 768 px When the comparison view loads Then columns are horizontally scrollable with visible scrollbar or swipe, without vertical layout breakage And at least one column is fully visible on load
Column Reordering Controls
Given 3–6 scenarios are displayed When the user drags a non-baseline column header Then the column reorders to the drop position with a placeholder and no data loss And the baseline column remains locked in position 1 And with keyboard focus on a non-baseline column header, pressing Shift+Left/Right moves the column one position left/right and announces the new position via ARIA live
KPI Show/Hide Configuration
Given the KPI selector is open When the user toggles KPIs on or off Then the grid immediately updates to show only the selected KPIs in the chosen order And at least one KPI must remain visible; attempting to hide the last KPI shows a validation message and prevents the change And hidden KPIs are excluded from export
Export-Consistent 1:1 PDF Layout
Given the comparison grid is visible and scenario count is within 3–6 When the user exports to PDF Then the PDF reproduces the on-screen grid 1:1 including column order, widths, KPI order/visibility, baseline pin, delta arrows, and colors And no content is truncated; additional pages are added if needed while maintaining alignment And a visual diff shows ≤ 1% layout deviation in dimensions versus the rendered view
Keyboard Navigation and Screen Reader Semantics
Given a keyboard-only user When navigating the comparison view Then all interactive controls (reorder, KPI selector, export, tooltip triggers) are reachable in logical tab order with visible focus indicators And the grid uses ARIA table semantics; cells expose accessible names including scenario title, KPI label, value, and delta direction/value And color is not the only indicator of change; icons/text are present; contrast for text and indicators meets WCAG AA (≥ 4.5:1)
Plain‑Language Rationale Generator
"As a listing agent, I want a clear narrative explaining each scenario so that sellers understand the rationale behind the recommendation."
Description

Generate seller‑ready narratives for each scenario that synthesize sentiment trends and room‑level objections, reference relevant comps, and explain trade‑offs in clear, non‑jargon language. Include editable templates with tokens for metrics (e.g., {offer_probability_14d}) and guardrails to avoid overclaiming (disclaimers, confidence qualifiers). Provide inline editing, regenerate with adjustable tone/length, and change tracking so agents can fine‑tune before sharing.

Acceptance Criteria
Generate Plain-Language Narrative Per Scenario
Given a saved price-move scenario with sentiment, objections, and comps available When the agent generates a seller-ready rationale Then the narrative references at least the top 3 room-level objections with counts or percentages And includes week-over-week sentiment delta (positive/neutral/negative) with numeric change And cites 2–4 relevant comps including address, date (list/close), price, and price delta vs subject And explains trade-offs of the selected price move on showings and offer likelihood without using jargon And achieves Flesch-Kincaid Grade ≤ 8 and 250–350 words (Standard length preset) And headings and bullet points are used for clarity (Introduction, What We Heard, Comps, Trade-offs, Next Steps) And all numeric values are sourced from the current scenario context
Tokenized Template Rendering and Editing
Given an editable template containing tokens in {token_name} format (e.g., {offer_probability_14d}, {median_days_on_market}) When the agent generates a rationale Then 100% of tokens present in the template are resolved from the scenario metrics map And unresolved or unknown tokens are highlighted with an inline error and cannot be exported until resolved And value formatting rules are applied: currency in $#,### (no decimals), percentages with 1 decimal (e.g., 12.3%), dates as Mon DD, YYYY And the agent can edit text and tokens inline and save as a reusable custom template And saving preserves token syntax and validation rules
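Token resolution plus the formatting rules above can be sketched as a single substitution pass; unresolved tokens are left in place and reported so the UI can highlight them and block export. The metrics-map shape (kind, value) is an assumption for illustration:

```python
import re
from datetime import date

FORMATTERS = {
    "currency": lambda v: f"${v:,.0f}",           # $#,### with no decimals
    "percent":  lambda v: f"{v:.1f}%",            # one decimal place
    "date":     lambda v: v.strftime("%b %d, %Y"),  # Mon DD, YYYY
}

TOKEN_RE = re.compile(r"\{([a-z0-9_]+)\}")

def render_template(template, metrics):
    """Resolve {token_name} tokens against a metrics map of name -> (kind, value).

    Returns (text, unresolved): unresolved tokens stay verbatim in the text and
    are listed so export can be blocked until they are fixed.
    """
    unresolved = []
    def sub(match):
        name = match.group(1)
        if name not in metrics:
            unresolved.append(name)
            return match.group(0)
        kind, value = metrics[name]
        return FORMATTERS[kind](value)
    return TOKEN_RE.sub(sub, template), unresolved
```

Saved custom templates would be validated with the same function (a dry run against the scenario's metrics map) before they can be reused.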
Guardrails, Disclaimers, and Confidence Qualifiers
Given any generated or edited rationale When the rationale is prepared for share or export Then a disclaimer block appears at the end with last-updated timestamp, data sources, and a non-guarantee statement And prohibited absolute-claim phrases (e.g., "guarantee", "will sell", "certain", "no risk") are blocked with an inline warning until removed And any forecasted metric includes a confidence qualifier mapped to thresholds: High (≥80%), Medium (60–79.9%), Low (<60%) And any metric older than 7 days is annotated with a freshness note And every claim referencing comps or metrics includes the specific metric/comps count used And the rationale passes a toxicity and bias screen with no flagged terms before export
Inline Editing with Track Changes and Version History
Given a generated rationale displayed in the editor When the agent makes edits Then insertions and deletions are visually tracked with author and timestamp metadata And the agent can Accept/Reject individual changes or Accept/Reject All And Version History captures each regenerate or major edit as a new version with diff And the agent can restore any prior version without data loss And exporting allows choosing Clean Copy (all changes accepted) or Markup (tracked changes visible) And undo/redo supports at least 20 consecutive operations per session
Regenerate with Adjustable Tone and Length Presets
Given tone presets (Neutral, Optimistic, Conservative) and length presets (Short 120–180 words, Standard 250–350, Detailed 450–600) When the agent selects tone and length and clicks Regenerate Then the new narrative adheres to the selected length range (±5%) And retains required sections and guardrails (disclaimer, confidence qualifiers) And avoids prohibited phrases regardless of tone And token values and numeric metrics remain accurate to the scenario And the regenerated text differs from the previous version with Jaccard similarity ≤ 0.70 And the prior version is preserved in Version History
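The "Jaccard similarity ≤ 0.70" regeneration check compares word sets between the old and new drafts. A minimal word-level sketch (tokenization details are an assumption):

```python
import re

def jaccard_similarity(text_a, text_b):
    """Word-set Jaccard similarity: 0 = disjoint vocabularies, 1 = identical.

    A regenerated narrative should score <= 0.70 against the prior version
    to count as meaningfully different.
    """
    a = set(re.findall(r"[a-z0-9']+", text_a.lower()))
    b = set(re.findall(r"[a-z0-9']+", text_b.lower()))
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

If a regeneration scores above the threshold, the engine would retry or surface a "too similar" notice rather than silently accepting the draft.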
Data Gaps and Fallback Behavior
Given some scenario metrics or tokens are missing (e.g., {offer_probability_14d}) When generating a rationale Then the system prompts to select a fallback (e.g., use {offer_probability_30d}) or omit the dependent sentence And no unresolved tokens remain in the final narrative or export And any fallback substitution is annotated inline (e.g., "30-day proxy used") and logged in Version History And if fewer than 2 valid comps are available, the Comps section uses qualitative context and labels the limitation And narratives never fabricate values for missing metrics
Scenario-Compare Consistency Across Multiple Price Moves
Given multiple saved price-move scenarios (e.g., −1%, −2.5%, −5%) When generating rationales for side-by-side comparison Then each scenario produces a rationale using the same template sections in the same order And all per-scenario metrics, comps, and objections reflect that scenario’s data (no cross-contamination) And each rationale is tagged with the scenario ID and price move and persists edits per scenario And toggling between scenarios restores the last edited version for that scenario And share/export bundles include all selected scenarios with consistent formatting and section alignment
Export-Ready Validation for Seller Sharing
Given a rationale marked Ready to Share When the agent exports to link or PDF Then tracked changes are removed in the exported copy unless Markup is explicitly selected And the document includes title, agent name, brokerage, property address, scenario label, timestamp, and disclaimer And links to comps resolve to viewable records or are rendered with full citation if links are unavailable And accessibility is met: heading hierarchy, alt text for charts, and minimum 12pt body text in PDF And export completes within 5 seconds for up to 3 scenarios
Seller‑Ready Share Links & PDF Export
"As a listing agent, I want to send a seller‑ready link or PDF so that we can align quickly without a meeting."
Description

Create secure, read‑only share links that render the comparison view and rationale with brokerage branding, property metadata, agent contact, and timestamp. Provide high‑fidelity PDF export optimized for print (proper pagination, margins, accessible text, vector charts) and mobile‑friendly viewing. Include link analytics (opens, last viewed) and notifications to the agent upon first view. Ensure the share view is immutable to recipients and reflects the scenario versions selected at share time.

Acceptance Criteria
Render Read‑Only Comparison Share View
Given an agent has selected multiple price‑move scenarios and entered a rationale in Scenario Compare When the agent generates a seller‑ready share link Then a recipient opening the link sees the comparison view with only the selected scenarios, side‑by‑side KPIs, and the provided rationale And the view includes brokerage branding (logo/theme), property metadata (address, MLS ID if present), agent name and contact info, and a share‑time timestamp And no edit controls are visible, and all interactive elements that would alter data are disabled And the content matches the on‑screen state at share time, including scenario ordering
Immutable Snapshot of Selected Scenarios
Given a seller‑ready share link was generated at time T When the agent later edits, reorders, adds, or deletes scenarios in Scenario Compare Then recipients opening the existing link continue to see the content as it existed at time T with no changes And recipients cannot toggle scenarios on/off or change parameters And the share view displays the share timestamp and indicates it is a read‑only snapshot
Secure Link Generation and Access
Given an agent generates a seller‑ready share link Then the link URL uses HTTPS and contains an unguessable random token of at least 32 URL‑safe characters (≥128 bits of entropy) And the share view returns HTTP 403 for invalid or revoked tokens And the share page includes noindex, nofollow, and noarchive directives and excludes PII from the URL And rate limiting is applied to mitigate token‑spray attempts (e.g., 429 after threshold) And the agent can revoke the link, after which subsequent opens return 410 Gone
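Token generation of this strength is a one-liner with a CSPRNG; a minimal sketch using Python's standard library (the byte count is an assumption chosen to comfortably exceed the entropy requirement):

```python
import secrets

def generate_share_token():
    """Unguessable URL-safe share token.

    32 random bytes = 256 bits of entropy, base64url-encoded to a
    43-character string with no padding.
    """
    return secrets.token_urlsafe(32)
```

Revocation would be handled server-side (mark the token revoked and return 410 Gone); the token itself never encodes listing or recipient data, which keeps PII out of the URL.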
Link Analytics: Opens and Last Viewed
Given a seller‑ready share link exists When any recipient successfully loads the share view Then the link’s Opens count increases by 1 and Last Viewed updates to the request’s timestamp (UTC) And analytics are updated within 30 seconds of the open event And the agent can view Opens and Last Viewed in the share panel of the listing within the app
First‑View Notification to Agent
Given a seller‑ready share link has never been viewed When the first successful view occurs Then the agent receives an in‑app notification and an email within 60 seconds And the notification includes the property address, link name/ID, and the view timestamp (agent’s timezone) And subsequent views do not trigger additional first‑view notifications
High‑Fidelity PDF Export for Print
Given an agent requests a PDF export of the comparison share view When the PDF is generated Then the PDF mirrors the share content (selected scenarios, KPIs, rationale, branding, property metadata, agent contact, timestamp) And pages paginate without truncating charts/tables, with margins between 0.5" and 1.0" And charts render as vectors, text is selectable/searchable, and fonts are embedded And the PDF includes accessibility tags and reading order for screen readers And the agent can choose Letter (8.5x11 in) or A4 (210x297 mm), and the layout adapts accordingly
Mobile‑Friendly Share View
Given a recipient opens the share link on a mobile device ≤414 px width When the page loads over a simulated 4G connection Then the content is readable without horizontal scrolling, charts and tables are legible, and tap targets are at least 44x44 px And Largest Contentful Paint ≤2.5 s and Cumulative Layout Shift ≤0.1 And the header with branding and agent contact remains accessible without obscuring content
Approval & Comment Workflow
"As a seller, I want to approve or comment on a scenario so that decisions are captured and next steps can proceed."
Description

Allow sellers to approve a chosen scenario or request changes directly within the share view. Capture explicit approval with timestamp and identity confirmation, and reflect status (Proposed, Approved, Declined) back in the listing record. Enable threaded comments per scenario with @mentions and email notifications. Lock approved scenarios against accidental edits and record a change log to preserve decision history.

Acceptance Criteria
Seller Approves Scenario in Share View
Given a seller opens a shared Scenario Compare link with at least one Proposed scenario When the seller selects a scenario and clicks Approve and confirms in a modal Then the scenario status updates to Approved within 2 seconds and displays an "Approved" badge in the share view And the parent listing record status for that scenario reflects Approved within 5 seconds And a confirmation message displays to the seller with date/time (localized) of approval And the assigned agent receives an email notification containing listing, scenario name, and approval timestamp
Identity Confirmation and Approval Audit Record
Given a seller attempts to approve a scenario When the seller verifies identity via one-time code sent to the shared email/phone or via signed magic link Then the approval is recorded with verified identifier, IP, user agent, timezone, and ISO 8601 timestamp And the approval record includes listing ID, scenario ID, approver name, and action=Approved And the audit record is immutable and viewable in the listing's Decision History And approvals without completed identity verification are not persisted and the scenario remains Proposed
Request Changes / Decline with Reason
Given a seller opens a shared Scenario Compare link with at least one Proposed scenario When the seller clicks Request Changes on a scenario and submits a required reason (minimum 10 characters) Then the scenario status updates to Declined within 2 seconds with the reason attached And the reason is posted as the first comment in that scenario's thread, attributed to the seller And the listing record reflects Declined within 5 seconds And the assigned agent receives an email summarizing the reason with a deep link to the thread
Threaded Comments with @Mentions and Email Notifications
Given a user (seller or agent) views a scenario's comments When the user posts a comment containing @Firstname Lastname or @email of a participant Then the mention resolves to a known user and creates a threaded reply under the scenario And the mentioned user receives an email within 2 minutes with the comment text and a deep link to the scenario thread And duplicate notifications for the same mention are suppressed within a 10-minute window And the comment shows poster identity and localized timestamp and maintains reply threading up to 2 levels
Lock Approved Scenario Against Edits
Given a scenario is Approved When any user attempts to edit scenario inputs (e.g., price, concessions, timing) or delete the scenario Then the action is blocked with a read-only notice and a "Create Revision" option And only users with Agent or Broker-Owner role can create a new revision; the original remains locked And the share view and API both return the scenario as read-only And the lock state is recorded in the change log with actor and timestamp
Change Log and Decision History
Given a listing with scenario activity When a status change, comment, mention, or lock event occurs Then a change log entry is appended with actor, action, entity, previous value, new value, and ISO 8601 timestamp And entries are ordered newest-first, filterable by scenario and action type, and exportable to CSV And entries cannot be edited or deleted by end users And the Decision History is visible in both the listing record and the share view (read-only for sellers)
Status Sync Across Views and API
Given any status change for a scenario (Proposed, Approved, Declined) When the change is committed Then all first-party surfaces (listing record, scenario compare dashboard, share view) reflect the new status within 5 seconds And the public share link and generated PDF display the correct status and timestamp And the GET /scenarios API returns the updated status and a new ETag/version And stale cached responses older than 60 seconds are invalidated
Access Control & Audit Trail
"As a broker‑owner, I want secure and auditable sharing so that client data is protected and we meet compliance obligations."
Description

Provide link‑level security settings: expiration, revocation, optional passcode, and per‑recipient tokens to prevent unintended forwarding. Maintain an audit trail of share events, views, approvals, edits, and downloads with timestamps and actor identity. Support export of audit logs for compliance, minimal PII storage, and configurable retention aligned with brokerage policies.

Acceptance Criteria
Link Expiration Enforcement
Given a share link with an expiration timestamp set in UTC When a recipient opens the link before the expiration time Then the Scenario Compare content renders And an audit event "link_viewed" is recorded with occurred_at (UTC), link_id, recipient_token_id, and status "success" Given the same link When a recipient opens the link after the expiration timestamp Then access is denied with HTTP 410 and message "Link expired" And an audit event "access_blocked" is recorded with reason "expired" and link_id Given the link owner updates the expiration timestamp When the change is saved Then an audit event "expiration_changed" is recorded with old_value and new_value
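The expiration rule above can be sketched as a small Python helper. This is a hypothetical illustration, not the platform's implementation: `check_link_access` and the event dictionaries are invented names that mirror the audit events the criterion specifies.

```python
from datetime import datetime, timezone

def check_link_access(expires_at, now=None):
    """Return (http_status, audit_event) for a share-link open attempt.

    Hypothetical helper; event names mirror the criteria above.
    Timestamps are compared in UTC, matching the expiration semantics.
    """
    now = now or datetime.now(timezone.utc)
    if now < expires_at:
        return 200, {"event_type": "link_viewed", "status": "success"}
    # At or past the expiration instant, access is denied with HTTP 410.
    return 410, {"event_type": "access_blocked", "reason": "expired"}
```

A real service would also persist the audit event and include link_id and recipient_token_id, which are omitted here for brevity.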
Link Revocation Propagation
Given an active share link When the owner clicks "Revoke access" and confirms Then the link becomes invalid within 60 seconds And any new requests receive HTTP 403 with message "Link revoked" And all active sessions for that link are terminated within 60 seconds And an audit event "link_revoked" is recorded with actor_id (owner_id) and link_id
Passcode-Protected Link Access
Given a share link configured with a passcode When a recipient opens the link Then a passcode prompt is displayed and content is not rendered until a valid passcode is entered Given the passcode prompt When the recipient enters the correct passcode Then the content renders And an audit event "passcode_succeeded" is recorded with recipient_token_id and link_id Given the passcode prompt When the recipient enters an incorrect passcode Then access is denied without revealing content And an audit event "passcode_failed" is recorded with attempt_count incremented
Per-Recipient Token Binding to Prevent Forwarding
Given a per-recipient share link with a unique token T When T is first opened successfully Then T is bound to that browser via a secure cookie for 30 days And an audit event "token_bound" is recorded with recipient_token_id and link_id Given the same token T When T is opened from a different browser or device Then access is denied with HTTP 403 and message "Link not valid on this device" And an audit event "access_blocked" is recorded with reason "token_bound_mismatch" and link_id
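One way to model the token-binding behavior is a first-open-wins registry, sketched below under stated assumptions: the `TokenBinder` class and its in-memory dictionary are illustrative stand-ins for a persisted binding plus a secure, HttpOnly cookie carrying the device id.

```python
class TokenBinder:
    """Bind each per-recipient token to the first browser that opens it.

    In-memory sketch; a real service would persist bindings and carry the
    device id in a secure, HttpOnly cookie with a 30-day lifetime.
    """
    def __init__(self):
        self._bindings = {}  # token -> device_id

    def open_link(self, token, device_id):
        if token not in self._bindings:
            self._bindings[token] = device_id          # first successful open
            return 200, {"event_type": "token_bound"}
        if self._bindings[token] == device_id:
            return 200, {"event_type": "link_viewed"}
        # Any other browser/device is rejected to prevent forwarding.
        return 403, {"event_type": "access_blocked",
                     "reason": "token_bound_mismatch"}
```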
Comprehensive Audit Trail Capture
Given share, view, approval, edit, download, passcode, expiration, and revocation actions occur on a Scenario Compare package When each action completes or fails Then an audit entry is created with fields: event_type, occurred_at (UTC ISO 8601), actor_type (owner|recipient|system), actor_id (user_id or recipient_token_id), object_type (link|scenario|pdf), object_id, and result (success|failure) And entries are immutable and ordered by occurred_at And new entries become visible in the owner's audit view within 5 seconds of the event
Audit Log Export for Compliance
Given a user with the "Export audit logs" permission When they request an export with a date range and optional filters (link_id, scenario_id, actor_id) Then a downloadable file is generated in CSV and JSON formats with a documented schema And timestamps are UTC ISO 8601 and a header row is present for CSV And for up to 100,000 rows, the export completes within 60 seconds And an audit event "audit_exported" is recorded with actor_id and filter parameters
PII Minimization and Configurable Retention
Given brokerage-level retention is configurable between 30 and 365 days (default 180) When retention is set to N days Then audit entries older than N days are purged within 24 hours And an audit event "retention_changed" is recorded with old_value and new_value Given PII minimization requirements When audit entries are stored or exported Then only pseudonymous identifiers (user_id, recipient_token_id) and non-sensitive metadata are included by default And no raw email addresses, phone numbers, or full IP addresses are persisted or exported unless explicitly allowed by brokerage policy
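The retention purge described above reduces to a cutoff filter; a minimal sketch follows, assuming each audit entry carries an `occurred_at` UTC datetime (the function name and entry shape are illustrative).

```python
from datetime import datetime, timedelta, timezone

def purge_expired(entries, retention_days, now=None):
    """Drop audit entries older than the configured retention window.

    retention_days is validated against the 30-365 day policy range
    (default 180); each entry is assumed to carry an 'occurred_at'
    UTC datetime.
    """
    if not 30 <= retention_days <= 365:
        raise ValueError("retention must be between 30 and 365 days")
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [e for e in entries if e["occurred_at"] >= cutoff]
```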

Smart Comps

Auto‑curates and weights comps by proximity, recency, bed/bath, style, and live showing sentiment. Weights are shown transparently, users can pin/unpin comps, and forecasts recalculate instantly, so pricing debates are grounded in trustworthy data.

Requirements

MLS/Public Data Integration for Comps
"As a listing agent, I want Smart Comps to automatically pull accurate comparable properties from trusted sources so that I start with a complete, reliable comp set without manual searching."
Description

Integrate with MLS/residential property data sources and public records to auto-retrieve candidate comparable listings and sales. Query by geospatial proximity (configurable radius), recency window, bed/bath count, property style/type, square footage, lot size, and status (active/pending/sold). Normalize and deduplicate records across sources, map attributes to a unified schema, and geocode addresses for distance calculations. Enforce brokerage/user data entitlements, handle rate limits, and cache results for performance. Provide graceful degradation if a source is unavailable and log data provenance for each comp so agents can trust the pool feeding Smart Comps.

Acceptance Criteria
Filtered Comp Retrieval by Proximity and Attributes
Given a subject property with latitude/longitude and filters for radius (miles), beds, baths, property style/type, square footage range, lot size range, status set (Active/Pending/Sold), and recency (days) When the comps retrieval service is invoked Then every returned comp has a great‑circle distance to the subject <= the specified radius And each returned comp satisfies all provided attribute filters; no comp outside constraints is included And omitting any filter results in no filtering for that attribute And including multiple statuses returns only comps whose status is within the specified set
Recency Window and Status Semantics
Given a recency window N (days) and current system time T (UTC) When retrieving comps Then Sold comps have close_date within N calendar days of T; Pending comps have pending_date within N days of T; Active comps have update_timestamp within N days of T And dates exactly N days prior to T are included; dates older than N days are excluded And comps missing the required date for their status are excluded with reason "missing_required_date" in response metadata
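The per-status date semantics above can be made concrete with a small lookup-driven check; this is an illustrative sketch, with `within_recency` and the comp dictionary shape as assumed names.

```python
from datetime import datetime, timedelta

# Which date field each status is judged by, per the criterion above.
REQUIRED_DATE = {
    "Sold": "close_date",
    "Pending": "pending_date",
    "Active": "update_timestamp",
}

def within_recency(comp, n_days, now):
    """Return (included, exclusion_reason) under an N-day recency window."""
    field = REQUIRED_DATE[comp["status"]]
    when = comp.get(field)
    if when is None:
        return False, "missing_required_date"
    # Dates exactly N days prior are included; older dates are excluded.
    return now - when <= timedelta(days=n_days), None
```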
Cross-Source Normalization and Deduplication
Given candidate records for the same property from multiple sources (e.g., MLS and public records) When mapping to the unified schema and merging duplicates Then each consolidated comp contains one canonical record with fields: address_line1, city, state, postal_code, latitude, longitude, beds, baths, property_type, architectural_style, square_feet, lot_square_feet, status, price, list_date, pending_date, close_date, mls_id, apn, year_built, source_provenance And duplicates are merged when any of these keys match after normalization: normalized_full_address OR apn OR mls_id And conflicting values are resolved by deterministic precedence rules (e.g., MLS over public records) recorded in the comp’s provenance And no duplicate for the same property appears more than once in the returned set
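A minimal sketch of the merge rule above, assuming each record carries a `source` field and the three normalized keys; the function names and the two-level precedence table are illustrative, not the production schema.

```python
SOURCE_PRECEDENCE = {"mls": 0, "public_records": 1}  # lower number wins conflicts

def merge_two(a, b):
    """Field-wise merge of two records for one property; MLS values win."""
    first, second = sorted((a, b), key=lambda r: SOURCE_PRECEDENCE[r["source"]])
    out = dict(second)                      # start from lower-precedence data
    out.update({k: v for k, v in first.items() if v is not None})
    out["source_provenance"] = [first["source"], second["source"]]
    return out

def merge_duplicates(records):
    """Group records sharing any key (address, APN, MLS id); merge each group."""
    key_to_idx, merged = {}, []
    for rec in records:
        keys = [rec.get(f) for f in ("normalized_full_address", "apn", "mls_id")]
        keys = [k for k in keys if k]
        idx = next((key_to_idx[k] for k in keys if k in key_to_idx), None)
        if idx is None:
            idx = len(merged)
            merged.append(dict(rec))
        else:
            merged[idx] = merge_two(merged[idx], rec)
        for k in keys:
            key_to_idx[k] = idx
    return merged
```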
Address Geocoding and Distance Calculations
Given comps with addresses lacking coordinates When geocoding is executed Then valid, mailable U.S. addresses receive latitude/longitude or are flagged "geocode_unavailable" And distances are computed using a great‑circle method and rounded to 0.01 miles for filtering and display And comps flagged "geocode_unavailable" are excluded from radius-based filtering and counted in response metadata as excluded_geocode_count
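The great-circle computation with 0.01-mile rounding is standard haversine; a self-contained sketch (the function name and Earth-radius constant are the usual conventions, not taken from the source):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8  # mean Earth radius

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance, rounded to 0.01 miles as specified."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = p2 - p1, radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return round(2 * EARTH_RADIUS_MILES * asin(sqrt(a)), 2)
```

One degree of latitude is roughly 69 miles, which makes a convenient sanity check.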
Data Entitlements Enforcement
Given a user with brokerage-scoped entitlements defining allowed MLS/data sources When retrieving comps Then only data from entitled sources are queried and returned And records from non-entitled sources are not queried and not present in results; restricted fields are not exposed And an access control decision (allow/deny) is logged per source with user_id, brokerage_id, and reason And attempts to explicitly include a disallowed source return HTTP 403 with a descriptive error without exposing restricted source identifiers
Rate Limits Handling and Caching
Given provider rate limits L requests per minute per credential When sustained load ranging from 0.9L to 1.2L is applied Then outbound calls remain <= 95% of L; overflow is queued with exponential backoff and jitter; no requests are dropped due to client-side throttling And identical queries (same subject and filters) within a configurable TTL (default 30 minutes) return from cache with 95th‑percentile latency <= 500 ms And cache entries are invalidated on TTL expiry or when underlying source data version changes And rate-limit and cache hit/miss metrics are emitted per source and are queryable
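The "identical queries return from cache within a TTL" clause implies a canonical cache key over subject and filters. A minimal in-memory sketch, with an injectable clock for testability (class and method names are illustrative):

```python
import json
import time

class CompsCache:
    """TTL cache keyed by the exact subject + filter set (default 30 min)."""
    def __init__(self, ttl_seconds=1800, clock=time.monotonic):
        self.ttl, self.clock, self._store = ttl_seconds, clock, {}

    @staticmethod
    def key(subject_id, filters):
        # Canonical, key-sorted JSON so identical queries share one key
        # regardless of filter ordering.
        return subject_id + ":" + json.dumps(filters, sort_keys=True)

    def get(self, subject_id, filters):
        entry = self._store.get(self.key(subject_id, filters))
        if entry and self.clock() - entry[0] <= self.ttl:
            return entry[1]
        return None  # miss or expired

    def put(self, subject_id, filters, result):
        self._store[self.key(subject_id, filters)] = (self.clock(), result)
```

Source-version-based invalidation and the backoff queue are omitted; the sketch covers only the TTL and key-canonicalization behavior.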
Graceful Degradation and Provenance Logging
Given one or more data sources are unavailable or erroring When retrieving comps Then results from available sources are returned with HTTP 200 and response metadata includes partial_results=true and a per-source status list (source, status_code, error_message) And a human-readable degradation message is included in response metadata And each comp includes provenance: list of contributing sources with source_name, source_record_id, fetched_at (UTC), last_updated_at (source), and fields_overridden And a request-scoped audit log is persisted with sources contacted, timings, rate-limit events, cache usage, and dedup decisions
Sentiment‑Weighted Scoring Model
"As a broker‑owner, I want comp scoring to reflect both property similarity and real‑time buyer sentiment so that price guidance mirrors actual market reception."
Description

Develop a scoring model that combines structural similarity (distance, recency, bed/bath match, style/type alignment, square‑foot variance) with live showing sentiment captured by TourEcho door‑hanger feedback. Normalize signals, apply configurable default weights, and compute a transparent per‑comp score. Support cold‑start behavior when sentiment is sparse (fallback to structural only) and guard against outliers. Expose model inputs/weights for auditing, and version the model so changes are traceable. Output includes overall score, factor contributions, and confidence to drive pricing guidance.

Acceptance Criteria
Compute Per-Comp Score with Normalized Factors
Given a comp with raw inputs for distance, recency, bed_bath_match, style_match, sqft_variance, and sentiment_signal When the model normalizes each factor Then each normalized factor value is within [0.0, 1.0] with 3-decimal precision and higher is better Given the default weights are loaded When the weights are applied Then each weight is >= 0 and the sum of all weights equals 1.0 +/- 0.001 Given normalized factors and weights When the overall score is computed Then the overall score is in [0, 100] with 2-decimal precision And each factor contribution equals normalized_value * weight * 100 +/- 0.01 And the sum of all factor contributions equals the overall score within +/- 0.01
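The arithmetic in this criterion (contribution = normalized_value × weight × 100, contributions summing to the overall score) can be sketched directly; `score_comp` is an assumed name, and validation mirrors the weight constraints stated above.

```python
def score_comp(normalized, weights):
    """Compute (overall_score, per-factor contributions).

    normalized: factor -> value in [0, 1]; weights: factor -> weight.
    Weights must be non-negative and sum to 1.0 within 0.001.
    """
    if any(w < 0 for w in weights.values()):
        raise ValueError("weights must be non-negative")
    if abs(sum(weights.values()) - 1.0) > 0.001:
        raise ValueError("weights must sum to 1.0")
    contributions = {
        f: round(normalized[f] * w * 100, 2) for f, w in weights.items()
    }
    # The overall score is the sum of contributions, on a 0-100 scale.
    return round(sum(contributions.values()), 2), contributions
```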
Configurable Default Weights and Overrides
Given a weight configuration payload provided by an admin When the payload is validated Then all weights are numbers between 0.0 and 1.0 And the sum equals 1.0 +/- 0.001 And invalid payloads are rejected with HTTP 400 and a descriptive error code Given a valid global default and a market-level override exist When a scoring request is executed for that market Then the override weights are used and returned in the output as effective_weights Given two scoring runs with identical inputs and identical weight configuration versions When executed at different times Then the overall score and factor contributions are identical
Cold-Start Fallback When Sentiment Is Sparse
Given the input indicates fewer than 3 unique showing feedbacks in the last 30 days for the subject listing When computing the score Then the sentiment factor is disabled (weight set to 0.0), structural weights are renormalized to sum to 1.0, and sentiment_used=false is included in the output Given sentiment data becomes sufficient (>= 3 unique feedbacks in the last 30 days) When recomputing Then the sentiment factor is re-enabled and effective_weights include a non-zero sentiment weight Given a comp with no sentiment coverage When scoring Then the sentiment factor is disabled and structural weights are renormalized
Outlier Detection and Guardrails
Given a set of candidate comps and raw factor values When normalizing each factor Then raw inputs are winsorized at the 5th and 95th percentiles computed from the candidate set before scaling Given a raw input value beyond the 95th percentile When computing the normalized factor Then the normalized value equals the value at the 95th percentile (is clipped) Given a scoring request where a single raw input is increased to an extreme (10x the 95th percentile) When recomputing the score Then the absolute change in overall score is <= 2.0 points
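Winsorizing at the 5th/95th percentiles means clipping every raw value into that band. A sketch using simple nearest-rank percentiles (production code might interpolate, but the clipping behavior is the same):

```python
def winsorize(values, lower_pct=5, upper_pct=95):
    """Clip raw values to the lower/upper percentiles of the candidate set."""
    ordered = sorted(values)

    def pct(p):
        # Nearest-rank percentile over the sorted candidate set.
        idx = min(len(ordered) - 1, max(0, round(p / 100 * (len(ordered) - 1))))
        return ordered[idx]

    lo, hi = pct(lower_pct), pct(upper_pct)
    return [min(max(v, lo), hi) for v in values]
```

Because extreme inputs are clipped to the 95th-percentile value before normalization, pushing one input to 10x that value cannot move its normalized factor, which is what bounds the score change.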
Transparent Output: Inputs, Weights, Contributions
Given a scoring response When inspecting the response schema Then it contains fields: comp_id, overall_score, confidence, model_version, config_version, factors[] Given the factors[] array When inspecting each element Then each element contains: name, source, raw_value, normalized_value, weight, contribution, applied_transformations[] Given the scoring response When summing contributions Then the sum equals overall_score within +/- 0.01 Given the scoring response When validating data types Then overall_score is a number, confidence is a number in [0.0, 1.0], and version fields are non-empty strings
Model Versioning and Auditability
Given the model logic changes When the new version is deployed Then model_version is incremented following semantic versioning and exposed in all responses Given only weight configuration changes When the new configuration is applied Then config_version is incremented and exposed in all responses; model_version remains unchanged Given a scoring request When processed Then an audit record is persisted containing request_id, timestamp, model_version, config_version, input_hash, effective_weights_hash, and overall_score Given the same inputs and the same versions When recomputing within 90 days Then the outputs are reproducible and the audit record can be retrieved by request_id
Confidence Score Calculation and Semantics
Given a scoring response When inspecting confidence Then confidence is a numeric value in [0.0, 1.0] with 2-decimal precision Given an increase in the number of independent showing feedbacks for the subject listing When recomputing the same comps with unchanged weights Then confidence does not decrease Given the latest showing feedback is older than 30 days When computing the score Then confidence <= 0.60 Given no sentiment is used (cold-start fallback) When computing the score Then confidence <= 0.70 Given mapping to labels is requested When computing labels Then confidence_label is "high" for confidence > 0.80, "medium" for 0.50 to 0.80 inclusive, and "low" for < 0.50
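The label bands above map to a straightforward conditional; a sketch (reading the "medium" band as the closed interval [0.50, 0.80] so the three labels partition [0, 1]):

```python
def confidence_label(confidence):
    """Map a confidence in [0.0, 1.0] to high / medium / low."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0.0, 1.0]")
    if confidence > 0.80:
        return "high"
    if confidence >= 0.50:
        return "medium"
    return "low"
```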
Weight Transparency Panel
"As a listing agent, I want to see how each factor contributed to a comp’s score so that I can clearly explain and defend pricing to my seller."
Description

Create a UI panel on each comp showing the score breakdown with per‑factor contributions (proximity, recency, bed/bath, style, size, sentiment). Include plain‑language explanations (e.g., “+12 for close distance”) and tooltips defining each factor. Display the current global default weights and any listing‑level overrides, with accessibility‑compliant visuals. Provide a “Why this comp” explainer and data provenance link. Ensure performance on web and mobile, and persist panel state per user session.

Acceptance Criteria
Per-Factor Score Breakdown with Plain-Language Explanations
Given a comp card is visible with a computed match score When the user expands the Weight Transparency Panel Then the panel lists proximity, recency, bed/bath, style, size, and sentiment with their numeric contributions and signed indicators And each factor displays a plain-language explanation aligned with its numeric contribution (e.g., "+12 for close distance") And the displayed total equals the sum of listed contributions within ±0.5 rounding tolerance And factors without data display "N/A" and contribute 0 to the total
Factor Definition Tooltips (Desktop and Mobile)
Given the Weight Transparency Panel is expanded When a desktop user hovers or keyboard-focuses the info icon for any factor Then a tooltip appears containing a one-sentence definition, the measurement unit, and how the factor influences score And the tooltip persists while focused and is dismissible with Escape And all six factors expose tooltips via keyboard focus When a mobile user taps the info icon Then the same tooltip content appears and is dismissible by tapping outside the tooltip
Display of Global Defaults vs Listing-Level Overrides
Given a listing has global default weights and may have listing-level overrides When the user expands the panel for any comp under that listing Then the panel shows Global default, Listing override (when present), and Effective weight for each factor And factors without overrides clearly display "Using global default" And effective weights sum to 1.0 ± 0.01 across all factors
Why This Comp Explainer and Data Provenance Link
Given the Weight Transparency Panel is expanded When the user opens the Why this comp explainer Then it lists the top three weighted reasons the comp was selected, each with a short sentence and the associated weight And a Data provenance link is visible with the named source When the user activates the Data provenance link Then it opens in a new tab to a reachable URL (HTTP 200) that describes the source dataset
Accessibility Compliance (WCAG 2.1 AA)
Given a keyboard-only user navigates the listing page When focus moves into the Weight Transparency Panel Then all interactive elements are reachable in a logical order with visible focus indicators And all text and icons meet contrast requirements (text ≥ 4.5:1; large text/icons ≥ 3:1) And screen readers announce labels, roles, values, and states for factors, weights, contributions, tooltips, and the Why this comp section And tooltip content is exposed via appropriate ARIA semantics and dismissible with Escape And no interaction requires pointer-only gestures or hover-only access
Performance and Responsive Behavior (Web and Mobile)
Given a listing with at least 8 comps and cached assets When a user expands the Weight Transparency Panel on a mid-tier mobile device over 4G (≈400 ms RTT, ≈1.6 Mbps) Then panel content renders within 300 ms at P95 And subsequent expand/collapse toggles render within 150 ms at P95 And additional JS/CSS payload attributable to the panel is ≤ 50 KB gzipped And main-thread blocking time added per expand is ≤ 50 ms at P95 And the layout adapts to screens ≥ 320 px width without horizontal scrolling
Panel State Persistence per User Session
Given a signed-in user views multiple comps in a listing When the user toggles the Weight Transparency Panel open or closed on any comp Then that open/closed state is preserved when navigating away and back within the same session And after a full page reload within 30 minutes, the prior state is restored And starting a new session (e.g., after 30 minutes of inactivity or sign-out) clears the saved state And state is tracked independently per comp within the listing
Pin/Unpin and Comp Overrides
"As a listing agent, I want to pin or exclude specific comps with notes so that the comp set reflects local context the model cannot fully capture."
Description

Allow users to pin comps to force include and unpin to exclude from calculations, regardless of score. Capture optional notes for each override and persist them at the listing level. Recalculate all outputs (recommended price, price band, predicted DOM) immediately on change. Respect role‑based permissions (agent, team lead, broker) and show visual badges for pinned/excluded comps. Maintain an audit trail of overrides with timestamps and users for accountability.

Acceptance Criteria
Pin forces inclusion and recalculates
Given a user with permission is viewing a listing’s comps and a comp has no override When the user sets the comp to Pinned (force include) Then the comp is included in valuation calculations regardless of score And the recommended price, price band, and predicted DOM recompute and display updated values within 2 seconds And a “Pinned” badge appears on the comp in list and detail views And the override persists for the listing after page refresh and re-login
Exclude override recalculates and badges
Given a user with permission is viewing a listing’s comps and a comp has no override When the user sets the comp to Excluded Then the comp is excluded from all valuation calculations regardless of score And the recommended price, price band, and predicted DOM recompute and display updated values within 2 seconds And an “Excluded” badge appears on the comp in list and detail views And the override persists for the listing after reload and across devices for the same user account
Override notes capture and persistence
Given a user is applying or editing an override (Pinned or Excluded) on a comp When the user enters an optional note up to 500 characters and saves the override Then the note is stored at the listing level linked to the comp and its current override state And the saved note is retrievable on subsequent visits and API reads And saving or editing a note does not change valuation outputs And empty note is allowed and stored as null without error
Role-based permissions enforcement
Given roles exist: Agent, Team Lead, Broker, and Read-Only When an Agent attempts to pin/exclude comps on a listing they own Then the action succeeds When a Team Lead attempts to pin/exclude comps on a listing within their team Then the action succeeds When a Broker attempts to pin/exclude comps on any listing in the brokerage Then the action succeeds When a Read-Only user attempts to pin/exclude or edit override notes Then override controls are hidden/disabled in the UI and any direct API attempt is rejected with HTTP 403 with no change to data
Audit trail for override actions
Given an override (create or change between Pinned and Excluded) is performed on a listing When the listing’s audit trail is viewed Then an immutable entry exists containing listing ID, comp ID, previous override state, new override state, note snapshot, user ID, user role, and ISO 8601 UTC timestamp And entries are ordered by timestamp descending And entries cannot be edited or deleted by any role And switching from Pinned to Excluded (or vice versa) creates a new entry capturing the change
Switching override states recalculates and logs
Given a comp currently has a Pinned override When the user switches the override to Excluded Then the recommended price, price band, and predicted DOM recompute and display updated values within 2 seconds And the “Excluded” badge replaces the “Pinned” badge immediately And a new audit entry records the previous and new states, note snapshot, user ID, and timestamp And the active override note for the comp reflects the latest saved note after the change
Instant Recalculation Engine
"As an agent in a pricing meeting, I want updates to apply instantly as I adjust comps so that discussions stay data‑driven without lag."
Description

Implement a reactive computation service that recalculates pricing recommendations, confidence, and forecast metrics in real time when comps, weights, or overrides change. Use incremental recompute and debouncing to deliver sub‑300ms perceived UI updates. Provide loading states, error surfacing, and last‑updated timestamps. Ensure deterministic results for the same inputs and expose a lightweight API for the Smart Comps UI. Log calculation versions for later review and export.

Acceptance Criteria
Real-time UI updates on comp and weight changes
Given the Smart Comps panel is open with initial metrics visible When the user pins/unpins a comp, adjusts a weight, or applies/removes an override Then pricing recommendation, confidence, and forecast metrics re-render within 300ms at p95 from input event to paint on reference hardware And the rendered values exactly match the engine output after defined rounding And no full page reload occurs; scroll position and focus are preserved And the UI remains responsive with First Input Delay < 50ms during the update window
Incremental recompute preserves unaffected metrics
Given a change affects only one comp or a single weight/override When recomputation runs Then only dependent aggregates are recalculated; unaffected metrics remain bit-identical (hash equality of serialized values) And unchanged comps are neither re-fetched nor re-scored And on a benchmark dataset of ≥50 comps, incremental recompute median duration is at least 40% faster than a full recompute
Debounced burst edits consolidate into a single recompute
Given the user performs multiple edits with ≤120ms gaps When edits cease for 120ms (debounce idle window) Then exactly one recomputation executes for the burst and includes the latest edit state And from the last edit to updated UI paint is ≤180ms at p95 And no more than one network request is issued per burst; superseded in-flight jobs are cancelled cleanly without UI flicker or errors
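The debounce semantics can be checked deterministically by replaying timestamped edits: a burst extends while gaps stay within the idle window, and one recompute fires idle_ms after the burst's last edit. The function below is an illustrative model of that rule, not the reactive service itself.

```python
def recompute_times(edit_times_ms, idle_ms=120):
    """Coalesce a sorted list of edit timestamps into debounced triggers.

    A recompute fires idle_ms after the last edit in a burst; edits
    arriving within idle_ms of the previous edit extend the same burst.
    """
    triggers = []
    for i, t in enumerate(edit_times_ms):
        is_last = i == len(edit_times_ms) - 1
        if is_last or edit_times_ms[i + 1] - t > idle_ms:
            triggers.append(t + idle_ms)  # burst ends at this edit
    return triggers
```

For edits at 0, 100, 200, and 500 ms, the first three coalesce into one recompute at 320 ms and the fourth fires at 620 ms, i.e., exactly one recompute per burst.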
Deterministic outputs for identical inputs
Given identical inputs (comps, weights, overrides) in any order and the same calcVersion When recomputation executes on any node and locale Then outputs (pricing, confidence, forecasts) are bit-identical including rounding and formatting, and include the same calcVersion and outputHash And 100 repeated executions produce identical outputs and identical outputHash values And results are invariant to local time zone and locale settings
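One common way to obtain a locale- and key-order-invariant outputHash is canonical JSON serialization before hashing; a sketch under the assumption that all values are already rounded to their defined precision so serialization is byte-identical across nodes:

```python
import hashlib
import json

def output_hash(result):
    """Stable SHA-256 over key-sorted, whitespace-free JSON of the output."""
    canonical = json.dumps(result, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```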
Compute lifecycle indicators: loading, errors, last-updated
Given a recompute is initiated When compute time exceeds 80ms Then a loading indicator appears within 50ms and is removed immediately upon completion; indicator has aria-live="polite" And if the compute fails, an inline error with user-friendly message, technical errorCode, and Retry appears; last-known-good values remain visible And on successful retry, the error clears and values update And on successful recompute, a Last updated timestamp (server time) is displayed, localized to user settings, and is not updated on failures
Lightweight API for Smart Comps recalculation
Given the UI submits POST /api/recalc with listingId, comps[], weights, overrides, and clientContext (payload ≤100KB) When the request is valid under normal load Then the API responds 200 with pricingRecommendation, confidence, forecasts[], calcVersion, inputHash, lastUpdatedTs within 200ms at p95 and 500ms at p99 And invalid payloads return 400 with a structured list of field errors; server errors return 5xx with errorCode; responses include Cache-Control: no-store And identical concurrent requests (same inputHash) are de-duplicated; the endpoint is backward compatible within a major calcVersion
Versioned calculation logging and export
Given any recompute attempt completes (success or failure) When logging is enabled Then a record is persisted with calcVersion, inputHash, redacted inputs snapshot, outputs snapshot (if success), duration, status, and errorCode (if failure) And records are queryable by listingId and date range and exportable to CSV and JSON; 10k-record export completes within 5 seconds And exported files include an integrity checksum; logs are retained ≥90 days and access is restricted to roles Agent, Broker-Owner, and Admin
Export & Shareable Summary
"As a listing agent, I want to share a clear comp summary with sellers showing how we reached the price so that they trust and align with the recommendation."
Description

Generate a shareable comp packet including selected comps, score breakdowns, pricing recommendation, sentiment highlights, and agent notes. Support PDF export and a secure web link with expiration, view tracking, and optional password. Include brokerage branding and listing details pulled from TourEcho. Ensure mobile‑friendly rendering and an accessible, seller‑friendly narrative that mirrors the transparency panel. Preserve a snapshot of the underlying data and model version at export time for consistency.

Acceptance Criteria
PDF Export Contains Required Elements and Branding
Given an agent has selected 1–20 comps and added agent notes in Smart Comps And brokerage branding and listing details exist in TourEcho When the agent clicks Export as PDF Then a PDF is generated within 10 seconds at the 95th percentile And the PDF includes: listing address and details from TourEcho, brokerage logo and contact block, only the selected comps with pinned status indicated, for each comp: address, beds/baths, style, proximity, recency, score breakdown with weights, and sentiment highlights, the pricing recommendation with rationale mirroring the transparency panel narrative, agent notes, export timestamp, model version, unique export ID, and page numbers And the PDF renders without missing fonts or layout overflow in Adobe Acrobat, Apple Preview, and Chrome built-in viewer
Secure Shareable Link with Expiration and Optional Password
Given an agent creates a shareable link and sets an expiration date/time with an optional password When the link is generated Then the agent receives a unique, unguessable URL with at least 128 bits of token entropy And recipients can view the summary prior to expiration without authentication if no password is set And recipients are prompted for the password when one is set; correct password grants access, incorrect attempts show a non-revealing error and do not disclose validity of the link And after the expiration time or when revoked by the agent, the link cannot be accessed and returns an "Expired or Revoked" message without leaking content And the page includes noindex,nofollow directives and is excluded from any internal sitemap
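A 128-bit token is easy to satisfy with a cryptographic random source; for example, in Python (the function name is illustrative):

```python
import secrets

def new_share_token():
    """Unguessable share-link token: 16 random bytes = 128 bits of entropy,
    encoded as a URL-safe string."""
    return secrets.token_urlsafe(16)
```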
View Tracking for Shareable Summary
Given a shareable link is active When a recipient opens the link Then the system records a view event with timestamp and anonymized session identifier And the agent dashboard displays total views, first viewed, and last viewed timestamps for the link And a unique viewers count increments once per session (based on the anonymized session identifier) within a 24-hour window And no recipient PII (name, email, phone, exact IP) is displayed to the agent And no view events are recorded after the link expires or is revoked
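One way to satisfy the anonymized-session requirement is a salted one-way hash bucketed by UTC day, which approximates the 24-hour uniqueness window and keeps raw IP and user-agent out of anything the agent can see. The function name, salt handling, and day-bucketing are assumptions, not spec:

```python
import hashlib
from datetime import datetime, timezone

SALT = "rotate-me-server-side"  # hypothetical secret, stored server-side


def anonymized_session_id(ip: str, user_agent: str, when: datetime) -> str:
    # Bucket by UTC day so a returning viewer increments the unique
    # count at most once per 24-hour window; only a truncated one-way
    # hash is persisted, never the raw IP or user agent.
    day = when.astimezone(timezone.utc).strftime("%Y-%m-%d")
    raw = f"{SALT}|{ip}|{user_agent}|{day}"
    return hashlib.sha256(raw.encode()).hexdigest()[:16]
```

A calendar-day bucket is a simplification of a true rolling 24-hour window; a production system might prefer a short-lived cookie or signed session token instead.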
Mobile-Friendly Web Summary Rendering
Given a recipient opens the shareable summary on a mobile device (320–428 px width) or tablet (768–1024 px width) When the page loads on a 4G connection (400 ms RTT, 1.6 Mbps down) Then Largest Contentful Paint is ≤ 2.5 seconds and Time to Interactive is ≤ 4 seconds at the 75th percentile And text is legible at ≥ 16 px base size, charts and tables are responsive with horizontal scroll on overflow, and tap targets are ≥ 44×44 px And no layout shift exceeds a Cumulative Layout Shift score of 0.1 during load And critical content (pricing recommendation, comp list, sentiment highlights) is visible without horizontal scrolling
Accessibility of PDF and Web Summary
Given accessibility testing is performed on the shareable web page and the exported PDF When evaluated against WCAG 2.1 AA Then the web page supports full keyboard navigation with visible focus, meaningful landmarks, proper heading hierarchy, and sufficient color contrast (≥ 4.5:1 for text) And non-text content (icons, charts) has accessible names/alt text, and interactive elements announce roles and states to screen readers And automated scans (axe-core) report zero critical violations and no more than two minor issues, none blocking primary tasks And the PDF is a tagged PDF with correct reading order, heading tags, table structure, alt text for non-text content, selectable text (not image-only), and passes PAC 2021 with no errors
Snapshot Consistency and Model Versioning
Given an export (PDF or shareable link) is created When Smart Comps data or model weights are modified afterward Then the exported PDF and the shareable web view continue to display the original snapshot data, pricing recommendation, and score breakdown unchanged And the summary displays the export timestamp and model version used And re-exporting creates a new snapshot with a new unique export ID and updated model version if applicable And the agent can verify consistency by comparing the PDF and the web link for the same export ID and observing identical numbers and narratives
Error Handling, Messaging, and Retry
Given a transient failure occurs during PDF generation or link creation When the export action is attempted Then the user sees a clear, non-technical error message with a Retry option and a link to contact support And the system retries idempotently up to 3 times before failing, without creating duplicate links or partial PDFs And partial or failed exports are not accessible via any link and are not counted in view tracking And attempting to access an expired, revoked, or password-protected link without proper credentials shows a friendly explanatory page that does not reveal private content
Data Freshness & Recency Alerts
"As a listing agent, I want to be alerted when new or updated sales affect my comp set so that my pricing stays current throughout the listing period."
Description

Continuously monitor comp data freshness and trigger background refreshes on a schedule (e.g., hourly for actives, nightly for solds). Indicate last refresh time and flag comps that fall outside the configured recency window. Notify users when new sales enter the candidate pool or when existing comps materially change, with one‑click review to accept updates. Allow per‑listing recency window configuration and maintain a changelog of comp pool updates.

Acceptance Criteria
Scheduled Background Refreshes for Actives (Hourly) and Solds (Nightly)
Given active comps exist for a listing When the top of the hour occurs Then a refresh job starts within 5 minutes and updates active comp data
Given sold comps exist for a listing When the nightly window at 02:00 local server time begins Then a refresh for sold comps completes by 02:30 with updated data
Given multiple listings When refresh jobs run Then no more than one refresh job per listing runs concurrently
Given a refresh job fails transiently When retries are enabled Then the job retries up to 3 times with exponential backoff and emits an error log on final failure
Given a refresh completes When the job finishes Then the system records and persists a refresh timestamp per data type (actives, solds)
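The retry behavior in these criteria (up to 3 retries with exponential backoff, error surfaced on final failure) might look like this minimal sketch; `run_with_retries` is a hypothetical helper, and the backoff base is an assumption:

```python
import time


def run_with_retries(job, max_retries=3, base_delay=1.0, sleep=time.sleep):
    # Runs a refresh job, retrying transient failures up to max_retries
    # times with exponential backoff (1s, 2s, 4s by default). On the
    # final failure the exception propagates so the caller can emit
    # the error log required by the acceptance criteria.
    for attempt in range(max_retries + 1):
        try:
            return job()
        except Exception:
            if attempt == max_retries:
                raise
            sleep(base_delay * (2 ** attempt))
```

Injecting `sleep` keeps the helper testable without real delays.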
UI Displays Last Refresh Time and Staleness Indicators
Given a listing detail page is opened When comp data loads Then the UI shows last refreshed timestamps for actives and solds in the user's timezone with an absolute (MMM DD, YYYY HH:mm) and relative (e.g., "5m ago") label
Given a refresh is in progress When the user is on the page Then a "Refreshing…" state appears and timestamps update within 60 seconds of completion
Given a comp's source data is older than the listing's configured recency window When rendering the comp list/table Then the comp row displays an "Out of window" badge and a warning icon with a tooltip showing the last updated date
Given no refresh has ever run for the listing When rendering the page Then the UI shows a "Not yet refreshed" placeholder
Per‑Listing Recency Window Configuration and Enforcement
Given a listing settings panel When the user opens Recency Settings Then they can set separate recency windows (in days) for Actives and Solds within the range 1–365 days
Given the user enters invalid values (non‑numeric or out of range) When saving Then validation blocks the save and displays inline error messages
Given the user saves Actives=7 and Solds=90 When returning to the listing Then the configuration persists and applies only to that listing
Given new recency windows are saved When the comp pool is re‑evaluated Then comp flags ("in window"/"out of window") and counts update within 2 minutes
Notifications for New Sales Entering Candidate Pool
Given the nightly solds refresh completes When a new sale meets proximity, bed/bath, and style criteria and falls within the listing's sold recency window Then the user receives an in‑app notification for that listing within 5 minutes
Given email alerts are enabled When such a new sale is detected Then an email is sent containing count and top details (address, price, closing date) with a secure deep link to review
Given multiple new sales in a single refresh run When notifications are generated Then they are batched into a single notification per listing per run
Given the user clicks "Review" When the deep link opens Then the app shows a pre‑filtered review list of only the new candidate sales for that listing
Notifications for Material Changes to Existing Comps
Given an existing comp in the pool changes in source data When the change meets materiality thresholds (status change OR price delta ≥ 1% OR beds/baths/sqft changed) Then an in‑app notification is generated within 5 minutes
Given multiple material changes for the same comp in one refresh When notifying Then the notification summarizes all fields changed and shows a diff on review
Given the user has disabled material change alerts for a listing When changes occur Then no notification is sent but changes are reflected after refresh
Given a material change is detected When the user opens the notification Then the comp detail shows before/after values and an "Accept update" action
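The materiality thresholds above translate directly into a predicate. Field names and the dict shape are illustrative assumptions:

```python
def is_material_change(old: dict, new: dict, price_delta_pct: float = 0.01) -> bool:
    # Material per the criteria: status change, OR price delta >= 1%,
    # OR any change to beds/baths/sqft.
    if old.get("status") != new.get("status"):
        return True
    old_price, new_price = old.get("price"), new.get("price")
    if old_price and new_price and abs(new_price - old_price) / old_price >= price_delta_pct:
        return True
    return any(old.get(f) != new.get(f) for f in ("beds", "baths", "sqft"))
```

The 1% threshold is passed as a parameter so a per-listing or per-org override stays trivial.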
One‑Click Review and Acceptance of Comp Updates
Given there are pending new or changed comps When the user opens the Review Updates panel Then items are grouped by type (New, Changed, Out of Window) with counts
Given the user selects one or more items When they click "Accept" Then the comp pool updates immediately and the pricing forecast recalculates within 3 seconds
Given the user chooses "Ignore for 24h" When they confirm Then the item is snoozed and will not re‑notify until the snooze expires or new changes occur
Given updates are accepted When recalculation completes Then a success confirmation appears and the review count decreases accordingly
Append‑Only Changelog of Comp Pool Updates
Given any comp is added, updated, removed, or reclassified by system or user When the event occurs Then an append‑only log entry is created capturing listing ID, comp ID, event type, actor, timestamp, and field diffs (old→new)
Given the changelog exists When the user opens History Then they can filter by date range, event type, and actor and export the results to CSV
Given an entry was created When a user attempts to edit or delete it Then the system prevents modification and shows "Changelog is immutable"
Given an organization retention policy of 365 days When entries exceed retention Then they are automatically purged and the purge action is itself logged

Fix vs Drop

Toggle top objections (e.g., carpet, lighting) as “resolved” to simulate the combined effect of minor fixes plus a price move. See which path delivers more showings and faster offers for less, guiding smarter spend before you cut price.

Requirements

Standardized Objection Catalog & NLP Mapping
"As a listing agent, I want free-text feedback automatically grouped into clear objection categories so that I can toggle potential fixes without parsing raw comments."
Description

Create and maintain a normalized taxonomy of top buyer objections (e.g., carpet, lighting, paint, layout) at room/area granularity and map incoming QR feedback to these categories using NLP with confidence scoring. Persist structured objection data to support toggling, trend analysis, and cross-listing benchmarks. Provide an admin interface for category management (merge/split/rename), localization support, and versioning. Ensure backward-compatible schema migrations and real-time processing so newly captured feedback immediately populates the Fix vs Drop workspace.

Acceptance Criteria
NLP Mapping to Standardized Objection Categories with Confidence
- Given feedback text that mentions one or more issues and a room/area, When processed by the NLP pipeline, Then it assigns up to 5 category_ids at room/area granularity with associated confidences in [0,1] and includes model_version and taxonomy_version metadata.
- Given a category candidate with confidence >= 0.70, When results are returned, Then it is marked as primary; candidates with 0.30 <= confidence < 0.70 are marked as secondary; candidates < 0.30 are not returned.
- Given identical input under the same model_version and taxonomy_version, When processed multiple times, Then the assigned categories, rooms, and confidences are identical.
- Given input that yields no candidate >= 0.30, When processed, Then the record is stored as category_id = 'UNCATEGORIZED' with max_confidence reported and is excluded from Fix vs Drop toggles by default.
- Given feedback that mentions multiple rooms, When processed, Then each category assignment is associated with the correct room/area label (e.g., "kitchen", "primary_bath").
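A minimal sketch of the confidence bucketing (primary ≥ 0.70, secondary in [0.30, 0.70), cap of 5, UNCATEGORIZED fallback). `bucket_candidates` is a hypothetical helper operating on `(category_id, confidence)` pairs; the real pipeline would also carry room labels and version metadata:

```python
def bucket_candidates(candidates, primary=0.70, secondary=0.30):
    # candidates: list of (category_id, confidence) pairs from the model.
    # Keep primaries (>= 0.70) and secondaries ([0.30, 0.70)), drop the
    # rest, and cap the combined result at 5 assignments, best first.
    primaries = [c for c in candidates if c[1] >= primary]
    secondaries = [c for c in candidates if secondary <= c[1] < primary]
    kept = sorted(primaries + secondaries, key=lambda c: -c[1])[:5]
    if not kept:
        # No candidate cleared 0.30: store UNCATEGORIZED with the max
        # observed confidence, excluded from Fix vs Drop by default.
        return [("UNCATEGORIZED", max((c[1] for c in candidates), default=0.0))]
    return kept
```

Because sorting and thresholding are deterministic, repeated runs over identical input yield identical output, as the determinism criterion requires.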
Real-Time Population of Fix vs Drop Workspace
- Given a new QR feedback submission, When it is saved, Then the mapped objections appear in the Fix vs Drop workspace within 5 seconds at p95 and 10 seconds at p99 end-to-end.
- Given a newly mapped objection, When the Fix vs Drop workspace is open, Then counts and objection lists update without manual refresh.
- Given duplicate submissions with the same feedback_id, When processed, Then exactly one record is persisted and surfaced.
- Given a transient processing failure, When it occurs, Then the system retries at least 3 times within 60 seconds and surfaces a non-blocking alert in logs; the UI shows a "Processing…" state.
Persistence and Backward-Compatible Schema Migrations
- Given the structured objection schema, When persisting records, Then the following fields are stored and non-null: listing_id, feedback_id, room_area, category_id, confidence, language, model_version, taxonomy_version, created_at.
- Given a schema migration, When deployed, Then existing APIs remain backward-compatible (no required field removals; default values provided) and uptime remains >= 99.9% during the deployment window.
- Given a migration that changes category structure, When completed, Then no data loss occurs (row counts before vs after within ±0.1%) and query results for historical time ranges remain consistent for the same taxonomy_version.
- Given a failed migration, When detected, Then an automated rollback completes within 10 minutes and restores prior behavior.
Admin Category Management (Merge/Split/Rename) with Remapping
- Given an admin with appropriate permissions, When merging categories B into A, Then future mappings use A immediately and 99% of historical records are remapped within 30 minutes without downtime; the remaining 1% complete within 2 hours.
- Given a split of category A into A1 and A2 with provided reclassification rules, When executed, Then historical records are reprocessed using those rules and any ambiguous records are flagged for manual review with a count report.
- Given a rename of category A to "New Label", When saved, Then category_id remains unchanged and localized display names update across UI and APIs within 1 minute.
- Given any admin change, When committed, Then an audit log entry records who, what, when, reason, and estimated impact counts; and a "Preview impact" report is available before commit.
Localization and Language Detection for Feedback Mapping
- Given feedback in English or Spanish, When processed, Then language detection identifies the language with >= 95% accuracy and maps to the same category_ids as equivalent English inputs.
- Given unsupported languages, When detected, Then the system falls back to English models and marks language = "und" without failing the pipeline.
- Given localized UIs for supported locales (en-US, es-ES), When displaying categories, Then labels appear in the user's locale and fall back to English if a translation is missing.
Trend Analysis and Cross-Listing Benchmarks Support
- Given persisted structured objection data, When querying the analytics endpoints, Then daily and weekly aggregates per listing and per category return within 1 second for up to 100 listings and 12 weeks.
- Given raw and aggregated data for a date range, When compared, Then aggregated counts equal the deduplicated sum of underlying records for the same taxonomy_version.
- Given multiple listings in the same market, When requesting cross-listing benchmarks, Then the API returns percentile ranks (p25, p50, p75) of objection prevalence per category.
- Given Fix vs Drop workspace toggling needs, When aggregations are requested, Then they include category-level counts by room/area.
Versioned Taxonomy with Historical Consistency
- Given taxonomy updates (semantic version bump), When applied, Then new mappings include taxonomy_version = latest while existing records retain their original taxonomy_version.
- Given a client query that specifies taxonomy_version, When executed, Then results are filtered and labeled consistently to that version, and category display names reflect that version.
- Given a decision to reprocess historical records to the latest taxonomy, When initiated, Then a backfill job reprocesses at least 10k records/minute and records both prior_taxonomy_version and new_taxonomy_version for auditability.
- Given an API consumer using prior category_ids, When the taxonomy changes, Then the API continues to accept prior IDs and maps them to current equivalents via a compatibility layer until the deprecation EOL date is reached.
Fix Toggle and Price Drop Controls
"As a listing agent, I want to toggle fixes and adjust price in one place so that I can instantly see how different actions change buyer interest and time-to-offer."
Description

Build an interactive UI that lets users mark specific objection categories (and their room-level instances) as resolved and adjust a listing price change via slider or direct input. Support multi-select toggles, default states, reset/undo, keyboard shortcuts, and mobile responsiveness. Recalculate predicted outcomes in real time as controls change, with loading states, accessibility compliance, and state persistence per listing and per user. Guard against conflicting selections and validate price boundaries based on MLS constraints.

Acceptance Criteria
Multi-Select Objection Resolve Toggle
Given a listing with multiple objection categories, when the user selects any subset as Resolved, then each selected category displays a checked state and a “Resolved” indicator within 100 ms. Given no categories are selected, when the UI first loads, then all category toggles are Unresolved by default. Given one or more categories are selected, when the user deselects a category, then its state reverts to Unresolved and the indicator is removed within 100 ms. Given multiple categories are toggled in rapid succession, when interactions stop for 300 ms, then the UI reflects the final states without flicker and the resolved count equals the number selected. Given conflicting inputs are attempted on the same category, when the state is applied, then the system resolves to the last user action and only one state is shown.
Room-Level Toggle Sync with Category
Given a category has room-level instances, when all room-level instances are set to Resolved, then the category toggle displays a Checked state. Given a category has room-level instances, when at least one instance is Unresolved and at least one is Resolved, then the category toggle displays an Indeterminate state. Given a category toggle is set to Resolved, when the user applies it, then all its room-level instances are set to Resolved. Given a category toggle is set to Unresolved, when the user applies it, then all its room-level instances are set to Unresolved. Given a category toggle is Indeterminate, when the user clicks it once, then all room-level instances become Resolved; when the user clicks it again, then all room-level instances become Unresolved.
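The category/room sync rules above reduce to a small pure function: the category state is derived entirely from its room-level instances. State names and the boolean representation are illustrative:

```python
def category_state(instance_states):
    # instance_states: list of booleans, True = room instance resolved.
    # All resolved -> "resolved"; none -> "unresolved"; mixed ->
    # "indeterminate", matching the tri-state toggle rules.
    if not instance_states:
        return "unresolved"
    resolved = sum(1 for s in instance_states if s)
    if resolved == len(instance_states):
        return "resolved"
    if resolved == 0:
        return "unresolved"
    return "indeterminate"
```

Driving the toggle's display from this derived state (rather than storing category state separately) makes the checked/indeterminate sync automatic.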
Price Adjustment via Slider and Direct Input
Given MLS constraints define min, max, and step, when the user drags the price slider, then the value snaps to the nearest valid step within [min, max]. Given a user types a value into the price input, when they press Enter or the field loses focus, then the value is validated, clamped to [min, max], and rounded to the nearest step; an inline error appears if the typed value cannot be coerced. Given both slider and input are present, when the user changes either, then the other reflects the new validated value immediately and the displayed amount is formatted per listing locale. Given an invalid character string is entered, when validation runs, then the field is highlighted, an accessible error message is announced, and the previous valid value is retained until correction. Given the user clears the input, when the field loses focus, then it reverts to the last valid value without triggering a recalculation.
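Clamping and step-snapping as specified can be sketched as below, assuming steps are measured from the MLS minimum (the criteria do not state the step origin explicitly):

```python
def snap_price(value: float, min_price: float, max_price: float, step: float) -> float:
    # Clamp to the MLS-permitted [min, max] band, then round to the
    # nearest valid step measured from the minimum.
    clamped = max(min_price, min(max_price, value))
    steps = round((clamped - min_price) / step)
    return min(max_price, min_price + steps * step)
```

Both the slider and the direct input would funnel through the same function so the two controls can never disagree on the validated value.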
Real-Time Recalculation with Loading State
Given any control value changes (toggle or price), when the change is committed, then a loading indicator appears within 100 ms and is shown in a non-blocking region. Given multiple changes occur within 300 ms, when debouncing is applied, then only one recalculation request is sent and it uses the latest state. Given a recalculation request succeeds, when results arrive, then KPIs and charts update within 150 ms, the loading indicator disappears, and an Updated timestamp is refreshed. Given a recalculation request fails, when an error occurs, then an error banner is displayed with a Retry action, the loading indicator is removed, and the prior KPIs remain visible. Given a new change is made while a recalculation is in flight, when responses return out of order, then stale results are discarded and the UI reflects the latest completed state.
Accessibility and Keyboard Support
Given the UI is navigated via keyboard, when tabbing through controls, then focus order is logical and a visible focus indicator appears on each interactive element. Given screen reader users, when controls are focused, then each toggle (including tri‑state), slider, text input, Reset, and Undo exposes correct roles, names, states, and ARIA attributes. Given the price slider is focused, when ArrowLeft/ArrowRight are pressed, then the value decreases/increases by one step; when Shift+Arrow is pressed, then it adjusts by 10 steps; when Home/End is pressed, then it jumps to min/max. Given objection toggles are focused, when Space or Enter is pressed, then the toggle activates; for tri‑state category toggles, the state cycles Indeterminate → Resolved → Unresolved. Given keyboard shortcuts are enabled, when Alt+R is pressed, then controls reset to defaults; when Alt+Z is pressed, then the last change is undone; when Alt+P is pressed, focus moves to the price input; when Alt+/ is pressed, a shortcuts help panel appears. Given WCAG 2.2 AA compliance is required, when audited, then color contrast, target sizes, focus visibility, and error messaging meet criteria.
State Persistence per Listing and User
Given a signed-in user adjusts toggles and price for Listing A, when they navigate away and return or refresh, then the same selections and price are restored for that user and listing. Given the same user opens Listing B, when they view it, then Listing B shows its own last-saved state without cross-contamination from Listing A. Given a different user views Listing A, when they open it, then they see their own last-saved state (or defaults if none) independent of other users. Given Reset is invoked, when the page is reloaded, then the defaults are shown and the persisted state reflects the reset. Given data storage is temporarily unavailable, when persistence fails, then the user is notified with a non-blocking warning and a local fallback state is used until persistence is restored.
Mobile Responsiveness and Touch
Given a device viewport width ≤ 768 px, when the controls render, then layout stacks vertically, text remains readable without zoom, and no horizontal scrolling occurs. Given a touch device, when the user taps objection toggles, then the state changes with visual feedback within 100 ms and hit targets are at least 44×44 px. Given the user drags the price slider with a finger, when they release, then the value snaps to the nearest valid step and the slider thumb is large enough for touch interaction. Given the on-screen keyboard opens for the price input, when the field is focused, then content does not jump and the input remains fully visible. Given a low-bandwidth mobile connection, when recalculation is triggered, then a loading state is shown and interactions remain responsive with no main-thread blocking over 100 ms.
Impact Prediction Model Integration
"As a data-informed agent, I want credible predictions of outcomes for different fix and price combinations so that I can choose the path most likely to produce faster, better offers."
Description

Integrate a prediction service that estimates incremental changes in weekly showings, offer probability, and time-to-offer given selected fix toggles and price adjustments. Combine TourEcho historical data, listing features, market comps, and time-on-market signals to model interaction effects between fixes and pricing. Provide confidence intervals, fallback heuristics when data is sparse, model versioning, and monitoring for drift and calibration. Expose a low-latency API with request/response schemas and feature flags for controlled rollout.

Acceptance Criteria
Low-latency Prediction API SLA
Given a valid request with <= 20 scenarios and payload size <= 10 KB When POST /v1/predict is invoked from an edge region under steady load Then p95 end-to-end latency <= 300 ms and p99 <= 600 ms measured by synthetic probes over 24 hours And success rate (HTTP 2xx) >= 99.9% over the last 30 days excluding client 4xx And each 2xx response includes correlationId and computationTimeMs <= 250
Response Schema, Confidence Intervals, and Versioning
Given a valid request specifying listingId, priceAdjustments, and fixToggles When a prediction is returned Then the response contains for each metric (weeklyShowingsDelta, offerProbabilityDelta, timeToOfferDeltaDays): value, ciLower, ciUpper, ciLevel And default ciLevel = 0.80; supported values are {0.50, 0.80, 0.90, 0.95}; invalid ciLevel yields HTTP 400 with code TE400_CI_LEVEL And the response includes modelVersion (semver), schemaVersion, and featureFlags[] And numeric values use units and rounding: showings to 1 decimal, percentages to 2 decimals, days to 2 decimals
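The per-metric rounding rules and ciLevel validation could be encoded as below. The error string reuses the spec's TE400_CI_LEVEL code, but `metric_entry` and its shape are an assumption, not the actual API implementation:

```python
SUPPORTED_CI = {0.50, 0.80, 0.90, 0.95}

ROUNDING = {
    "weeklyShowingsDelta": 1,   # showings to 1 decimal
    "offerProbabilityDelta": 2, # percentages to 2 decimals
    "timeToOfferDeltaDays": 2,  # days to 2 decimals
}


def metric_entry(metric: str, value: float, ci_lower: float,
                 ci_upper: float, ci_level: float = 0.80) -> dict:
    # Builds one per-metric object of the response schema; an
    # unsupported ciLevel maps to the TE400_CI_LEVEL error case.
    if ci_level not in SUPPORTED_CI:
        raise ValueError("TE400_CI_LEVEL")
    places = ROUNDING[metric]
    def r(v): return round(v, places)
    return {"value": r(value), "ciLower": r(ci_lower),
            "ciUpper": r(ci_upper), "ciLevel": ci_level}
```

modelVersion, schemaVersion, and featureFlags[] would be attached at the response envelope level, not per metric.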
Sparse-Data Fallback Heuristics
Given compsCount < 8 within the last 90 days in the micro-market OR feedbackForms < 5 for the listing When a prediction is requested Then the service sets fallbackStrategy = 'heuristic_v1' and lowConfidence = true And each metric's confidence interval width (ciUpper - ciLower) meets minimums: showings >= 3.0, offerProbability >= 0.10, timeToOffer >= 5.0 days And heuristic outputs meet the same latency SLA and include dataSparsityReason
Interaction Effects Decomposition
Given a base listing L and two toggles A (fix) and B (price change) When predictions are requested for scenarios: base, A, B, and A+B in a single batch Then the response for A+B includes decomposition.additive ~= delta(A) + delta(B) within tolerance 1e-6 and decomposition.interaction such that delta(A+B) = additive + interaction within tolerance 1e-6 And monotonicity: for price change p < 0, timeToOfferDeltaDays <= 0 and offerProbabilityDelta >= 0; for p > 0, timeToOfferDeltaDays >= 0 and offerProbabilityDelta <= 0
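The additive/interaction decomposition is arithmetic on the scenario deltas: the interaction term is defined as the residual, so the identity delta(A+B) = additive + interaction holds exactly (the 1e-6 tolerance only absorbs floating-point noise). A minimal sketch:

```python
def decompose(delta_a: float, delta_b: float, delta_ab: float) -> dict:
    # delta_a, delta_b: metric deltas for scenarios A and B alone;
    # delta_ab: delta for the combined A+B scenario.
    additive = delta_a + delta_b
    return {"additive": additive, "interaction": delta_ab - additive}
```

A negative interaction means the fix and the price change partly overlap in effect; a positive one means they reinforce each other.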
Feature-Flagged Controlled Rollout
Given a tenant not in the rollout cohort for flag impactModelV2 When they call the API Then responses are served by baseline modelVersion v1.* and featureFlags includes impactModelV2 = false And when the tenant is added to the cohort, within 5 minutes subsequent calls return modelVersion v2.* and impactModelV2 = true And in non-prod, header X-TE-Flag-ImpactModelV2 = true enables override for allowlisted users; non-allowlisted attempts return 403 TE403_FLAG_FORBIDDEN
Drift and Calibration Monitoring
Given at least 1000 predictions collected in the last 30 days When daily monitoring jobs execute Then Population Stability Index (PSI) across the top 10 features < 0.2; PSI >= 0.2 triggers a warning alert within 15 minutes; PSI >= 0.3 triggers automatic rollback to the previous stable model within 30 minutes And for offerProbabilityDelta, Brier score <= 0.19; calibration slope in [0.9, 1.1] and intercept in [-0.05, 0.05] And empirical coverage of 80% confidence intervals is between 72% and 88%; out-of-bounds triggers a calibration alert
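A standard discrete-bucket PSI computation, consistent with the 0.2 warning and 0.3 rollback thresholds above; the bucketing strategy for each feature is left to the implementation, and the epsilon guard is an assumption to avoid log-of-zero on empty buckets:

```python
import math


def psi(expected, actual, eps=1e-6):
    # expected, actual: bucket proportions (same bucketing) for the
    # training-time and recent feature distributions.
    # PSI = sum over buckets of (a - e) * ln(a / e).
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total
```

The monitoring job would compute this per feature for the top 10 features and compare the worst value against the 0.2/0.3 thresholds.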
Request Validation and Error Handling
Given a request missing required fields or containing invalid types/ranges When the API is called Then it returns HTTP 400 with structured errors [{code, field, message}] and correlationId; unknown fields are ignored and listed in warnings[] And semantic violations (e.g., price change outside [-20%, +20%], unsupported fix key) return 422 TE422_DOMAIN And model unavailability returns 503 TE503_MODEL_UNAVAILABLE with Retry-After; requests with Idempotency-Key are deduplicated server-side
Cost & Timeline Inputs with ROI Calculator
"As a seller’s agent, I want to input realistic fix costs and timelines so that I can compare their ROI against a price reduction and recommend the most efficient path."
Description

Allow agents to enter estimated cost ranges and completion timelines for each fix (with optional vendor presets). Calculate and display ROI metrics such as cost per additional showing, cost per day saved, net value versus equivalent price drop, and expected break-even period. Handle incomplete or uncertain inputs, show sensitivity bounds, and include disclaimers. Persist assumptions per listing and expose them to scenario comparison and shareable reports.

Acceptance Criteria
Enter Cost Range and Timeline with Vendor Preset
Given an agent is editing a fix for a listing When the agent selects a vendor preset for that fix Then cost_min and cost_max fields autofill with the preset range And duration_min and duration_max fields autofill with the preset range And the agent can override any autofilled value before saving And validation blocks save if any value is negative or if min > max for cost or duration And on successful save, a confirmation message indicates "Assumptions saved"
ROI Metrics Computation from Fix Inputs
Given a fix has saved cost and duration inputs and the scenario has predicted additional showings and days saved When the agent opens the ROI panel Then the system displays: cost per additional showing, cost per day saved, net value versus equivalent price drop, and expected break-even period And each metric shows numeric values with units (currency, days, count) And cost per day saved displays "N/A" if predicted days saved ≤ 0 And values update within the ROI panel immediately after any input change and on scenario switch
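The two per-unit ROI metrics and their "N/A" guard can be sketched as follows; names are illustrative, and the net-value and break-even metrics are omitted for brevity:

```python
def roi_metrics(cost: float, extra_showings: float, days_saved: float) -> dict:
    # Cost per additional showing and cost per day saved; "N/A" when the
    # predicted benefit is not positive, per the acceptance criteria.
    per_showing = cost / extra_showings if extra_showings > 0 else "N/A"
    per_day = cost / days_saved if days_saved > 0 else "N/A"
    return {"cost_per_showing": per_showing, "cost_per_day_saved": per_day}
```

For range inputs, the same function would run at the cost midpoint and at both bounds to produce the sensitivity range described in the next criterion.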
Handling Incomplete or Uncertain Inputs
Given a fix has missing cost_min/cost_max or duration_min/duration_max When the agent attempts to save Then the system allows saving partial inputs And any ROI metric requiring missing inputs shows "N/A" with an info tooltip explaining what is missing And if a single value is entered for cost or duration, it is treated as both min and max And the UI highlights required fields to complete full ROI calculations
Sensitivity Bounds and Assumption Display
Given cost and/or duration are entered as ranges When ROI metrics are computed Then each displayed metric shows a midpoint estimate and a min–max range derived from the input bounds And a tooltip explains that ranges reflect input bounds and assumptions And ranges and midpoint values recompute and refresh immediately upon any input change
Disclaimers Visibility and Content
Given the agent views the ROI panel or generates a shareable report When the content renders Then a disclaimer is visible near the ROI metrics stating: "Estimates only. Not financial advice. Actual outcomes may vary." And the disclaimer includes the last-saved timestamp and listing identifier And the disclaimer appears in exported/shareable outputs without alteration
Persist Assumptions Per Listing
Given an agent saves cost and duration assumptions for a fix on a listing When the agent reloads the listing, switches devices, or returns later while logged in Then the previously saved assumptions auto-load for that listing and fix And edits overwrite the prior assumptions with an updated timestamp And assumptions saved for one listing do not affect other listings
Expose Assumptions to Scenario Comparison and Reports
Given the agent opens Fix vs Drop scenario comparison or creates a shareable report When scenarios are computed Then ROI metrics and the underlying assumptions for each fix are included in the comparison view And the shareable report renders the same metrics and assumptions snapshot seen by the agent at generation time And modifying assumptions updates the comparison view immediately and requires regenerating the report to reflect changes
Scenario Comparison, Save, and Share
"As an agent, I want to save and share side-by-side scenarios with my seller so that we can align on a plan using clear, client-ready evidence."
Description

Provide side-by-side scenario cards that summarize assumptions (selected fixes, price change, costs, timelines) and outcomes (showings, offer likelihood, time-to-offer, ROI, confidence). Enable saving, naming, duplicating, and restoring scenarios per listing. Generate secure shareable links and branded PDFs suitable for clients, with access tracking and expiration controls. Include timestamps, MLS ID, and agent/broker branding in exports for credibility and record-keeping.

Acceptance Criteria
Compare Scenario Cards: Assumptions and Outcomes
Given a listing with at least two saved scenarios, when the user opens the Scenario Comparison view, then at least two scenario cards display side-by-side and each card shows Assumptions (Selected fixes, Price change, Estimated costs, Timeline) and Outcomes (Projected showings, Offer likelihood, Time-to-offer, ROI, Confidence). Given the user edits an assumption in Scenario A, when the change is saved, then Scenario A outcomes recalculate and display and Scenario B values remain unchanged. Given there is only one saved scenario, when the user opens the Scenario Comparison view, then a prompt to add or duplicate a scenario is shown and the single scenario card is displayed.
Save and Name Scenarios Per Listing
Given a listing, when the user creates a new scenario and enters a non-empty name, then Save is enabled and the scenario can be saved. Given Save is clicked, then the scenario persists to the listing and appears in the scenario list after page reload. Given multiple listings, when viewing Listing A, then only scenarios created for Listing A are listed and scenarios from other listings are not shown. Given two scenarios share the same name on a listing, when listed, then both are displayed with distinct timestamps to differentiate them.
Duplicate and Restore Scenario
Given an existing scenario, when the user selects Duplicate, then a new scenario is created with the same assumptions, a default name 'Copy of {Original Name}', and outcomes are recalculated for the copy. Given a scenario in the list, when the user selects Restore, then the scenario's assumptions become the current working state and the UI indicates the active scenario. Given the user restores a scenario, when returning to the comparison view, then the restored scenario appears with its updated timestamp.
Secure Shareable Link with Expiration and Revocation
Given a saved scenario, when the user generates a shareable link, then the link uses HTTPS and contains an unguessable token of at least 32 characters with no PII. Given an expiration date/time is set, when current time is after expiration, then accessing the link returns Access Denied and the attempt is logged. Given a share link exists, when the owner revokes it, then the link immediately becomes invalid and returns Access Denied on access. Given a new share link is regenerated for the same scenario, then any previously generated links for that scenario are invalidated.
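The unguessable-token and expiration/revocation checks above can be sketched with the standard library; the base URL and record shape here are illustrative assumptions. `secrets.token_urlsafe(32)` produces roughly 43 URL-safe characters from 32 random bytes, comfortably above the 32-character minimum and containing no PII.

```python
import secrets
import time

def make_share_link(base_url="https://example.test/share"):
    # base_url is a placeholder; the token carries no PII.
    token = secrets.token_urlsafe(32)
    return token, f"{base_url}/{token}"

def link_active(record, now=None):
    """record: dict with 'revoked' (bool) and 'expires_at' (epoch seconds or None)."""
    now = time.time() if now is None else now
    if record["revoked"]:
        return False  # revocation wins immediately
    return record["expires_at"] is None or now < record["expires_at"]
```

Regeneration would simply mark all prior records for the scenario as revoked before issuing a new token.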
Branded PDF Export with Required Metadata
Given a saved scenario, when the user exports to PDF, then the PDF includes agent/broker branding (logo and name), agent contact, MLS ID, export timestamp (UTC), scenario name, listing identifier, assumptions (selected fixes, price change, costs, timeline), and outcomes (projected showings, offer likelihood, time-to-offer, ROI, confidence). Given the user exports to PDF from a branded brokerage account, then the brokerage branding appears in the header or cover of the PDF. Given the PDF is opened, then text is selectable and numeric values match those shown in the app at the moment of export.
Access Tracking for Shared Views and Downloads
Given a share link is opened, when the page loads, then a view event is recorded with timestamp, scenario reference, and user-agent; if viewer identity is available via authenticated portal, it is associated. Given the PDF is downloaded via the share page, when the download starts, then a download event is recorded with timestamp for that scenario. Given the owner opens Share Analytics for the scenario, then total views, total downloads, and last accessed timestamp are displayed.
Confidence and ROI Metrics Display and Update
Given a scenario is visible in comparison, when the cards render, then each card displays ROI (%) and Confidence (%) with values between 0 and 100 and includes a tooltip that describes each metric. Given the user modifies any assumption that affects projections, when the scenario is saved, then ROI and Confidence recalculate and update on the card. Given ROI or Confidence cannot be computed, when the card renders, then an 'N/A' indicator is displayed with a tooltip explaining why.
Explainability, Warnings, and Guardrails
"As a cautious agent, I want transparency about why a scenario performs a certain way and where uncertainty lies so that I can set appropriate expectations with my client."
Description

Surface the primary drivers behind predicted changes (e.g., price elasticity, fix impact learned from comps), indicate data recency and confidence levels, and warn when predictions rely on sparse data or out-of-range inputs. Provide guardrails like minimum/maximum allowable price changes, fix feasibility flags, and sensitivity toggles. Log assumptions and model version used for auditability and include contextual disclaimers to set expectations.

Acceptance Criteria
Driver Breakdown Explains Predicted Changes
- Given a listing where the user toggles one or more fixes and/or enters a price change on Fix vs Drop, When the prediction updates, Then the system displays a Drivers panel with the top 3–7 drivers ranked by absolute contribution to the predicted change.
- Each driver shows its contribution as: (a) absolute delta in expected showings/week, (b) delta in expected days-to-offer, and (c) percent of total effect.
- The sum of displayed driver percentages is within ±5% of the total predicted effect.
- Each driver includes a source label (e.g., price elasticity, fix impact from comps, seasonality).
Data Recency & Confidence Indicators
- Given prediction results are shown, Then the UI shows the model version (name + hash), last training date, and data windows used (comps and feedback) with recency in days.
- Each driver displays a confidence badge (High/Med/Low) and a 95% confidence interval for its contribution.
- An overall 95% confidence band is shown for the headline prediction (showings/week and days-to-offer).
- If any primary data source's recency exceeds 90 days, a visible "Stale data" indicator appears with a tooltip explaining the impact.
Sparse Data & Out-of-Range Warnings
- Given the user runs a simulation, When the available comparable listings count is < 5 OR unique at-the-door feedback entries are < 10, Then a "Sparse data" warning banner appears and the event is recorded in the audit log.
- When any input value is outside the model’s 1st–99th training percentiles, Then an "Out-of-range input" warning appears naming the fields and showing the nearest allowable range.
- Warnings persist until inputs change or are dismissed, and are included in exported/printed reports.
Price Change Guardrails
- Given the price input control is visible, Then the allowed range defaults to -15% to +10% of the current list price and is enforced client- and server-side.
- When a value outside this range is entered, Then the Apply action is disabled and an inline message explains the allowed range with a market-rationale tooltip.
- Admins can override guardrails per market; changes are versioned and audited.
Fix Feasibility Flags & Cost Bands
- Given objections extracted for the listing (e.g., carpet, lighting, layout), Then each fix toggle displays a feasibility flag: Likely, Conditional, or Not Feasible.
- Each fix displays an estimated cost range (P50 and P90) and estimated downtime in days, with a last-updated date.
- When a Not Feasible fix is toggled on, Then a confirmation dialog explains the constraints and requires explicit confirmation to include it in the simulation.
Sensitivity Toggles & What-If Controls
- Given the prediction panel is visible, Then the user can adjust sensitivity for price elasticity and fix-impact multipliers within ±20% of defaults.
- When sensitivity is changed, Then the headline prediction, driver contributions, and confidence bands recompute within 500 ms on average and 1 s at p95.
- A Reset to Defaults control restores baseline parameters.
- Low/Med/High presets match -20%/0%/+20% multipliers respectively.
Auditability & Disclaimers
- Given any simulation run (including sensitivity changes), Then the system logs: timestamp, user ID, listing ID, run ID, model name and version, input parameters (price delta, fix toggles, sensitivities), data snapshot IDs, warnings present, and disclaimer version.
- Audit records are immutable, searchable by listing and date range, and exportable to CSV.
- A contextual disclaimer appears adjacent to predictions, requires a one-time "Understood" acknowledgement per user per listing, and is included on share/print artifacts.
- If warnings exist, the disclaimer expands to note elevated uncertainty.
Permissions, Activity Log, and Compliance
"As a broker-owner, I want controlled access and a clear audit trail of decisions so that our team stays compliant and we can review how pricing and fix recommendations were made."
Description

Implement role-based access so only listing-side users and approved broker roles can create, view, or share scenarios. Maintain an immutable activity log of scenario creation, edits, shares, and exports with user, timestamp, and before/after details. Ensure PII minimization in exported artifacts and provide data retention settings aligned with brokerage compliance policies.

Acceptance Criteria
RBAC: Scenario Access Control for Fix vs Drop
- Given a user without a listing-side or approved broker role attempts to create a Fix vs Drop scenario for a listing, When they submit the creation request, Then the system returns 403 Forbidden and no scenario is created.
- Given a user with a listing-side agent role or an approved broker role assigned to the listing, When they create a Fix vs Drop scenario, Then the scenario is created and is visible only to users with those roles on the same listing.
- Given a user with an approved role not assigned to the listing, When they attempt to view, edit, or share the scenario, Then the system returns 403 Forbidden and no scenario metadata is revealed.
- Given an approved user attempts to share a scenario to a recipient outside listing-side/approved broker roles, When the share is submitted, Then the action is blocked with a validation error and no share link or artifact is produced.
Audit Trail: Immutable Activity Log for Scenario Lifecycle
- Given any scenario event of type create, edit, share, or export completes successfully, When the operation finishes, Then exactly one audit entry is appended containing: event_type, scenario_id, user_id, user_role, timestamp (UTC ISO 8601), before_state, after_state, and request_id.
- Given any user or admin attempts to modify or delete an existing audit entry via any API or UI, When the request is made, Then the system rejects the request (405 Method Not Allowed or 403 Forbidden) and the entry remains unchanged.
- Given a subsequent correction or change is made to a scenario, When the edit is saved, Then a new audit entry is appended and prior entries remain unchanged.
Audit Entry Content Completeness by Event Type
- Given a scenario is created, When the audit entry is written, Then before_state is null and after_state contains a full snapshot of scenario parameters and access scope.
- Given a scenario is edited, When the audit entry is written, Then before_state includes only the changed fields with their prior values and after_state includes the new values for those fields.
- Given a scenario is shared, When the audit entry is written, Then it includes share_target_type (user, role), target_identifier, and permissions (view/edit).
- Given a scenario is exported, When the audit entry is written, Then it includes artifact_type (PDF, CSV, JSON), export_scope, and artifact_checksum.
PII Minimization in Scenario Exports
- Given an approved user exports a scenario (PDF, CSV, or JSON), When the file is generated, Then the export contains only listing identifiers, scenario metadata, aggregated feedback/objection summaries, and simulated impacts, and excludes personal identifiers (person names, emails, phone numbers), device IDs, IP addresses, and raw free-text feedback.
- Given any free-text content is included in the export, When PII patterns (email, phone, SSN, IP) are detected, Then the content is redacted in place with standardized tokens (e.g., [REDACTED_EMAIL]) and a redaction_count is added to the export metadata.
- Given the exported file is scanned with the predefined PII regex set, When the scan completes, Then zero PII matches are found.
Brokerage-Level Data Retention Configuration and Enforcement
- Given a brokerage admin sets retention periods for scenarios, audit logs, and exports, When the settings are saved, Then the values persist at brokerage scope and are versioned with who/when in an admin audit record.
- Given stored data exceeds the configured retention period, When the nightly retention job runs, Then scenarios, audit logs, and exports older than the policy are purged within 24 hours, and a purge_summary audit event is appended with the date range and counts.
- Given a user attempts to manually delete audit entries or exports before retention expiry, When the request is made, Then the system denies the request and no records are removed.
- Given the admin updates retention to a longer duration, When saved, Then no immediate purge occurs and future purges respect the new horizon.
Authorization and Audit for Sharing and Export Access
- Given an approved user generates a share or export for a scenario, When a recipient attempts to access the shared content or download the export, Then access is granted only to listing-side users and approved broker roles on the same listing; all other access attempts return 403 Forbidden.
- Given any access attempt (successful or failed) occurs for a share or export, When the request is processed, Then an audit entry is appended capturing scenario_id, user_id (or an unauthenticated marker), timestamp, action (share_access or export_download), and result (success/failure).
- Given a share or export is revoked by an authorized user, When the revocation is saved, Then subsequent access attempts fail with 403 and a revocation audit entry is appended.

Confidence Bands

Every prediction displays low/likely/high ranges with a data sufficiency score and what’s driving uncertainty. Get actionable tips to tighten bands—collect more feedback, add recent comps—so you set expectations and avoid seller surprises.

Requirements

Quantile Confidence Engine
"As a listing agent, I want each prediction to include clear low/likely/high ranges so that I can set expectations with sellers and manage risk."
Description

Compute and serve calibrated low/likely/high ranges for each TourEcho prediction (e.g., days-on-market, price reduction probability, offer likelihood) using quantile regression or bootstrapped ensembles with market/price-tier/property-type segmentation and recency weighting. Enforce monotonic consistency, minimum width bounds, and smoothing to prevent jitter. Integrate into the existing prediction pipeline with idempotent, versioned endpoints, result caching, and storage of intervals with timestamps for trend analysis. Outcome: accurate, stable confidence bands that reflect real uncertainty per listing.

Acceptance Criteria
Calibrated Quantile Bands per Prediction Target
- For each target (days_on_market, price_reduction_prob, offer_likelihood), compute low/likely/high as calibrated 10th/50th/90th percentiles or bootstrapped equivalents.
- On a 90-day rolling holdout across active markets, empirical coverage for low/high bands is within ±3 percentage points of target: P(y ≤ low) ∈ [7%,13%], P(y ≥ high) ∈ [7%,13%].
- Median absolute error for the predicted median (likely) improves on or matches baseline by ≥ 5% on the same holdout.
- Monotonicity enforced: low ≤ likely ≤ high for 100% of responses.
- Minimum width: days_on_market width ≥ 2 days; price_reduction_prob width ≥ 4 percentage points; offer_likelihood width ≥ 5 percentage points.
- Smoothing: For a listing with unchanged feature_hash, day-over-day change in each bound ≤ max(10% of prior width, 1 day for DOM, 2 pp for probabilities).
- Fallback: If sufficiency_score < 0.30, serve widened default bands using 5th/50th/95th priors and set insufficient_data = true.
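The monotonicity, minimum-width, and smoothing rules above can be sketched as a post-processing step. This is a simplified illustration, not the production pipeline: the symmetric widening around the median and the smoothing floor of half the minimum width are assumptions standing in for the per-target floors named in the criteria.

```python
MIN_WIDTH = {
    "days_on_market": 2.0,        # days
    "price_reduction_prob": 0.04, # 4 percentage points
    "offer_likelihood": 0.05,     # 5 percentage points
}

def postprocess_bands(target, low, likely, high, prev=None):
    """Enforce monotonicity, minimum width, and day-over-day smoothing (sketch)."""
    # Monotonicity: order the three quantiles.
    low, likely, high = sorted((low, likely, high))
    # Minimum width: widen symmetrically if the band is too narrow.
    min_w = MIN_WIDTH[target]
    if high - low < min_w:
        pad = (min_w - (high - low)) / 2
        low, high = low - pad, high + pad
    # Smoothing: when features are unchanged, clamp each bound's movement.
    if prev is not None:
        prev_low, prev_likely, prev_high = prev
        step = max(0.10 * (prev_high - prev_low), min_w / 2)
        clamp = lambda new, old: max(old - step, min(old + step, new))
        low = clamp(low, prev_low)
        likely = clamp(likely, prev_likely)
        high = clamp(high, prev_high)
        low, likely, high = sorted((low, likely, high))
    return low, likely, high
```

Re-sorting after clamping keeps the monotonicity guarantee even when bounds move by different amounts.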
Segmented and Recency-Weighted Training/Data Pipeline
- Training and inference segmented by (market_area, price_tier, property_type); 100% of listings are assigned a valid segment tuple.
- Recency weighting: Sample weights decay exponentially with half-life 90 days ± 5 days; verified by unit tests comparing weights at t and t+90d (ratio ≈ 0.5 ± 0.05).
- No cross-segment leakage: Samples from outside a segment contribute weight = 0; enforced via schema and data validation tests.
- Segment backoff: If a segment has < 300 samples in the last 12 months, back off to the parent segment per the documented rule; the event is logged with backoff_reason.
- Offline evaluation reported per segment; segments failing coverage tolerance (±5 pp) are blocked from deploy until remediated.
Idempotent, Versioned Confidence Endpoint
- Expose POST /confidence/v{major.minor}/bands with a required idempotency-key; OpenAPI spec published and validated.
- Idempotency: Repeating the same request body + idempotency-key within 24h returns an identical response hash and X-Idempotent: true.
- Versioning: Requests without an explicit version are rejected with 400; the n-1 version returns Deprecation headers; only whitelisted versions are allowed.
- AuthZ enforced: Tokens require scope confidence:read; unauthorized/forbidden requests respond 401/403 respectively.
- Performance: p95 latency ≤ 1200 ms for cold computes; p95 ≤ 300 ms for cached responses; error rate ≤ 0.5%.
- Observability: Trace IDs propagated; metrics exported for latency, hit_rate, coverage_drift, and error_rate.
Confidence Result Caching and Invalidation
- Cache key = (listing_id, target, model_version, feature_hash); cache hit rate ≥ 80% after 1h warmup in canary.
- TTL defaults to 24h; soft refresh happens in the background; hard invalidation is triggered within 2 minutes on new feedback, new comps, a price change, or a feature_hash change.
- Cached response byte-for-byte matches fresh compute when feature_hash is unchanged; validated via a checksum comparator.
- Responses include computed_at and expires_at; computed_at strictly increases across recomputes for the same feature_hash.
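An in-memory sketch of the cache key and hard invalidation described above. A production system would add TTLs, background refresh, and a shared cache; the `compute` callable and function names here are illustrative assumptions.

```python
cache = {}

def cache_key(listing_id, target, model_version, feature_hash):
    # All four components participate, so a model or feature change misses the cache.
    return (listing_id, target, model_version, feature_hash)

def get_bands(listing_id, target, model_version, feature_hash, compute):
    """Return cached bands, or run the cold compute and cache the result."""
    key = cache_key(listing_id, target, model_version, feature_hash)
    if key not in cache:
        cache[key] = compute()
    return cache[key]

def invalidate_listing(listing_id):
    """Hard invalidation, e.g. on new feedback, new comps, or a price change."""
    for key in [k for k in cache if k[0] == listing_id]:
        del cache[key]
```

Because feature_hash is part of the key, any data change that alters the hash naturally forces a recompute even without explicit invalidation.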
Data Sufficiency Score and Uncertainty Drivers
- Each response includes sufficiency_score ∈ [0,1] with 2-decimal precision; the calculation method is unit-tested and documented.
- If sufficiency_score < 0.70, include 1–3 actionable_tip items with category ∈ {feedback, comps, photos, listing_data}; at least one is present.
- The uncertainty drivers array includes 1–3 entries with fields {driver, direction ∈ {up, down}, weight ∈ [0,1]}; labels come from a controlled vocabulary.
- Copy limits enforced: tip text ≤ 140 characters; driver labels ≤ 40 characters.
- A controlled replay dataset shows that increasing sufficiency_score by ≥ 0.20 via added data reduces band width by ≥ 10% at the median.
Interval Storage and Trend History API
- Persist {listing_id, target, low, likely, high, sufficiency_score, model_version, feature_hash, computed_at} on every compute; write success rate ≥ 99.99%.
- Upsert keyed by (listing_id, target, feature_hash) prevents duplicate rows; verified by an idempotent-writes test.
- Provide GET /confidence/v{version}/history?listing_id=&target=&limit= that returns the most recent N (default 30) records ordered by computed_at desc with p95 ≤ 400 ms.
- Data integrity: Monotonicity holds per record; no history gaps > 48h during the active listing period unless no recompute event occurred (marked by a no_event flag).
- Daily export to parquet with schema versioning; validation checksums pass at 100% for produced partitions.
Data Sufficiency Scoring
"As a broker-owner, I want a clear data sufficiency score so that I understand whether to trust the bands and what to improve."
Description

Calculate a 0–100 data sufficiency score per listing and prediction using number and recency of QR feedback responses, coverage and age of comps, completeness of listing metadata, and model agreement metrics. Map score to labeled tiers (Low/Moderate/High) that directly influence band width and show contextual warnings for sparse data. Provide fallbacks (priors) and time decay when data gets stale. Expose score and components via API, persist for audit, and surface in UI and exports. Outcome: transparent, actionable measure of how much data supports each band.

Acceptance Criteria
Per-Listing and Per-Prediction Sufficiency Score Computation
Given defined inputs (QR feedback, comps, metadata completeness, model agreement) When the sufficiency score is computed Then a score integer in [0,100] is produced. Given identical inputs When the score is recomputed Then the score matches the previous value exactly. Given a listing with multiple prediction types (e.g., price, days-on-market) When scores are computed Then each prediction has its own distinct score value and identifier. Given a completed computation When persisted Then the record includes listing_id, prediction_type, score, components, computed_at (UTC), and version fields.
Tier Mapping and Confidence Band Width Influence
Given a computed score When mapping to tier Then 0–39 => Low, 40–69 => Moderate, 70–100 => High. Given a prediction with identical model variance When tier is Low, Moderate, or High Then confidence band width scales by 2.0x, 1.2x, and 1.0x respectively. Given a score crosses a tier boundary When the prediction is refreshed Then the tier label and band width update within 1 second of recomputation.
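The tier mapping and band-width scaling above are fully specified, so they can be captured directly; the function names are illustrative.

```python
def tier_for(score):
    """Map a 0-100 data sufficiency score to its labeled tier."""
    if score <= 39:
        return "Low"
    return "Moderate" if score <= 69 else "High"

# Band width multiplier per tier, as stated in the criteria.
BAND_SCALE = {"Low": 2.0, "Moderate": 1.2, "High": 1.0}

def scaled_band_width(base_width, score):
    """Scale a model-variance-derived base width by the listing's tier."""
    return base_width * BAND_SCALE[tier_for(score)]
```

Note the boundary behavior: 39 maps to Low, 40 to Moderate, and 70 to High.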
Sparse Data Warnings and Actionable Tips
Given score < 40 OR feedback_count_last_14d < 5 OR comps_coverage_pct < 60 OR metadata_completeness_pct < 80 OR model_agreement_pct < 60 When the prediction is displayed Then a 'Sparse data' warning is shown listing the failing drivers. Given at least one driver is failing When the warning is shown Then at least one actionable tip is included per driver (e.g., 'Collect 5+ new QR responses', 'Add 2 recent comps <30d', 'Complete missing metadata fields'). Given the sufficiency API is called When the prediction has warnings Then response includes warning=true with drivers[] and tips[] matching the UI.
Time Decay and Priors Fallback
Given feedback older than 30 days When computing effective feedback contribution Then each response weight is ≤ 0.5 at 30 days and decays to 0 by 90 days. Given no new data for 30 days When the daily recomputation runs at 02:00 local Then the score decreases or remains the same; it does not increase due to decay alone. Given effective_feedback_count < 3 OR effective_comps_coverage_pct < 30 When computing the score Then using_priors=true, tier=Low, and Low tier band scaling is applied. Given using_priors=true When API/UI responses are returned Then a 'Using priors' flag is present.
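The criteria constrain the decay curve (weight ≤ 0.5 at 30 days, 0 by 90 days) without fixing its exact shape. A piecewise-linear curve is one scheme that satisfies those bounds; treat the specific breakpoints here as an assumption.

```python
def feedback_weight(age_days):
    """Piecewise-linear decay: 1.0 when fresh, 0.5 at 30 days, 0.0 at 90 days.
    One possible curve satisfying the stated bounds; the real curve may differ."""
    if age_days <= 0:
        return 1.0
    if age_days <= 30:
        return 1.0 - 0.5 * age_days / 30   # 1.0 -> 0.5 over the first 30 days
    if age_days <= 90:
        return 0.5 * (90 - age_days) / 60  # 0.5 -> 0.0 between days 30 and 90
    return 0.0
```

Summing these weights over all responses yields the effective_feedback_count used by the priors fallback.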
API Exposure of Score and Components
Given a valid token with scope read:sufficiency When GET /v1/listings/{id}/predictions/{type}/sufficiency is called Then 200 is returned within 200 ms p95 with JSON containing: score, tier, components {feedback_count, feedback_recency_days, comps_coverage_pct, comps_median_age_days, metadata_completeness_pct, model_agreement_pct, time_decay_factor}, flags {warning, using_priors}, drivers[], tips[], computed_at, model_version, formula_version. Given a non-existent listing id When the endpoint is called Then 404 is returned with error code LISTING_NOT_FOUND. Given missing or invalid auth When the endpoint is called Then 401 is returned.
Persistence and Auditability
Given any score computation When results are saved Then an immutable record is created with computation_id, listing_id, prediction_type, inputs_snapshot, score, tier, flags, drivers, tips, model_version, formula_version, computed_at (UTC), triggered_by. Given a history request When GET /v1/listings/{id}/predictions/{type}/sufficiency/history?from=...&to=... is called Then matching records are returned ordered by computed_at desc. Given an attempt to modify a historical sufficiency record When a write/update API is invoked Then the request is rejected with 405 and no stored record is altered. Given data retention policy of ≥365 days When 365 days elapse Then historical records remain queryable.
UI and Export Surfacing
Given a prediction detail view on desktop and mobile When loaded Then the sufficiency score (0–100), tier label (Low/Moderate/High), and color code (red/amber/green) are visible adjacent to the confidence bands. Given the info tooltip is opened When hovered/clicked Then a breakdown shows component values and uncertainty drivers with actionable tips. Given a CSV export is generated When downloaded Then it contains columns: listing_id, prediction_type, score, tier, computed_at, feedback_count, feedback_recency_days, comps_coverage_pct, comps_median_age_days, metadata_completeness_pct, model_agreement_pct, time_decay_factor, warning, using_priors, drivers, tips. Given a PDF export is generated When downloaded Then the same information is present and tier badge colors meet WCAG contrast ≥ 4.5:1.
Uncertainty Drivers Attribution
"As a listing agent, I want to know what is driving uncertainty so that I can focus my efforts on the factors that will tighten the bands."
Description

Identify and display top drivers of uncertainty per listing—such as conflicting buyer sentiment, high variance across recent comps, seasonal volatility, or missing room-level details—by applying dispersion-focused feature attribution (e.g., SHAP on predictive spread) or variance decomposition. Convert signals into human-readable labels with thresholds to suppress noise and log attributions for analytics. Outcome: clear explanations of why bands are wide so agents can act on the right levers.

Acceptance Criteria
Display Top Uncertainty Drivers
- Given a listing with band_width_pct >= 10, When the Confidence Bands panel renders, Then show a "What's driving uncertainty" section with 1-5 drivers sorted by descending contribution_pct.
- Each driver displays: a human-readable label, contribution_pct (0-100, rounded to 1 decimal), and a tooltip explaining its signal source.
- At least one driver is shown if any driver contribution_pct >= 5.
- Given band_width_pct < 10 or no driver meets contribution_pct >= 5, When the panel renders, Then no drivers are shown and a "No significant uncertainty drivers" message is displayed.
Noise Suppression and Thresholding
- Rule: Suppress any driver with contribution_pct < 5.
- Rule: Display at most 5 drivers; break ties by driver_key ascending.
- Rule: "Conflicting buyer sentiment" only shows if n_feedback >= 8 and sentiment_iqr >= 0.6.
- Rule: "High variance across recent comps" only shows if comps_count_last_60d >= 5 and price_sqft_std / long_term_std >= 1.2.
- Rule: "Missing room-level details" only shows if missing_room_fields_count >= 2 OR avg_room_photos_per_room < 3.
- Rule: "Seasonal volatility" only shows if dom_volatility_30d_percentile >= 95.
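The generic suppression rules (minimum contribution, cap of five, tie-break by key) can be captured in a small filter; the label-specific eligibility checks above would run before this step and are omitted here for brevity.

```python
def select_drivers(drivers):
    """drivers: list of dicts with 'driver_key' and 'contribution_pct'.
    Suppress < 5%, sort by contribution desc then driver_key asc, cap at 5."""
    kept = [d for d in drivers if d["contribution_pct"] >= 5]
    # Negating the contribution sorts descending while driver_key stays ascending.
    kept.sort(key=lambda d: (-d["contribution_pct"], d["driver_key"]))
    return kept[:5]
```

An empty return signals the panel to render the "No significant uncertainty drivers" message instead.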
Data Sufficiency Score Display
- Given any listing, When the drivers section renders, Then display data_sufficiency_score in range 0-100 next to the section title.
- Rule: Score = round(min(n_feedback/12,1)*40 + min(recent_comps/8,1)*30 + completeness_pct*20 + (market_seasonality_available?10:0)).
- Given data_sufficiency_score < 40, Then show a "Low data sufficiency" badge and prioritize tips to increase feedback/comps.
- Given data_sufficiency_score >= 80, Then do not show "insufficient data" warnings.
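The stated formula translates directly; the one assumption made explicit here is that completeness_pct is a fraction in [0, 1] (otherwise the *20 term would exceed its 20-point share).

```python
def data_sufficiency_score(n_feedback, recent_comps, completeness_pct,
                           market_seasonality_available):
    """0-100 score per the stated rule; completeness_pct assumed to be in [0, 1]."""
    return round(
        min(n_feedback / 12, 1) * 40          # feedback: full credit at 12 responses
        + min(recent_comps / 8, 1) * 30       # comps: full credit at 8 recent comps
        + completeness_pct * 20               # metadata completeness
        + (10 if market_seasonality_available else 0)
    )
```

The min(…, 1) caps mean extra feedback or comps beyond the thresholds cannot push the score past 100.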
Actionable Tips Generation
- Given at least one driver is displayed, When tips are generated, Then for each driver provide at least one actionable tip with a quantified target (e.g., "Collect 5 more buyer feedback responses").
- Rule: Quantities are computed to achieve either a 20% reduction in band_width_pct or to raise data_sufficiency_score to >= 60, whichever is achieved first.
- Given the user clicks a tip, Then navigate to the relevant workflow and emit the analytics event tip_clicked with listing_id, driver_key, tip_id, and quantity.
- Given no actionable remediation exists for a driver, Then display "No immediate actions" for that driver and do not render a CTA.
Attribution Logging for Analytics
- Given an attribution is computed, Then emit the analytics event uncertainty_attribution_generated with properties: listing_id, run_id, timestamp, model_version, feature_method, band_width_pct, data_sufficiency_score, drivers[], tips[], ui_rendered.
- Rule: 99% of attribution computations result in a successfully ingested event within 5 minutes; PII fields are excluded by schema validation.
- Rule: On transient failure, retry up to 3 times with exponential backoff; on final failure, log the error to monitoring with severity=warning.
Recompute Triggers and Performance
- Given new buyer feedback is submitted, Then recompute attribution and update the UI within 60 seconds (p95).
- Given room-level details are updated or the listing price changes, Then recompute within 60 seconds (p95).
- Given the nightly comps sync completes, Then recompute within 15 minutes (p95).
- Rule: Server-side compute latency <= 300 ms p95 and <= 800 ms p99 per listing; memory p95 <= 200 MB; timeouts render a fallback "Attribution unavailable" state without drivers.
- Rule: The per-listing cache (TTL 10 minutes) is invalidated on any recompute trigger and includes model_version in the cache key.
Method Validity and Consistency
- Rule: Using SHAP on predictive spread, sum(driver contribution_pct) + residual_pct equals 100% ± 2% for 95% of listings; the residual is labeled "Unexplained" and suppressed if < 3%.
- Rule: Using variance decomposition, explained_variance_pct + residual_pct equals 100% ± 1%; all individual contributions are in [0%, 100%].
- Rule: For identical inputs and model_version, two runs produce identical driver labels and contributions within ±1 percentage point for 99% of listings.
- Rule: Unit tests verify label mapping thresholds for at least these cases: conflicting buyer sentiment, high comps variance, seasonal volatility, missing room details.
Actionable Tightening Tips
"As an agent, I want concrete steps to tighten confidence bands so that I can prevent surprises and move the listing faster."
Description

Generate prioritized, context-specific recommendations that quantify expected impact on band width, such as “Collect 8 more QR feedback responses,” “Add 3 comps from the last 14 days within 0.5 miles,” “Tag missing room-level features,” or “Sync latest price change.” Link tips to in-product flows (feedback outreach, comp import, metadata edit), track completion, and recompute bands to show before/after impact. Outcome: prescriptive guidance that converts explanations into measurable improvements.

Acceptance Criteria
Tip Generation with Quantified Impact
- Given an active listing with confidence bands and recorded uncertainty drivers, When the user opens the Confidence Bands panel, Then the system generates between 3 and 8 context-specific tips.
- Each tip includes: action_label, target_metric, current_value, target_value, expected_band_width_reduction_abs (in listing currency units), expected_band_width_reduction_pct (0–100), confidence_pct (0–100), and assumption_notes.
- At least one tip addresses feedback count if feedback_count < target_feedback_count, with a recommendation like "Collect X more QR feedback responses," where X = target_feedback_count − feedback_count.
- At least one tip addresses comps if comps_recent_within_14d_0_5mi < 3, recommending the missing count up to 3.
- Tips with expected_band_width_reduction_pct < 1% are suppressed and not displayed.
Prioritized Tip Ordering by Impact/Effort
- Given computed tips with expected_band_width_reduction_abs and effort_score (1–5), When rendering the list, Then tips are sorted by descending (expected_band_width_reduction_abs / effort_score).
- The top-ranked tip is labeled "Recommended" and visually distinguished.
- Ties are broken by higher confidence_pct, then by more recent driver_updated_at.
- The computed sort_key and rank are included in a "tips_impression" telemetry event for all displayed tips.
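The impact/effort ordering and its tie-breakers can be expressed as a single sort key. One assumption is made for illustration: driver_updated_at is a numeric (epoch) timestamp, so negating it sorts more recent values first.

```python
def rank_tips(tips):
    """tips: dicts with expected_band_width_reduction_abs, effort_score (1-5),
    confidence_pct, and driver_updated_at (epoch seconds, assumed numeric)."""
    return sorted(
        tips,
        key=lambda t: (
            -(t["expected_band_width_reduction_abs"] / t["effort_score"]),  # impact/effort desc
            -t["confidence_pct"],        # tie-break: higher confidence first
            -t["driver_updated_at"],     # tie-break: more recent driver first
        ),
    )
```

The first element of the returned list is the one to label "Recommended".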
Actionable Deep Links to Flows
- Given any displayed tip, When the user clicks its action, Then the app navigates to the correct in-product flow matching tip.action_type (feedback_outreach, comp_import, metadata_edit, price_sync) with listing_id preselected and relevant filters/fields prefilled. - And the deep link includes tip_id and returns navigation_success = true. - And a “tip_action_clicked” event is emitted within 200 ms with fields: listing_id, tip_id, action_type, expected_band_width_reduction_abs, expected_band_width_reduction_pct. - And on navigation failure, an error toast is shown and the tip remains in its prior state.
Tip Progress and Completion Tracking
- Given a tip with a measurable target, When partial progress occurs (e.g., 3 of 8 feedbacks collected), Then the tip displays a progress meter (0–100%) computed as min(actual/target, 1). - And When the target is reached or exceeded, Then the tip status auto-updates to Completed with completed_at timestamp, actor_user_id, and observed_delta values recorded. - And repeated completions for the same tip_id are idempotent and do not create duplicate records. - And “tip_progress_updated” and “tip_completed” telemetry events are logged with listing_id and tip_id.
Bands Recompute and Before/After Impact
- Given a tip is completed, When the underlying data change is persisted, Then the confidence bands recompute within 15 seconds. - And the UI displays a Before/After panel showing: previous_low/likely/high, new_low/likely/high, width_delta_abs, width_delta_pct, and sufficiency_score_delta. - And the completed tip shows realized_reduction vs expected; if realized < expected by >20%, a note explains the primary residual driver. - And recompute failures exceeding 15 seconds trigger one retry and a non-blocking warning banner.
Uncertainty Drivers, Sufficiency, and Tip Validity
- Given driver weights are available, When rendering each tip, Then the tip lists the targeted driver(s) with contribution_pct values and the sum of all drivers equals 100%. - And the sufficiency_score (0–100) and driver-specific gaps (e.g., feedback_gap, comps_gap) are shown with numeric targets. - And tips are suppressed if targeted driver_gap <= 0 or if a conflicting action is already in progress for the listing. - And each tip expires after 7 days or upon material state change; expired tips are auto-removed and replaced with fresh recommendations.
Comps & Feedback Ingestion Enhancements
"As a team admin, I want simple ways to add recent comps and capture feedback instantly so that confidence bands reflect the latest market signals."
Description

Provide MLS ID lookup, CSV upload, and API ingestion pipelines for recent comps with validation, deduplication, similarity scoring, and recency checks, plus real-time QR feedback capture with room-level sentiment tagging. Emit events to trigger immediate band recomputation after new data arrives. Outcome: fresher, higher-quality inputs that directly reduce uncertainty and tighten bands.

Acceptance Criteria
MLS ID Lookup with Validation and Recency Gate
- Given a valid MLS ID for a listing dated within the last 180 days, when an agent performs lookup, then the system returns normalized comp fields (address, beds, baths, sqft, list_price, sale_price, list_date, close_date) within 2 seconds. - Given a malformed or non-existent MLS ID, when lookup is attempted, then the system returns an error code in {MLS_ID_INVALID, MLS_NOT_FOUND} and does not create any comp record. - Rule: Listings older than 180 days are rejected with reason RECENCY_FAIL and are not added to the comp set. - Rule: Schema validation is enforced; if any required field is null (address, beds, baths, sqft, at least one of list_price/sale_price, and a date), the comp is rejected with reason SCHEMA_FAIL.
CSV Upload with Schema Validation and Deduplication Summary
- Given a CSV using the official template headers, when uploaded, then valid rows are ingested and invalid rows are rejected with per-row error reasons; a summary reports counts for processed, ingested, rejected, and duplicate rows. - Rule: Deduplication removes rows with duplicate MLS IDs or identical (address + close_date) both within the file and against existing records; duplicates are excluded and counted as DUPLICATE in the summary. - Rule: Files up to 5,000 rows complete processing in ≤ 30 seconds p95; extra columns are ignored; missing required headers cause file-level rejection with TEMPLATE_MISMATCH.
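The deduplication rule (duplicate MLS IDs or identical address + close_date, both within the file and against existing records) can be sketched as a single pass that tracks both keys. The dict-based row shape is an assumption about the parsed CSV:

```python
def dedupe_comps(rows, existing):
    """Split parsed CSV rows into kept vs. duplicate, checking both
    mls_id and (address, close_date) against earlier rows and existing records."""
    seen_ids = {r["mls_id"] for r in existing if r.get("mls_id")}
    seen_pairs = {(r["address"].lower(), r["close_date"]) for r in existing}
    kept, duplicates = [], []
    for row in rows:
        pair = (row["address"].lower(), row["close_date"])
        if row.get("mls_id") in seen_ids or pair in seen_pairs:
            duplicates.append(row)  # counted as DUPLICATE in the upload summary
            continue
        if row.get("mls_id"):
            seen_ids.add(row["mls_id"])
        seen_pairs.add(pair)
        kept.append(row)
    return kept, duplicates

existing = [{"mls_id": "A1", "address": "12 Oak St", "close_date": "2024-05-01"}]
rows = [
    {"mls_id": "B2", "address": "34 Elm St", "close_date": "2024-05-10"},
    {"mls_id": "A1", "address": "12 Oak Street", "close_date": "2024-05-01"},  # dup MLS ID
    {"mls_id": "C3", "address": "34 elm st", "close_date": "2024-05-10"},      # dup address+date
]
kept, dupes = dedupe_comps(rows, existing)
```

Case-folding the address is an assumption; a production matcher would likely normalize addresses more aggressively.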
Comps API Ingestion with Auth, Idempotency, and Limits
- Given an authenticated POST /api/comps with JSON matching the schema, when submitted, then the API returns 202 Accepted and enqueues the record for ingestion. - Rule: Invalid payloads return 400 with field-level errors; unauthorized requests return 401; exceeded rate limits return 429 with retry-after. - Rule: Idempotency-Key header ensures repeat submissions do not create duplicates; subsequent identical requests return 200 with the existing resource id. - Rule: Recency, schema validation, and deduplication are applied identically to CSV and MLS paths; rejected items are retrievable via status endpoint with reasons. - Performance: API sustains 50 req/s with p95 latency < 200 ms for 202 responses under nominal load.
Similarity Scoring and Eligibility Tagging for Ingested Comps
- Given an accepted comp, when similarity scoring runs, then a score in [0.0, 1.0] is computed using distance, beds, baths, sqft delta, property type, year built, and sale date recency. - Rule: Comps with similarity ≥ 0.70 are marked ELIGIBLE; scores < 0.70 are marked LOW_SIMILARITY and excluded from band computation by default. - Rule: Top three feature contributions are stored and exposed as score_explain = [{feature, contribution}], enabling traceability. - Rule: Scoring is deterministic for identical inputs (repeat run difference = 0.0). - Rule: If required attributes for scoring are missing, the comp is flagged SCORING_INCOMPLETE and excluded from band computation.
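The criteria fix the score range, the 0.70 eligibility cutoff, and the top-3 score_explain output, but not the exact formula. A minimal weighted-closeness sketch — the weights and decay scales are assumptions — might look like:

```python
def _closeness(a, b, scale):
    """1.0 when equal, decaying linearly to 0.0 once the gap reaches `scale`."""
    return max(0.0, 1.0 - abs(a - b) / scale)

ELIGIBILITY_THRESHOLD = 0.70

def score_comp(subject, comp):
    """Weighted similarity in [0.0, 1.0], eligibility tag, and top-3 score_explain."""
    parts = {
        "distance":      (0.25, max(0.0, 1.0 - comp["distance_mi"] / 1.0)),
        "beds":          (0.15, _closeness(subject["beds"], comp["beds"], 2)),
        "baths":         (0.10, _closeness(subject["baths"], comp["baths"], 2)),
        "sqft":          (0.20, _closeness(subject["sqft"], comp["sqft"], subject["sqft"] * 0.5)),
        "property_type": (0.10, 1.0 if subject["property_type"] == comp["property_type"] else 0.0),
        "year_built":    (0.10, _closeness(subject["year_built"], comp["year_built"], 40)),
        "recency":       (0.10, max(0.0, 1.0 - comp["sale_age_days"] / 180)),
    }
    score = sum(w * s for w, s in parts.values())
    explain = sorted(
        ({"feature": f, "contribution": round(w * s, 3)} for f, (w, s) in parts.items()),
        key=lambda e: -e["contribution"],
    )[:3]
    status = "ELIGIBLE" if score >= ELIGIBILITY_THRESHOLD else "LOW_SIMILARITY"
    return score, status, explain

subject = {"beds": 3, "baths": 2, "sqft": 1800, "property_type": "SFR", "year_built": 1995}
twin = dict(subject, distance_mi=0.0, sale_age_days=0)
score, status, explain = score_comp(subject, twin)
```

Because the function is pure, repeat runs on identical inputs differ by exactly 0.0, satisfying the determinism rule.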
Real-Time QR Feedback Capture with Room-Level Sentiment
- Given a buyer scans a property’s QR code and submits feedback with at least one room selected and a sentiment per room, when submitted, then the feedback is persisted and linked to the showing within 2 seconds and appears in the agent dashboard within 5 seconds. - Rule: Each selected room is tagged with sentiment in {positive, neutral, negative}; free-text comments are stored with basic profanity masking applied. - Rule: Submissions missing property_id, showing_id, or any selected room’s sentiment are rejected with 422 and field-level reasons. - Rule: Duplicate submissions from the same device for the same showing within 2 minutes are soft-deduplicated (marked POSSIBLE_DUPLICATE) and excluded from band computation by default.
Event Emission and Immediate Confidence Band Recomputation
- Given a comp or feedback is accepted via any ingestion path, when ingestion completes, then an event data.ingested is emitted within 1 second containing {resource_type, resource_id, property_id, change_summary}. - Rule: Upon event receipt, confidence bands (low/likely/high) and data_sufficiency_score are recomputed and persisted within 10 seconds; the UI reflects updates within 15 seconds. - Rule: If recomputation changes all bounds by < 0.5%, a “no material change” indicator is logged and shown to avoid user noise. - Rule: Failures emit data.recompute_failed with correlation_id and are retried 3 times with exponential backoff; no partial UI update is shown. - Rule: When accepted data increases eligible comps by ≥1 or adds ≥1 room-level feedback in the last 7 days, data_sufficiency_score increases by ≥1 point (0–100 scale) unless already at 100; uncertainty_drivers update accordingly.
Confidence Bands UI & Export
"As an agent, I want an intuitive view of confidence bands and explanations in both app and exports so that I can align sellers in meetings and follow-ups."
Description

Build responsive UI components to present low/likely/high ranges with color-coded sufficiency badges, tooltips for score and drivers, and historical trend charts. Ensure accessibility (keyboard, ARIA, high contrast), mobile readiness, and localization. Integrate into listing dashboards, seller share links, and PDF exports with consistent explanations and disclaimers. Outcome: clear, consistent presentation that educates sellers and supports negotiations.

Acceptance Criteria
Dashboard Confidence Bands: Ranges, Badges, Tooltips
- Given a listing with an available prediction, When the dashboard loads, Then Low, Likely, and High values render with locale-aware number formatting and unit labels. - Given a sufficiency score, When the component renders, Then a color-coded badge displays the label and score percentage with contrast >= 4.5:1 and a visible focus indicator. - Given user hover or keyboard focus on the badge or info icon, When the tooltip opens, Then it shows the numeric score (0–100%), the top 2–5 uncertainty drivers with direction/weight, and the model’s last-updated timestamp. - Given the tooltip is open, When Esc is pressed, focus moves away, or the user taps outside, Then the tooltip closes. - Given insufficient data (below the defined minimum signals), When the component renders, Then it displays a neutral badge and an inline tip to collect more feedback or add recent comps, and hides Low/Likely/High values. - Given standard copy requirements, When the component renders, Then the explanation and disclaimer text match the approved copy verbatim across Dashboard, Share, and PDF.
Accessibility: Keyboard, ARIA, High Contrast
- Given keyboard-only navigation, When traversing the component, Then all interactive elements are reachable in logical order and operable via Tab/Shift+Tab and Enter/Space; Esc dismisses tooltips/overlays. - Given screen reader usage, When elements receive focus, Then the badge announces “Data sufficiency: <Label>, <Percent>” and ranges announce “Low <X>, Likely <Y>, High <Z>” without relying on color. - Given tooltips and charts, When rendered, Then tooltips have role="tooltip" and are linked via aria-describedby; the chart exposes a text alternative/summary describing the band and latest values. - Given high-contrast mode, When enabled, Then all text contrast >= 4.5:1 (non-text >= 3:1), focus indicators are visible, and information is not conveyed by color alone. - Given WCAG 2.1 AA conformance, When audited (automated and manual), Then no blocking accessibility issues remain for this component.
Mobile Responsiveness and Touch Interactions
- Given a viewport width ≤ 375px, When rendering the component, Then layout stacks vertically without horizontal scrolling and text wraps without clipping/overlap. - Given touch input, When tapping the info icon or badge, Then the tooltip opens within 200 ms and closes on outside tap or system Back; it remains readable and positioned on screen. - Given small screens, When rendering the chart, Then axis labels are legible (≥ 12 pt), data points are tappable with minimum targets of 44×44 px, and tooltips are reachable without hover. - Given constrained networks, When loading on a 3G Fast profile, Then first meaningful paint for the component occurs ≤ 2.5 s and incremental payload for this feature ≤ 200 KB gzipped. - Given the page is scrolled, When the component enters/leaves the viewport, Then it does not trigger layout thrash or cause content jump (CLS < 0.1 attributable to this component).
Historical Trend Chart: Bands, Points, Empty States
- Given ≥ 4 historical prediction snapshots, When the chart renders, Then a shaded band shows Low–High and a line shows Likely for the selected period (default 8 weeks) with labeled axes. - Given hover, focus, or tap on any data point, When interaction occurs, Then a tooltip displays date, Low/Likely/High values, and sufficiency score for that period and is keyboard-accessible. - Given missing intervals, When gaps exist, Then the chart indicates them with breaks or dashed connectors and surfaces an inline message “Missing data for <date range>”. - Given no historical data, When rendering, Then the chart area shows “No trend data yet” and displays actionable tips to tighten bands (e.g., collect more feedback, add recent comps). - Given a time range selection, When changed, Then the chart updates within 300 ms and the selected range persists across Dashboard and Share views for the session.
Localization and Internationalization
- Given a locale set via user profile or URL param (lang), When rendering, Then all UI strings (including tooltips, tips, and disclaimers) appear in that language with localized dates and numbers. - Given an RTL locale (e.g., ar), When rendering, Then layout mirrors appropriately, text direction is right-to-left, and numeric/axis formatting follows locale while chart orientation remains readable. - Given a missing translation key, When rendering, Then English fallback is displayed and the missing key is logged for remediation. - Given pseudo-localization (+30% text expansion), When rendering, Then no text is clipped, overlapped, or truncated, and no layout breaks occur. - Given a user changes locale, When the setting is applied, Then the component updates without page reload and preserves current data state.
Seller Share Link Integration (Read-only)
- Given a valid seller share link, When opened by an unauthenticated user, Then the confidence bands render read-only with identical values/labels as the dashboard and no editing controls are present. - Given the shared view, When loaded, Then the explanation and legal disclaimer are visible above the fold on a 375×667 viewport and match dashboard copy verbatim. - Given link tampering attempts, When a different listing ID is injected, Then access is denied or redirected to a safe state; no data from other listings is exposed. - Given a locale parameter on the share link, When present, Then the shared view uses that locale; otherwise it uses the listing’s default locale. - Given analytics requirements, When the shared view renders, Then no PII is transmitted in query strings or referrers beyond public listing identifiers.
PDF Export Consistency and Clarity
- Given a listing with a prediction, When exporting to PDF, Then the document includes Low/Likely/High values, a sufficiency badge with score, top uncertainty drivers, and the legal disclaimer on the same page as the chart. - Given the PDF is generated, When viewed at 100%, Then body text is ≥ 9 pt, the chart is vector or ≥ 300 DPI raster, and distinctions remain legible in grayscale. - Given a non-English locale, When exporting, Then all strings and formats are localized and the PDF includes a generated timestamp and listing identifier in header or footer. - Given accessibility requirements, When exporting, Then the PDF is tagged for reading order and the chart has alt text describing the band and latest values. - Given file size constraints, When exporting a standard listing, Then the PDF is ≤ 1.5 MB without omitting required content.
Calibration & Monitoring
"As a product owner, I want ongoing calibration monitoring so that we maintain trust and avoid systematic under- or over-confidence."
Description

Continuously validate coverage of low/likely/high bands via backtests on closed listings, calibration plots, and segment-level metrics; detect drift and trigger alerts when under- or over-coverage appears. Support model versioning, A/B holds, and automatic recalibration routines with an internal health dashboard. Outcome: trustworthy bands that remain reliable as market conditions evolve.

Acceptance Criteria
Weekly Backtest Coverage Validation
- Given closed listings with realized target metrics and pre-close predictions, when the weekly backtest runs, then the realized value falls within the [low, high] band in 90% ± 3% of cases over the past 12 weeks. - Given the same runs, then the realized value falls within the Likely band in 70% ± 5% of cases over the past 12 weeks. - Given any backtest window, when the sample size per global metric calculation is < 500 listings, then the job marks the result as insufficient and does not update the global KPI. - Given listings used for training the current model, when the backtest runs, then those listings are excluded from evaluation. - Given the job completes, when metrics are computed, then results are written to persistent storage with model_version, data_window_start/end, and checksum.
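The coverage math behind the weekly backtest is a hit-rate over closed listings, gated on sample size. The record shape and the explicit likely-band bounds are assumptions for illustration:

```python
def backtest_coverage(records, min_n=500):
    """Share of closed listings whose realized value fell inside [low, high]
    and inside the narrower likely band; None below the sample-size gate."""
    n = len(records)
    if n < min_n:
        return None  # insufficient sample: do not update the global KPI
    in_band = sum(1 for r in records if r["low"] <= r["realized"] <= r["high"]) / n
    in_likely = sum(1 for r in records if r["likely_low"] <= r["realized"] <= r["likely_high"]) / n
    return {
        "band_coverage": in_band,
        "likely_coverage": in_likely,
        "band_ok": abs(in_band - 0.90) <= 0.03,     # 90% +/- 3%
        "likely_ok": abs(in_likely - 0.70) <= 0.05, # 70% +/- 5%
    }

# Synthetic cohort: 700 inside the likely band, 200 in the outer band only, 100 misses.
records = [
    {"low": 90, "likely_low": 95, "likely_high": 105, "high": 110, "realized": v}
    for v in [100] * 700 + [108] * 200 + [120] * 100
]
result = backtest_coverage(records)
```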
Calibration Plot Generation & Persistence
- Given a completed weekly backtest, when calibration curves are generated (predicted vs. realized), then PNG and JSON artifacts are saved with a 180-day retention policy. - Given a model version, when a user opens the dashboard, then the latest calibration plot for that version loads in < 2 seconds at p95. - Given missing artifacts, when the dashboard is requested, then an explicit "No artifacts available" state is shown and a rebuild job can be triggered manually. - Given new artifacts are produced, when they are ingested, then they are tagged with model_version, cohort (A/B/holdout), and segment keys.
Segment-Level Coverage & Sufficiency Metrics
- Given predefined segments (price_tier, property_type, postal3, data_sufficiency_decile), when the backtest runs, then coverage metrics are computed per segment with N ≥ 50; otherwise the segment is marked insufficient. - Given segment metrics, when Likely band coverage deviates by > 7 percentage points from target (70%), then the segment is flagged as under/over-covered. - Given segment metrics, when [low, high] band coverage deviates by > 4 percentage points from target (90%), then the segment is flagged. - Given flagged segments, when the dashboard loads, then they are surfaced in a heatmap sorted by absolute deviation and exportable as CSV.
Drift Detection & Alerting
- Given weekly data, when population feature drift PSI > 0.2 for any top-20 feature or KS p-value < 0.01, then a Drift alert is emitted to #ml-alerts and email within 15 minutes of job completion. - Given calibration metrics, when global Likely coverage changes by ≥ 5 percentage points week-over-week or 3-week moving average crosses threshold bands, then a Calibration alert is emitted. - Given an alert, when it is emitted, then an incident ticket is created with model_version, segments implicated, and links to artifacts. - Given alert noise controls, when 3 consecutive weeks are already in open incident state, then duplicate alerts are suppressed and appended to the existing incident.
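The PSI trigger can be sketched as a plain histogram comparison between the reference and current feature distributions; ten equal-width bins and the 1e-4 zero-count floor are common conventions, not mandated by the criteria:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index for one feature; PSI > 0.2 is the
    drift-alert threshold from the acceptance criteria."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        n = len(sample)
        return [max(c / n, 1e-4) for c in counts]  # floor avoids log(0)

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Identical distributions score 0.0; a half-range shift pushes PSI well past the 0.2 alert line.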
Model Versioning & A/B Holds
- Given prediction generation, when any prediction is stored, then it is tagged with model_version, model_hash, and cohort in {A, B, HOLDOUT} with stable user/listing assignment for ≥ 90 days. - Given the holdout cohort, when backtests run, then holdout data is excluded from any automatic recalibration training. - Given multiple model versions active, when the dashboard loads, then users can filter metrics by version and cohort and see side-by-side coverage deltas. - Given rollout configuration, when a new version is deployed, then at least 5% of traffic remains in HOLDOUT for baseline tracking.
Automatic Recalibration Routine
- Given global Likely coverage outside target for 2 consecutive backtests, when eligibility checks pass (N ≥ 2,000 in last 8 weeks), then an automatic recalibration job (isotonic or Platt) is triggered on validation-only data. - Given a candidate recalibration, when evaluated on an untouched test fold, then it must improve log-loss by ≥ 2% and keep [low, high] coverage within 90% ± 3% to be eligible for promotion. - Given promotion, when the new calibration mapping is activated, then the model_version is unchanged but the calib_version is incremented and rollout uses a 20% shadow for 48 hours before 100%. - Given degradation post-promotion (coverage misses by > 5 percentage points or log-loss worsens by > 2%), when detected within 7 days, then an automatic rollback to the prior calib_version occurs and an incident is filed.
Internal Health Dashboard & Access Control
- Given an authenticated internal user with the Analyst or Admin role, when the health dashboard is accessed, then global and segment coverage, drift status, alerts, version history, and last job run times are visible with data no older than 24 hours. - Given an unauthorized user, when access is attempted, then access is denied and the attempt is logged. - Given normal operations, when the dashboard is monitored over a rolling 30 days, then uptime is ≥ 99.5% and p95 page load time ≤ 3 seconds. - Given any metric tile, when clicked, then the underlying query and data sample (up to 1,000 rows) can be exported as CSV within 5 seconds at p95.

Drop Timing

Recommends the best day and hour to announce a price change based on local seasonality, weekend traffic, and recent listing attention. One tap schedules MLS updates and marketing pulses to maximize the visibility spike.

Requirements

Signal Aggregation Pipeline
"As a listing agent, I want Drop Timing to use local buyer activity and my listing’s recent attention so that the recommended announcement time reflects real market patterns, not guesswork."
Description

Build a data pipeline that ingests, normalizes, and unifies local seasonality signals (by ZIP/neighborhood), weekend and event-driven traffic patterns, and the listing’s recent attention (portal views/saves, showing requests, QR scan counts, inquiry volume). Include competitive listing events (nearby price drops/new actives), MLS update push/refresh windows, and historical performance (24 months backfill where available). Provide near-real-time updates (sub-hour) and daily rollups, with feature store outputs for the recommendation engine. Ensure compliance with MLS data usage, respect privacy settings, and support multi-MLS footprints. Expose health metrics and freshness SLAs within TourEcho’s analytics layer.

Acceptance Criteria
Sub-hour Freshness and Daily Rollups Delivered
- Given the pipeline is operating normally, When a new signal event is produced by any source, Then the event is available in the feature store within 60 minutes of the source event timestamp.
- Given the daily rollup window begins, When the rollup job runs, Then aggregates for the prior day complete by 02:00 local MLS timezone with job_status=success and freshness <= 24h.
- Given ingestion lag exceeds 45 minutes for any source, When detected by monitoring, Then a freshness_lag_minutes metric records the breach and an alert is emitted within 5 minutes.
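The 60-minute availability SLA and 45-minute alert threshold reduce to a simple lag check; the field names below are illustrative, not a defined schema:

```python
from datetime import datetime, timedelta, timezone

FRESHNESS_SLA = timedelta(minutes=60)    # event must reach the feature store within 60 min
ALERT_THRESHOLD = timedelta(minutes=45)  # breach alert fires past 45 min of lag

def freshness_status(source_event_ts, feature_store_ts):
    """Compare source event time to feature-store arrival and flag SLA breaches."""
    lag = feature_store_ts - source_event_ts
    return {
        "freshness_lag_minutes": lag.total_seconds() / 60,
        "within_sla": lag <= FRESHNESS_SLA,
        "alert": lag > ALERT_THRESHOLD,
    }

event_ts = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
ok = freshness_status(event_ts, event_ts + timedelta(minutes=30))
late = freshness_status(event_ts, event_ts + timedelta(minutes=50))
```

Note the 45–60 minute band: an event arriving at 50 minutes is still within SLA but has already triggered the monitoring alert.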
Comprehensive Signal Ingestion Coverage
- Given configured markets, When ingestion runs, Then connectors ingest: seasonality by ZIP and neighborhood, weekend and event-driven traffic, listing attention signals (portal views, saves, showing requests, QR scan counts, inquiry volume), competitive listing events (nearby price drops and new actives), MLS push and refresh windows, and historical archives.
- Given historical availability per source, When backfill executes, Then up to 24 months are loaded with >= 98% completeness per source and zero unhandled failures.
- Given daily source totals, When validated, Then record count parity is within ±1% of source per day, or discrepancies are flagged in data_quality with source and window identifiers.
- Given multiple MLSs, When data is written, Then records are partitioned by mls_id with no cross-MLS joins or leakage.
- Given traffic spikes, When volume reaches 2x baseline for 1 hour, Then ingestion sustains throughput without data loss or SLA breach.
Unified Schema, Normalization, and Provenance
- Given events from heterogeneous sources, When normalized, Then outputs conform to canonical schema v1 with required fields: listing_id, mls_id, geo_scope, zip, neighborhood_id, event_type, event_ts_utc, original_timezone, source, value, units, privacy_level, processing_ts_utc, version.
- Given timestamps across timezones, When serialized, Then all event timestamps are stored in UTC and the original timezones are retained.
- Given write operations, When idempotency keys are applied, Then the duplicate write rate is <= 0.1% per day and duplicates are soft-deleted on detection.
- Given geospatial enrichment, When a zip or neighborhood cannot be resolved, Then the record is quarantined with a reason_code and appears in the data_quality dashboard within 15 minutes.
Competitive Listing Events Integration
- Given a subject listing, When a nearby listing within a 1-mile radius or the same ZIP posts a price drop or becomes newly active, Then a competitive_event is recorded with detection latency <= 60 minutes and linked to the subject listing via market_context.
- Given duplicate competitive events from multiple feeds, When processed, Then only a single unified event exists per source_event_id per 24h window.
- Given missing geocodes, When proximity cannot be computed by coordinates, Then the system falls back to ZIP-based proximity within the same MLS.
MLS Compliance and Privacy Enforcement
- Given MLS data usage rules and agent privacy settings, When ingesting, storing, and serving features, Then only permitted fields are stored; restricted fields are excluded or field-level encrypted; access controls enforce MLS membership and role-based permissions.
- Given a listing with privacy opt-out enabled, When processing attention signals, Then signals for that listing are not persisted in the feature store and are excluded from downstream feature groups.
- Given an audit inquiry, When logs are requested, Then access and write audit logs for any listing_id over the past 24 months are retrievable, including actor, time, action, and outcome.
- Given data residency constraints per MLS agreement, When persisting data, Then data is stored within the agreed region with no cross-region replication of restricted datasets.
Health Metrics, SLAs, and Observability in Analytics Layer
- Given the analytics layer is accessed, When viewing pipeline dashboards, Then these metrics are visible: ingestion_lag_minutes by source, freshness_minutes by feature, daily_rollup_status, record_counts_by_source, data_quality_failures, and backfill_progress with last_updated_ts.
- Given SLA thresholds are breached (lag > 45 minutes, freshness > 60 minutes, data_quality_failure_rate > 1%), When detected, Then alerts are sent within 5 minutes to on-call with run_id, source, and deep links to failing dashboards.
- Given public health endpoints, When calling GET /pipeline/sla and GET /pipeline/health, Then responses include per-source freshness, last_success_ts, and error_counts, and update at least every 5 minutes.
Feature Store Outputs for Drop Timing Recommendation
- Given the recommendation engine requests features, When reading from the feature store, Then the following feature groups exist with correct types and cadences: seasonality_features (daily), traffic_features (hourly), attention_features (sub-hourly), competitive_features (sub-hourly), mls_window_features (daily), historical_performance_features (daily).
- Given online and offline stores, When validating parity, Then feature definitions and values have skew <= 0.5% over a 24h window.
- Given a new market is onboarded, When enabled, Then feature groups are populated within 24 hours and backfilled where source data exists.
- Given schema evolution to v1.x+1, When deployed, Then backward-compatible views maintain prior contracts and deprecations are announced in the catalog with effective dates.
Time-Window Recommendation Engine
"As a listing agent, I want clear recommended days and hours with a confidence score so that I can pick the best moment to announce a price change."
Description

Create a scoring and ranking service that proposes optimal day/hour windows to announce a price change, balancing predicted visibility uplift against risk and constraints. Inputs include seasonality features, recent attention velocity, competitive activity, MLS refresh timing behavior, and audience engagement by channel (email, SMS, social). Output 3–5 recommended windows over the next 7 days with confidence scores, projected uplift, and sensitivity to alternative slots. Support cold-start heuristics when data is sparse and degrade gracefully. Provide APIs and UI components to surface recommendations in the listing dashboard within TourEcho.

Acceptance Criteria
Recommendation Output Contract
- Given a listing with active status and a valid MLS ID, When the engine is asked for price-change announcement windows for the next 7 days, Then it returns between 3 and 5 unique day/hour windows within the next 7 days, none in the past, sorted by descending score.
- And each window contains: windowId, startAt (ISO 8601 with timezone), durationHours=1, score (0–100), confidence (0–1), projectedUpliftPercent (>=0), rationaleTopSignals (array<=3).
- And the response contains: listingId, generatedAt (ISO 8601), modelVersion, timeZone.
- And no two recommended windows overlap.
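A consumer-side check of this contract might look like the sketch below; field names follow the criteria, while the sample payload and fixed `now` are illustrative:

```python
from datetime import datetime, timedelta, timezone

def validate_windows(response, now):
    """Assert the structural rules: 3-5 future windows inside 7 days,
    scores sorted descending, no overlapping one-hour slots."""
    windows = response["windows"]
    assert 3 <= len(windows) <= 5, "must return 3-5 windows"
    starts = [datetime.fromisoformat(w["startAt"]) for w in windows]
    assert all(now <= s <= now + timedelta(days=7) for s in starts), "future, within 7 days"
    scores = [w["score"] for w in windows]
    assert scores == sorted(scores, reverse=True), "sorted by descending score"
    ordered = sorted(starts)
    assert all(b - a >= timedelta(hours=1) for a, b in zip(ordered, ordered[1:])), "no overlap"
    return True

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
response = {
    "listingId": "L-123",
    "modelVersion": "v1",
    "windows": [
        {"windowId": "w1", "startAt": (now + timedelta(hours=20)).isoformat(), "score": 91},
        {"windowId": "w2", "startAt": (now + timedelta(hours=44)).isoformat(), "score": 84},
        {"windowId": "w3", "startAt": (now + timedelta(hours=68)).isoformat(), "score": 77},
    ],
}
```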
Scoring and Ranking Determinism
- Given a fixed input feature set and model version, When the engine is called multiple times within 5 minutes, Then it returns identical ordered recommendations and scores.
- And if two candidate windows have equal scores, the earlier startAt ranks higher.
- And all scores are monotonically non-increasing down the list.
Sensitivity to Alternative Slots
- Given a recommended window, When sensitivity is calculated, Then the response includes at least two adjacent alternative slots (prevHour, nextHour) with deltaProjectedUpliftPercent values.
- And each deltaProjectedUpliftPercent reflects the projected change relative to the primary window (negative or positive).
- And a sensitivityBandPercent field (min, max) is provided for the window.
Cold-Start and Graceful Degradation
- Given sparse data (fewer than 10 engagement events and no comparable comps within 5 miles in the last 14 days), When the engine is invoked, Then it falls back to heuristics using local seasonality and MLS refresh timing.
- And it returns at least 3 windows with confidence <= 0.40 and a dataSparsityWarning flag.
- And external-service errors are not surfaced to the UI; non-blocking warnings are included in the payload.
Input Utilization and Feature Attribution
- Given available inputs (seasonality, attention velocity, competitive activity, MLS refresh timing behavior, channel engagement), When the engine computes scores, Then each recommended window includes featureAttribution weights across the five input categories that sum to 1.0 (±0.01).
- And removing any single category in an A/B test changes at least one window’s score by >= 1 point on the 0–100 scale.
API Contract, Performance, and Errors
- Given the endpoint GET /v1/listings/{listingId}/price-change-windows, When called with a valid authorized token and listingId, Then it returns HTTP 200 with the defined response contract within 500 ms at p95 and achieves 99.9% availability monthly.
- And requests with an unknown listingId return 404; malformed parameters return 400; unauthorized requests return 401; forbidden access returns 403.
- And responses include an ETag and support If-None-Match to return 304 when unchanged.
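The ETag/If-None-Match behavior can be sketched server-side as a hash of the canonical response body; the 16-character truncation is an arbitrary choice for the sketch:

```python
import hashlib
import json

def etag_for(payload):
    """Strong ETag derived from the canonical JSON encoding of the body."""
    body = json.dumps(payload, sort_keys=True).encode()
    return '"' + hashlib.sha256(body).hexdigest()[:16] + '"'

def conditional_get(payload, if_none_match=None):
    """Return (status, body, etag); 304 with an empty body when the
    client's cached ETag still matches the current payload."""
    tag = etag_for(payload)
    if if_none_match == tag:
        return 304, None, tag
    return 200, payload, tag

status_first, body, tag = conditional_get({"windows": [1, 2, 3]})
status_cached, _, _ = conditional_get({"windows": [1, 2, 3]}, if_none_match=tag)
```

Because the tag is recomputed from the payload, any change to the recommendations automatically invalidates the client's cached copy.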
UI Presentation in Listing Dashboard
- Given a user viewing a listing in TourEcho, When the Drop Timing card loads, Then it displays the top 3 recommended windows with local day/date/time, confidence, and projected uplift percent.
- And times are rendered in the listing’s local timezone and respect daylight saving transitions.
- And the user can expand to see up to 5 windows, view adjacent alternative slots with deltas, and open an “Explain why” panel showing top signals.
- And loading, empty, and degraded-state messages are shown appropriately, and a manual refresh updates results within 2 seconds.
One-Tap MLS + Marketing Scheduler
"As a broker-owner, I want to schedule the MLS update and coordinated marketing pulses in one tap so that my price change has maximum synchronized visibility."
Description

Enable a single action that schedules the MLS price change update and synchronized marketing pulses (agent email, SMS to interested buyers, social posts, portal refresh triggers) for the chosen recommended window. Integrate with MLS via RESO Web API/partner connectors, marketing channels, and TourEcho’s job orchestrator for timed execution, retries, and idempotency. Include templated content that pulls listing details and seller-approved messaging, with preview and preflight checks (credentials, permissions, blackout conflicts). Log all actions with audit trails and provide immediate rollback/cancellation before execution.

Acceptance Criteria
One-Tap Schedules MLS and Marketing Pulses
Given a listing has a Drop Timing recommendation window [windowStart, windowEnd] in the agent’s timezone and seller-approved messaging is available When the agent taps “Schedule Price Change” and confirms the preview Then the system schedules the MLS price update at a datetime scheduledAt ∈ [windowStart, windowEnd] And schedules agent email, SMS to interested buyers, social posts, and portal refresh triggers each at times ∈ [scheduledAt − 5 minutes, scheduledAt + 5 minutes] And returns a jobGroupId with per-channel jobIds And the confirmation UI displays scheduledAt in the agent’s timezone and the count of channels scheduled And no job is scheduled outside [windowStart, windowEnd] ± 5 minutes And channels not connected are excluded and listed as “Not scheduled” with reasons
Preflight Checks Block Scheduling
Given the agent opens the scheduler for a listing When preflight runs Then the system validates MLS credentials, MLS edit-price permission, channel connections, content merge, blackout conflicts, and overlapping job windows for the same listing within ±30 minutes And if any check fails, the Schedule button is disabled and a distinct error is shown per failing check with codes (e.g., MLS_AUTH_MISSING, PERMISSION_DENIED, CHANNEL_DISCONNECTED, CONTENT_INVALID, BLACKOUT_CONFLICT, WINDOW_OVERLAP) And no jobs are created on failure And resolving all errors re-enables the Schedule button without page reload
Templated Content Merge and Preview
Given channel templates contain placeholders like {{address}}, {{oldPrice}}, {{newPrice}}, {{mlsNumber}}, and seller-approved messaging When the agent opens Preview Then all placeholders resolve from the listing snapshot at scheduling time And SMS content length ≤ 320 characters, social post length ≤ 280 characters per network with required hashtags/handles inserted, and email subject length ≤ 78 characters And all links include UTM parameters utm_source, utm_medium, utm_campaign with campaignId = jobGroupId at execution time And HTML content is sanitized (no scripts/unsafe tags) and images resolve with valid URLs And the preview shows one render per channel with the exact text and media that will publish
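Appending the required UTM parameters at execution time can be done with standard URL manipulation. A sketch, assuming a hypothetical `add_utm` helper (the `source`/`medium` defaults are illustrative):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def add_utm(url, job_group_id, source="tourecho", medium="email"):
    """Merge utm_source/utm_medium/utm_campaign into a link while
    preserving any query parameters already on the URL."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": job_group_id,  # spec: campaignId = jobGroupId
    })
    return urlunparse(parts._replace(query=urlencode(query)))
```

Running this per channel at execution time (rather than at scheduling time) guarantees the campaign identifier matches the jobGroupId actually created.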
Idempotent Duplicate Submission Handling
Given the schedule request includes an idempotencyKey derived from listingId, newPrice, windowStart, windowEnd, and channel set When the agent submits the same request multiple times within 10 minutes Then only one job group is created And subsequent submissions return 200 with the original jobGroupId And no duplicate per-channel jobs exist (count per channel = 1) And audit logs record the idempotencyKey on all related events
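Deriving the idempotencyKey from the request's identifying fields can be sketched as a canonical hash. The helper below is illustrative (field serialization is an assumption, not the specified wire format); sorting the channel set makes the key order-independent:

```python
import hashlib
import json

def idempotency_key(listing_id, new_price, window_start, window_end, channels):
    """Deterministic key: identical schedule requests (regardless of
    channel ordering) hash to the same value, so duplicate submissions
    resolve to the original jobGroupId."""
    payload = json.dumps({
        "listing": listing_id,
        "price": new_price,
        "start": window_start,
        "end": window_end,
        "channels": sorted(channels),
    }, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

Any change to the price, window, or channel set produces a new key and therefore a new job group; resubmitting the same request reuses the stored one.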
Execution-Time Retry and Backoff Policy
Given a scheduled job executes and a retriable error occurs (e.g., HTTP 5xx, timeout, MLS 429) When the orchestrator handles the failure Then it retries with exponential backoff starting at 30s, doubling up to a maximum of 5 attempts with jitter ±20% And on eventual success, only one publish/update occurs for that channel And on final failure, the job is marked Failed, a DLQ item is created, and the agent receives a notification within 1 minute And non-retriable errors (4xx auth/permission) are not retried and are surfaced immediately
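The retry schedule above (30s base, doubling, 5 attempts, ±20% jitter) can be sketched as a delay generator. A minimal illustration, not the orchestrator's actual implementation:

```python
import random

def backoff_delays(base=30.0, max_attempts=5, jitter=0.2, rng=None):
    """Exponential backoff schedule: nominal delays of 30s, 60s, 120s,
    240s, 480s, each perturbed by +/-20% jitter to avoid thundering herds."""
    rng = rng or random.Random()
    return [base * (2 ** i) * (1 + rng.uniform(-jitter, jitter))
            for i in range(max_attempts)]
```

The orchestrator would sleep for each delay in turn, stopping early on success and routing to the DLQ after the final attempt fails.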
Audit Trail Logging and Export
Given a schedule is created, executed, failed, canceled, or retried When the event occurs Then an audit entry is written with timestamp (UTC), actorId, listingId, jobGroupId, jobId, channel, action, statusBefore → statusAfter, payloadHash, externalReferenceId (e.g., MLS change id/post id/message id), and errorCode if any And audit entries are immutable and queryable by listingId, jobGroupId, channel, and date range And the agent can export a CSV for a listing that downloads within 30s for up to 10,000 rows and includes column headers and UTF-8 encoding
Rollback/Cancellation Before Execution
Given a jobGroup has future scheduled jobs and none have started When the agent clicks Cancel Schedule and confirms Then all pending jobs in the group are canceled within 5 seconds and marked Canceled And no channel publishes after cancellation And the UI reflects Canceled status and frees the recommended window for re-scheduling And an audit entry records the cancellation with reason And if some jobs have already executed, the system reports which channels executed and does not attempt deletion unless supported by the channel API, providing manual unpublish links where applicable
Smart Constraints & Compliance Guardrails
"As an agent, I want the system to respect MLS rules, quiet hours, and my seller’s preferences so that I avoid fines and maintain professionalism."
Description

Implement constraint handling to respect MLS board rules (announcement order, allowed hours), brokerage policies, seller approvals, do-not-contact lists, quiet hours, and regional holidays. Support agent-defined blackout windows, time zones, and preferred days. Validate scheduled drops against constraints at creation and just-in-time before execution, offering compliant alternatives automatically. Maintain a rules catalog per MLS and surface any violations with clear remediation steps within TourEcho.

Acceptance Criteria
Creation-Time MLS Hours and Announcement Order Validation
Given an agent schedules a price change for a listing under MLS X When the selected time is outside MLS X allowed announcement hours Then the system blocks scheduling, cites the specific MLS rule/code, and presents at least 3 compliant alternative time slots within the next 7 days
Given marketing pulses are enabled When MLS order-of-announcement requires MLS posting before external marketing Then the system sequences the MLS update first and offsets pulses by the rule-defined minimum delay; if sequencing cannot comply, block scheduling and propose the earliest compliant sequence and times
Given time and sequence are compliant When the agent confirms Then the schedule is saved and a confirmation displays the execution time in the listing’s local time zone and the agent’s local time
Just-in-Time Revalidation Before Execution
Given a drop is scheduled more than 15 minutes in advance When the just-in-time validation runs at T-15 minutes Then the system re-evaluates MLS rules, regional holidays, brokerage policies, seller approval status, quiet hours, blackout windows, preferred days, time zone, and do-not-contact constraints
Given any constraint is violated at T-15 When the system evaluates alternatives Then execution is prevented; the drop is automatically rescheduled to the next compliant slot within 72 hours and the agent is notified with a change summary (old vs. new time and reason)
Given no compliant slot exists within 72 hours When revalidation fails to reschedule Then the drop is set to Needs Attention, and the agent is notified with options to pick from suggested compliant windows or cancel Then all decisions and reason codes are appended to the audit log
Brokerage Policies and Seller Approval Enforcement
Given brokerage policy requires seller approval for price changes When no seller approval is on file at scheduling Then scheduling is blocked and the system launches an approval request flow (shareable link/email/SMS) and tracks status
Given seller approval is captured When approval is recorded Then the schedule may proceed; the approval record stores seller identity, timestamp, and method, and is linked in the audit log
Given a scheduled drop lacks approval at T-15 When just-in-time validation runs Then execution is prevented, the agent is notified, and the item remains pending until approval is received or the schedule is canceled
Do-Not-Contact Suppression for Marketing Pulses
Given marketing pulses include recipients sourced from contact lists When recipients are flagged on do-not-contact or channel-specific opt-out lists Then those recipients are excluded from sends, zero messages are delivered to them, and the UI displays the suppression count by channel Then a suppression summary and downloadable report are available for the pulse, and the system suggests at least one compliant alternative (e.g., different channel or narrower audience)
Given all recipients are suppressed for a channel When execution time arrives Then that channel’s pulse is skipped with a clear notice and without delaying MLS updates or other compliant channels
Quiet Hours, Preferred Days, and Agent Blackout Windows
Given an agent has configured quiet hours, preferred days, and blackout windows When the agent attempts to schedule within quiet hours or a blackout window Then scheduling is disallowed and the system offers the next 3 compliant times falling on preferred days in the listing’s local time zone
Given the agent wants to override preferred days When selecting a non-preferred day Then the system allows it only if quiet hours and blackout windows are still respected, requires a justification note, and records the override in the audit log Then the confirmation displays both MLS-local time and the agent’s local time to avoid ambiguity
Time Zone and Regional Holiday Handling
Given a listing is associated with MLS time zone Tz When scheduling or displaying execution times Then all times are stored in UTC and displayed in both Tz and the agent’s local time; user inputs are interpreted in Tz unless explicitly changed
Given the scheduled time lands on a regional holiday or office closure for Tz When validating Then the system proposes the next business day within allowed MLS hours and explains the adjustment
Given a DST transition in Tz When a time is ambiguous or non-existent Then ambiguous times are disallowed; non-existent times auto-shift to the next valid minute with an explicit notice in the confirmation
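The ambiguous/non-existent distinction can be detected with standard `zoneinfo` fold semantics. A sketch, assuming a hypothetical `classify_local_time` helper (a fall-back repeat and a spring-forward gap both produce differing fold offsets; the round trip through UTC separates them):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def classify_local_time(naive_dt, tz_name):
    """Return 'ok', 'ambiguous' (fall-back repeat), or 'nonexistent'
    (spring-forward gap) for a naive wall-clock datetime in tz_name."""
    tz = ZoneInfo(tz_name)
    d0 = naive_dt.replace(tzinfo=tz, fold=0)
    d1 = naive_dt.replace(tzinfo=tz, fold=1)
    if d0.utcoffset() == d1.utcoffset():
        return "ok"
    # Offsets differ: the wall time either occurs twice or not at all.
    # A gap time does not survive a round trip through UTC.
    round_trip = d0.astimezone(timezone.utc).astimezone(tz)
    if round_trip.replace(tzinfo=None) != naive_dt:
        return "nonexistent"
    return "ambiguous"
```

For example, 2:30 AM on 2025-03-09 in America/New_York classifies as "nonexistent" (and would auto-shift per the rule above), while 1:30 AM on 2025-11-02 classifies as "ambiguous" (and would be disallowed).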
MLS Rules Catalog and Violation Remediation
Given an MLS board is selected for a listing When retrieving rules Then a per-MLS rules catalog is available with rule name, code, allowed hours, announcement order, effective version, and effective date via UI and API
Given rules change for the MLS When a new version becomes effective Then the system alerts affected users, revalidates future schedules, and lists impacted items with recommended fixes
Given a scheduling attempt violates a rule When presenting the error Then the UI displays the exact rule reference, violation reason, and one-tap remediation options (e.g., compliant time suggestions, sequence changes); applying a recommended fix updates the schedule and logs the decision
Why-Now Explainability & Alerts
"As an agent, I want to understand why a specific time is recommended and receive reminders so that I can justify the timing to my seller and not miss the window."
Description

Provide transparent reasoning for each recommended window, highlighting top drivers (e.g., high weekend foot traffic, competing price drops, rising QR scans) and expected impact. Show simple visuals of attention trends and seasonality overlays. Deliver alerts and reminders (web, mobile push, email) when a high-scoring window is approaching, and suggest fallback slots if a window is missed or conditions change. Allow agents to share a seller-facing summary directly from TourEcho.

Acceptance Criteria
Explain Reasoning for Recommended Window
Given a recommended price-drop window is displayed for a listing, When the agent opens the Why-Now panel, Then a plain-language rationale (≤120 words) explains why this window is recommended. And the rationale references at least two distinct data factors by name (e.g., weekend foot traffic, recent QR scans). And an expected impact range is shown as a % change in attention (e.g., +15–25%) with a stated confidence level (e.g., 80% CI). And the panel displays the model refresh timestamp in both the agent’s local time and UTC. And a “How it’s calculated” link opens documentation in a new tab.
Display Top Drivers with Metrics
Given the Why-Now panel is open, When the agent views the Top Drivers section, Then at least three drivers are listed with weight percentages that sum to 100% (±1% rounding tolerance). And each driver shows: metric name, current value, 30-day baseline, absolute delta, percent delta, and contribution score (0–100). And drivers are sorted by contribution score descending. And tooltips are available that define each metric and its calculation window.
Visualize Attention Trends and Seasonality
Given the listing has at least 14 days of attention data, When the agent views the attention chart, Then a line chart displays daily attention for the past 30 days with a 7-day moving average overlay. And a seasonality band for the local market (e.g., past 2-year median ± IQR) is overlaid with a legend. And hovering a data point shows date, raw value, 7DMA value, seasonality median, and band range. Given the listing has fewer than 14 days of data, When the agent views the chart, Then the chart displays available days and shows a “Limited history” notice.
Alert Users Before High-Scoring Window
Given a recommended window has a score ≥ 80/100 and the agent has at least one alert channel enabled, When the window is 24 hours from starting, Then an alert is sent via all enabled channels (in-app/web, mobile push, email) including: local start time, expected impact %, top two drivers, and action buttons for Schedule MLS Update, Snooze, and Dismiss. And When the window is 1 hour from starting and not yet scheduled, Then a second alert is sent with the same details. And When the agent taps Schedule MLS Update from any alert, Then a schedule is created and a confirmation is shown in-app and via the originating channel. And each alert includes a link to manage notification preferences.
Suggest Fallback Slots on Missed Window or Changed Conditions
Given a recommended window has passed without a scheduled update OR its score has dropped by ≥10 points due to new data, When the agent next opens the listing or within 60 minutes of the change (whichever comes first), Then up to three fallback slots within the next 7 days are presented with their scores, confidence badges, and one-sentence rationales. And selecting a fallback slot schedules the update and replaces any prior schedule. And the system logs the fallback recommendation, selection, and timestamp for auditing.
Share Seller-Facing Summary
Given the Why-Now panel is open, When the agent clicks Share with Seller, Then a secure share link and a downloadable PDF are generated containing: rationale summary, top drivers (max 5), attention and seasonality charts, proposed timing, and expected impact range. And the seller view excludes internal weights and raw source IDs, includes brokerage branding and listing details, and is view-only. And the share link expires after 14 days by default, with options to extend or revoke immediately. And email sharing uses a prefilled template and records delivery status and first-open timestamp.
Impact Measurement & Learning Loop
"As a broker-owner, I want to see how the chosen drop time affected attention and days-on-market so that I can refine strategy and coach my agents."
Description

Track the outcomes of each price-change drop across channels: subsequent listing views/saves, showing requests, QR scans, inquiry volume, and offer activity. Attribute uplift to the scheduled time window versus baseline patterns and comparable listings. Generate a post-drop report in TourEcho and feed labeled results back into the recommendation model for continuous improvement. Support optional A/B comparisons when multiple windows are tested across similar listings or markets.

Acceptance Criteria
Uplift Attribution vs Baseline (Single Price Drop)
Given a listing with ≥14 days of pre-drop data and a recorded drop timestamp T When the 24h, 72h, and 7d post-drop windows close Then the system computes per-channel deltas versus the pre-drop 14-day daily median for views, saves, showing requests, QR scans, inquiries, and offers And stores absolute and percentage uplift for each window and channel And computes 95% confidence intervals and marks significance where p < 0.05 And persists an attribution score for the scheduled time window versus matched control periods and comparables
Cross-Channel Event Capture & De-duplication
Given event feeds from MLS, portals, QR codes, website analytics, and CRM within [T-14d, T+7d] When events are ingested Then timestamps are normalized to UTC and property local timezone retained as a dimension And duplicate events are collapsed by composite key (source, external_id, listing_id, timestamp ±2 minutes) And the duplicate rate in a 1,000-event sample is <1% And unique per-channel and total counts are computed and stored And missing or delayed feeds (>30 minutes) are flagged and surfaced in the report
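The composite-key collapse can be approximated by keeping the first event per key and dropping later arrivals inside the tolerance window. A sketch only: the exact ±2-minute matching strategy (first-kept wins vs. clustering) is an implementation choice, and the event dict shape here is hypothetical:

```python
def dedupe(events, window_seconds=120):
    """Collapse events sharing (source, external_id, listing_id) whose
    timestamps fall within window_seconds of the last kept event."""
    kept = []
    last_kept = {}  # composite key -> epoch timestamp of last kept event
    for ev in sorted(events, key=lambda e: e["ts"]):
        key = (ev["source"], ev["external_id"], ev["listing_id"])
        prev = last_kept.get(key)
        if prev is not None and ev["ts"] - prev <= window_seconds:
            continue  # duplicate within the +/-2-minute tolerance
        last_kept[key] = ev["ts"]
        kept.append(ev)
    return kept
```

Sorting by timestamp first guarantees deterministic output regardless of feed arrival order.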
Post-Drop Report Generation & Access
Given the 7-day post-drop window completes or the user manually finalizes earlier When report generation is triggered Then a post-drop report is produced within 2 minutes And it includes: Overview summary, per-channel metrics, attribution vs baseline and comparables, time-series charts, significance markers, and an Experiments section if applicable And the report is accessible from the TourEcho listing dashboard and via a shareable URL And PDF and CSV exports are generated within 30 seconds and match on-screen figures within ±0.1%
Learning Loop: Outcome Labeling & Model Update
Given a post-drop report is finalized When the learning job runs Then a labeled training example is emitted containing features (market, seasonality, drop weekday/hour, weekend flag, baseline intensity, comp-set attributes) and labels (per-channel uplifts, attribution score, significance flags) And the example appears in the training dataset within 24 hours with lineage to listing and drop T And the next model version is trained within 24 hours of dataset update and versioned (e.g., vX.Y) with metrics logged (AUC/MAE) and accessible in model registry And the recommendation service begins serving the new version only after passing offline thresholds and a canary check (no regression >2% on key metrics)
A/B Test Setup and Analysis Across Similar Listings
Given a user defines an experiment with 2–3 time-window arms and a matching rule (market, property type, price band ±10%, DOM ±7 days) When price drops are scheduled and executed across enrolled listings Then listings are assigned to arms without overlap and with balance across key attributes And the analysis computes per-arm outcomes and difference-in-differences uplift with p-values And a winner is declared only if p < 0.05 and minimum sample size ≥10 listings per arm; otherwise status = Inconclusive And experiment results link to each listing’s post-drop report and can be exported
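The difference-in-differences uplift named above reduces to subtracting the control arm's pre/post change from the treatment arm's. A minimal illustration (significance testing and sample-size gating are omitted; the helper name is hypothetical):

```python
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Uplift attributable to the tested window = the treatment arm's
    change minus the control arm's change over the same period."""
    return (treat_post - treat_pre) - (control_post - control_pre)
```

For example, if enrolled listings in one arm rose from 40 to 70 showings while matched controls rose from 38 to 48 over the same window, the estimated uplift is (70 − 40) − (48 − 38) = 20 showings; a winner would still only be declared if p < 0.05 with ≥10 listings per arm.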
Baseline & Comparable Selection Rules
Given a listing prepared for attribution When baseline is computed Then it uses the rolling 14-day pre-drop daily median per metric, excluding the 24 hours before T and major holidays And users can optionally switch to 7- or 30-day windows with results recalculated within 60 seconds When comparables are selected Then comps meet filters (same MLS area/neighborhood, property type, price band ±10%, beds ±1, DOM ±14, active within 30 days) And k ∈ [5,20]; if k < 5, fallback to broader geography or market aggregates and mark Low Confidence And the final comp set is displayed and persisted for audit
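The baseline rule above (rolling pre-drop daily median, excluding the 24 hours before T) can be sketched over a per-day series. The helpers and data layout are illustrative; holiday exclusion is omitted for brevity:

```python
import statistics

def baseline_median(daily_values, drop_index, window=14, exclude_days=1):
    """Median of up to `window` pre-drop days, skipping the `exclude_days`
    immediately before the drop (the spec's 24-hour exclusion)."""
    end = drop_index - exclude_days
    start = max(0, end - window)
    pre = daily_values[start:end]
    return statistics.median(pre) if pre else None

def pct_uplift(post_value, baseline):
    """Percentage uplift of a post-drop daily value over the baseline."""
    return None if not baseline else (post_value - baseline) / baseline * 100.0
```

With a simple series where pre-drop days hover around 12 views/day and the first post-drop day records 30, the computed uplift is 150%; switching to a 7- or 30-day window is just a different `window` argument.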
Edge Cases, Confounders, and Timezone Handling
Given co-occurring marketing events (e.g., email blast, open house, new media) logged within the measurement window When such events are detected Then the report flags Confounder Present and either excludes overlapping hours from attribution or downgrades confidence per rule And if the public price-update timestamp differs from scheduled T by >30 minutes, the measurement window re-bases to the actual public timestamp And all timestamps are stored in UTC and displayed in the property’s local timezone with correct DST handling And if a channel feed is missing, the metric is marked N/A, excluded from roll-ups, and a Report Completeness score is shown

Product Ideas

Innovative concepts that could enhance this product's value proposition.

Objection Auto-Tasks

Convert top objections into assigned tasks with cost ranges, vendor suggestions, and due dates, so teams move from talk to action fast.


Quiet Hours Shield

Auto-enforce office quiet hours and travel buffers; block after-hours bookings and auto-suggest next slots, preventing burnout and double-booking.


Room Snap Feedback

Let buyer agents attach room photos and 1–5 emoji ratings during scans; AI redacts faces and tags issues, giving agents visual proof and prioritized fixes.


QR Trust Stamp

Verify visitor identity via MLS ID or one-time SMS and sign each submission, surfacing 'Verified Agent' badges and filtering spam without accounts.


Broker SSO Bridge

Add SAML/OIDC login with role-based defaults and SCIM provisioning, giving brokerages one-click onboarding, automatic deprovisioning, and cleaner audit trails.


Price Pulse Simulator

Model expected days-on-market change for proposed price drops using live sentiment and comps; preview 'drop 2% → +30% showings' before committing.


Press Coverage

Imagined press coverage for this groundbreaking product concept.


TourEcho Unveils Trusted QR Feedback and Visual Evidence Suite to Turn Showings into Action in Minutes

Imagined Press Article

Austin, TX — TourEcho today announced the Trusted QR Feedback and Visual Evidence Suite, a tightly integrated set of capabilities that transforms at-the-door impressions into verified, privacy-safe insights and instantly actionable tasks. Designed for listing agents, coordinators, and broker-owners, the new suite pairs secure identity verification with guided, on-device photo capture and AI-powered analysis so teams can move from feedback to fixes in a single workflow.

“Agents tell us the problem is no longer a lack of feedback—it’s the noise, uncertainty, and manual effort to make sense of it. Text threads pile up, details get lost, and sellers want proof,” said Maya Chen, co-founder and CEO of TourEcho. “With our trusted QR and visual stack, buyer agents can share structured room-level impressions in seconds, while listing teams receive verified, high-signal data that auto-converts into next steps. It’s faster, clearer, and defensible.”

What’s new and how it works

- Guided Snaps provides step-by-step camera prompts that detect room type and suggest the most useful angles. Framing overlays help visitors capture consistent, high-signal visuals without training.
- Redact Shield performs real-time, on-device redaction to mask faces, family photos, documents, and other sensitive items the instant a photo is taken, minimizing compliance risk and building seller trust.
- Offline Capture allows photos and emoji ratings to be captured even in dead zones and crowded open houses. Submissions are time-stamped and sync as soon as connectivity resumes.
- MLS Roster Match and SMS Trust Link verify the identity of visitors in seconds—no app or account required. Verified Agent badges appear automatically and flow through to all reports.
- Proof Seal cryptographically signs each QR submission with time, listing, and verified identity, preventing tampering and ensuring every export holds up to internal and external scrutiny.
- Risk Guard and Policy Profiles synthesize signals like duplicate numbers or mismatched roster data to adaptively step up verification, while admins configure policies by listing, team, or event type.
- Issue Heatmap aggregates tags and sentiment across all visits into a room-by-room view of top pain points, weighted by predicted impact on buyer interest.
- Snap‑to‑Task converts any tagged issue in a photo directly into an actionable task with location, scope, and pre-matched vendors.
- Before/After Proof aligns new snaps with prior angles to confirm that fixes addressed the original objections and to update sentiment and impact scores.

Crucially, Impact Rank and ROI Gauge are embedded throughout the flow, estimating the effect of each objection and recommended fix on days-on-market and showing lift. Coordinators and agents can justify spend with confidence, while sellers get clear visuals and outcomes they can agree on quickly.

“The new suite meets the realities of busy showings,” said Jordan Alvarez, a team listing coordinator who manages 30–40 active listings at a multi-market brokerage. “We replaced unstructured texts and guesswork with verified, consistent feedback and tasks we can assign in one click. Our sellers finally see why something matters—and what we’re doing about it—without the back-and-forth.”

Buyer agents benefit from a frictionless experience. Visitors scan a QR code on a door hanger, confirm identity via MLS match or a one-tap SMS link, and submit quick, structured room-level impressions with optional photos. “I can log the top three wins and issues in under a minute while it’s fresh,” said Priya S., a buyer agent in Denver. “The process respects my time and my client’s privacy, and I know my feedback is valued because it’s visible and acted on.”

For broker-owners and compliance leaders, the auditability is a game-changer. Badge Everywhere surfaces Verified Agent status across schedules, notifications, seller portals, and exports. Trust Analytics provides portfolio-level dashboards tracking verified rates, MLS coverage, and the downstream impact on response quality and conversion. “In our coaching sessions, we can now point to verified signals and room-level heatmaps, not anecdotes,” said Marcus L., broker-owner and performance coach. “It’s elevated our pricing and staging conversations and helped us shave days off market without over-cutting.”

Why it matters now

- Agents report TourEcho cuts follow-up time by up to 70% by replacing scattered texts with clear, actionable readouts.
- Early adopters using the trusted visual stack report tighter seller alignment and faster approvals on budget and scope.
- Brokerages gain defensible audit trails and policy controls without adding friction to buyer agents.

Availability and packaging

The Trusted QR Feedback and Visual Evidence Suite is available today for all TourEcho customers globally. Verification features (MLS Roster Match, SMS Trust Link, Proof Seal, Badge Everywhere, Risk Guard, Policy Profiles, Trust Analytics) are included in Pro and Enterprise plans; visual capture and action capabilities (Guided Snaps, Redact Shield, Offline Capture, Issue Heatmap, Snap‑to‑Task, Before/After Proof) are included across all plans, with select advanced analytics available to Enterprise. Existing customers will see features roll out automatically over the coming weeks; admins can configure policy defaults and vendor directories on day one.

About TourEcho

TourEcho is a lightweight showing-management platform for residential listing teams. It schedules showings, captures at-the-door QR feedback, and AI-summarizes sentiment and room-level objections. Agents replace scattered texts with clear, actionable readouts that cut follow-up time and shave days off market, while broker-owners gain roll-up analytics to coach pricing and staging decisions across portfolios.

Press and analyst contact

- Media: press@tourecho.com
- Partnerships: partners@tourecho.com
- Website: www.tourecho.com
- Phone: +1 (512) 555-0174

All product names and features referenced are available or rolling out as described. Timing and packaging are subject to change.


TourEcho Launches Quiet Scheduling Guardrails to Protect Agent Time Without Losing a Single Showing

Imagined Press Article

Austin, TX — TourEcho today introduced Quiet Scheduling Guardrails, a comprehensive set of policies and automations that protect agent wellness and eliminate double-booking while preserving booking velocity and seller satisfaction. Building on TourEcho’s smart scheduling engine, the new guardrails bring together Quiet Profiles, Smart Buffers, Polite Redirect, Team Rebalance, Override Escalation, and Calendar Lock to keep calendars humane, realistic, and conflict-free.

“Real estate is a relationship business, but sustained performance requires sustainable schedules,” said Maya Chen, co-founder and CEO of TourEcho. “Our customers asked for a way to maintain clear boundaries without sacrificing responsiveness. Quiet Scheduling Guardrails make respect for time a feature, not a favor—so agents stay sharp, sellers stay informed, and buyers get quick, confident answers.”

What’s included and how it works

- Quiet Profiles: Create layered quiet-hour policies by office, team, agent, listing, and day type (weekday/weekend/holiday). Compliance admins set organizational defaults, while agents opt into personal windows within bounds.
- Smart Buffers: Automatically calculate realistic travel and parking time between showings using distance and traffic patterns, preventing back-to-back slots that lead to lateness and last-minute apologies.
- Polite Redirect: When requests land during quiet hours, TourEcho sends a branded, courteous response with one-tap alternative times and a waitlist, keeping momentum and clarity without after-hours back-and-forth.
- Team Rebalance: If a requested time conflicts with an agent’s quiet hours or travel buffers, TourEcho auto-routes the showing to on-duty teammates or showing assistants who meet coverage rules—preserving bookings and service levels.
- Override Escalation: For rare exceptions, a single tap triggers a time-boxed override with captured reason, auto-routed approval, and a clean audit trail, preventing policy erosion.
- Calendar Lock: TourEcho writes quiet-hour holds and travel blocks to Google, Outlook, and iCloud, and reads external conflicts back in, creating a single source of truth across tools.

These capabilities work in concert to reduce stress and errors while raising the quality of every interaction. For solo agents, it’s an instant relief valve. For teams and brokerages, it’s a system: humane guardrails, continuous coverage, and fewer surprises.

“Before, I lived in a fog of apologies and reshuffles,” said Ari R., a solo listing agent covering five active listings. “Smart Buffers and Polite Redirect changed my week one day at a time. I stopped racing between showings, my on-time rate jumped, and buyers actually thanked me for the clarity.”

Operations leaders see an equal payoff. “Quiet Profiles and Calendar Lock finally gave us consistent, organization-wide norms we can defend,” said Dana W., an office compliance admin in the Northeast. “We prevent after-hours requests from slipping through, capture approved exceptions with reasons, and keep clean audit trails for our records. It’s policy with heart.”

Broker-owners also highlight the business impact. “By eliminating unrealistic scheduling and routing requests to the right teammate when it matters, we reduced no-shows and protected our sellers’ availability,” said Samantha K., broker-owner of a 70-agent firm. “Pair that with TourEcho’s at-the-door sentiment summaries, and we’re making faster, higher-confidence decisions on pricing and staging while keeping our people fresh.”

Why it matters now

- The industry is balancing high client expectations with heightened focus on wellness and retention.
- Back-to-back bookings and late-night texting lead to burnout, missed opportunities, and lower conversion.
- Buyers still expect immediate clarity—Guardrails meet the moment with structured, courteous automation and documented exceptions.

Built for the whole TourEcho workflow

Quiet Scheduling Guardrails operate in context with the rest of the platform. When a buyer agent scans a QR code and submits feedback, listing teams receive AI-summarized sentiment along with a reliable schedule that respects buffers and quiet hours. If objections surface, Impact Rank and ROI Gauge suggest the fastest, highest-leverage fixes. When those fixes require vendors, Vendor Match, Objection Playbooks, Task Chains, and Seller Progress keep everyone aligned—without asking agents to sacrifice evenings or slog through manual rescheduling.

Availability and configuration

Quiet Scheduling Guardrails are available today to all TourEcho customers. Quiet Profiles, Smart Buffers, and Calendar Lock are included across plans; Polite Redirect, Team Rebalance, and Override Escalation are included in Pro and Enterprise tiers. Admins can import organization calendars on day one, set regional quiet-hour defaults, and allow agents to choose personal windows within policy. For teams with varied coverage, Team Rebalance can be enabled with office-level routing rules; beta support for third-party showing assistants is underway.

About TourEcho

TourEcho is a lightweight showing-management platform built for residential listing teams. It schedules showings, captures verified, at-the-door QR feedback, and AI-summarizes buyer sentiment and room-level objections. Agents replace scattered texts with clear readouts that cut follow-up time by up to 70% and shave a median six days off market, while broker-owners gain portfolio analytics to coach pricing and staging.

Press and analyst contact

- Media: press@tourecho.com
- Partnerships: partners@tourecho.com
- Website: www.tourecho.com
- Phone: +1 (512) 555-0174

Timing and packaging are subject to change.


TourEcho Introduces Pricing Intelligence to Pin the Sweet Spot and Prove the Why Behind Every Move

Imagined Press Article

Austin, TX — TourEcho today unveiled Pricing Intelligence, a decision suite that blends live showing sentiment with transparent comps and predictive models to help listing teams move with confidence—not guesswork. With Elasticity Curve, Scenario Compare, Smart Comps, Fix vs Drop, Confidence Bands, and Drop Timing, agents, coordinators, and broker-owners can see the likely impact of price moves and minor fixes before they act, then communicate the rationale to sellers in plain language.

“Pricing is the lever that most affects time-to-offer—and the one most often pulled in the dark,” said Maya Chen, co-founder and CEO of TourEcho. “Our customers wanted a way to model both price and action: ‘What if we fix lighting and hold price?’ ‘What if we refresh carpet and drop 1.5%?’ Pricing Intelligence answers those questions with clear, trustworthy forecasts grounded in the feedback buyers are actually giving at the door.”

What’s inside and how it helps

- Elasticity Curve: An interactive, listing-specific graph projects showing lift, days-on-market reduction, and offer likelihood at each 0.5–5% price change. Users can pin a recommended sweet spot to guide next steps.
- Scenario Compare: Save multiple price-move scenarios (e.g., −1%, −2.5%, −5%) and compare side-by-side KPIs, then share a seller-ready link or PDF with plain-language rationale.
- Smart Comps: Auto-curated comps are transparently weighted by proximity, recency, bed/bath, style, and live showing sentiment. Agents can pin or unpin comps and see forecasts recalculate instantly, with a clear explanation of why.
- Fix vs Drop: Toggle top objections—like carpet or lighting—as “resolved” to simulate the combined effect of minor fixes alongside or instead of a price move. See which path delivers more showings and faster offers for less.
- Confidence Bands: Every prediction displays low/likely/high ranges with a data sufficiency score and what’s driving uncertainty, plus tips to tighten bands (collect more feedback, add recent comps).
- Drop Timing: Recommend the best day and hour to announce a price change based on seasonality, weekend traffic, and recent listing attention. One tap schedules MLS updates and marketing pulses to maximize visibility.

For listing coordinators, the suite creates a clean workflow from insight to action. Objection summaries from TourEcho’s QR feedback flow directly into Fix vs Drop. If a small fix beats a big price cut, coordinators can spin up tasks via Objection Playbooks and Task Chains, route to the right agents, and present Seller Progress with clear costs and timelines. Vendor Match helps teams book vetted pros in a click, while Impact Rank and ROI Gauge quantify expected payoff.

“This helps us coach sellers earlier and with evidence,” said Parker M., an institutional asset manager overseeing dozens of listings. “We can show how addressing two top objections plus a measured price move outperforms a blunt 5% cut. It keeps asset plans disciplined and outcomes faster.”

Sellers appreciate transparency and choice. “We could finally see the trade-offs,” said Elena T., a homeowner who used TourEcho during a two-week pricing review. “The data made it easy to approve the quick fixes that mattered and avoid overcorrecting price. Our showings picked up within days.”

Operations and leadership teams value the defensibility and alignment. “The combo of Smart Comps and Confidence Bands is huge,” said Ivy N., a brokerage data leader. “Our agents explain not just the what, but the why and the uncertainty. When bands are wide, the system tells us how to tighten them—collect more verified feedback, add fresher comps—and we do.”

Why it matters now

- In a market where every day counts, price moves are costly and public. Getting them right builds credibility and momentum; getting them wrong erodes both.
- Buyers are telling you what matters in the home—TourEcho captures that in the moment and turns it into quantified guidance for pricing and prioritization.
- Transparent, adjustable models with clear ranges beat black boxes and gut checks, especially when communicating with sellers and leadership.

Availability and packaging

Pricing Intelligence is available today for TourEcho Pro and Enterprise customers. Elasticity Curve, Scenario Compare, and Smart Comps are included in Pro; Fix vs Drop, Confidence Bands, and Drop Timing are available in Pro with advanced capabilities in Enterprise, including portfolio roll-ups for broker-owners and asset managers. Existing TourEcho users will see Pricing Intelligence appear in their listing dashboard with historical sentiment pre-loaded where available.

About TourEcho

TourEcho is the lightweight showing-management platform that turns at-the-door QR feedback into clear, AI-summarized insights and action. Listing teams use TourEcho to schedule smarter, capture verified room-level impressions, prioritize fixes, and collaborate with sellers—cutting follow-up time by up to 70% and shaving a median six days off market.

Press and analyst contact

- Media: press@tourecho.com
- Partnerships: partners@tourecho.com
- Website: www.tourecho.com
- Phone: +1 (512) 555-0174

Product details, packaging, and timelines are subject to change.
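As an illustration of the elasticity and sweet-spot ideas in this release, here is a toy model. The square-root response curve, the elasticity coefficient, and the function names are made-up assumptions chosen only to show diminishing returns; TourEcho's actual forecasts are listing-specific and data-driven:

```python
import math

# Toy model only: coefficient and curve shape are illustrative assumptions,
# not TourEcho's forecasting method.
def projected_lift(drop_pct: float, elasticity: float = 8.0) -> float:
    """Projected % increase in weekly showings for a given % price drop.

    The square root encodes diminishing returns: each extra point of price
    cut buys less additional lift.
    """
    return elasticity * math.sqrt(drop_pct)

def sweet_spot(drops: list[float], min_marginal: float = 3.0,
               elasticity: float = 8.0) -> float:
    """Largest candidate drop whose marginal lift per extra point of price
    still clears min_marginal percentage points."""
    drops = sorted(drops)
    best = drops[0]
    for prev, cur in zip(drops, drops[1:]):
        marginal = (projected_lift(cur, elasticity)
                    - projected_lift(prev, elasticity)) / (cur - prev)
        if marginal >= min_marginal:
            best = cur
    return best
```

Under these assumed numbers, comparing the −0.5%, −1%, −2.5%, and −5% scenarios would pin the sweet spot at −2.5%, since the step to −5% no longer clears the marginal-lift threshold.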


TourEcho Debuts Broker SSO Bridge with Zero-Touch Provisioning and Tamper-Evident Audit Trails

Imagined Press Article

Austin, TX — TourEcho today launched Broker SSO Bridge, an enterprise-grade identity and administration layer that brings one-click onboarding, automatic deprovisioning, and defensible audit trails to brokerages of any size. With IdP Templates, Role Blueprints, SCIM AutoSync, JIT Join, Group Mirror, Audit Ledger, and Breakglass Keys, operations and security teams can deploy TourEcho across offices in minutes—not weeks—while keeping least-privilege access and clean rosters by default.

“Adoption rises when access is simple and safe,” said Maya Chen, co-founder and CEO of TourEcho. “Broker SSO Bridge lets leaders roll out TourEcho confidently across new markets and teams, knowing roles, rosters, and logs will match their standards on day one.”

What’s included and how it streamlines ops

- IdP Templates: Prebuilt SAML/OIDC configurations for Okta, Azure AD, Google, and OneLogin come with a guided wizard, copy-paste claim maps, and one-click metadata exchange, cutting setup from hours to minutes.
- Role Blueprints: Opinionated, least-privilege role bundles map to TourEcho personas—Agent, Listing Coordinator, Broker-Owner, Compliance Admin—so new users land with the right permissions by default.
- SCIM AutoSync: Continuous provisioning creates, updates, or deactivates users the moment they change in the IdP. Titles, managers, offices, and license counts mirror the source of truth, reclaiming seats automatically on departure and preserving audit history.
- JIT Join: For pilots or smaller offices without SCIM, just-in-time user creation on first SSO places teammates into default roles from Role Blueprints with optional invite approval—zero tickets required.
- Group Mirror: Map IdP groups to TourEcho teams, offices, and listing scopes. When someone moves groups, access, notification settings, and assignment queues update automatically, eliminating drift.
- Audit Ledger: Tamper-evident logs capture sign-ins, role grants, SCIM events, and admin actions with exports to SIEM/CSV and retention policies, supporting audits and swift resolution of disputes.
- Breakglass Keys: Time-boxed emergency access keeps showings and seller visibility running during IdP outages, requiring approver sign-off and logging every action for post-incident review.

For expansion leaders, the effect is immediate. “We used Broker SSO Bridge to standardize our rollout playbook across four regions,” said Ethan R., a franchise operations lead. “Provisioning is no longer a project—it’s automatic. We can prove adoption with clean portfolio metrics and focus training time on workflows, not logins.”

Data and security teams gain confidence in controls. “Role Blueprints and Group Mirror let us enforce least privilege and clean separation by office,” said Ivy N., a brokerage data specialist. “Audit Ledger gives us the who, what, and when—plus exports for our SIEM—so compliance reviews are straightforward.”

For admins on the front lines, risk and rework drop. “With SCIM AutoSync and JIT Join, we stopped chasing seats and permissions,” said Dana W., a compliance admin. “Departures deprovision automatically, pilots spin up safely, and when we need to break glass during an outage, the system documents every step. It’s the best of both control and continuity.”

Tight integration with the TourEcho workflow

Broker SSO Bridge underpins the rest of TourEcho. Verified QR feedback, Issue Heatmaps, and Pricing Intelligence rely on clean identities and scoped access. When an agent moves offices, Group Mirror shifts their listing scopes and notification preferences automatically. As teams grow or restructure, SCIM keeps rosters accurate in real time, and Audit Ledger anchors every change with a durable, exportable record.

Why it matters now

- Brokerages need secure, scalable systems that keep up with hiring, turnover, and expansion without bogging down agents or IT.
- Least-privilege defaults and tamper-evident logs reduce risk while speeding audits and vendor reviews.
- Zero-touch provisioning pairs speed with consistency, letting leaders spend time on coaching and client outcomes instead of access tickets.

Availability and packaging

Broker SSO Bridge is available today for TourEcho Enterprise customers, with IdP Templates and Role Blueprints included at no additional cost. SCIM AutoSync, Group Mirror, and Audit Ledger are part of the Enterprise administration add-on; JIT Join is available in Pro and Enterprise, and Breakglass Keys is included in Enterprise. A guided deployment pack with change-management checklists and sandbox credentials is available for new rollouts at no charge.

About TourEcho

TourEcho is the lightweight showing-management platform that turns at-the-door QR scans into verified, AI-summarized sentiment and room-level objections. Teams replace scattered texts with clear readouts, convert feedback into tasks, and coach pricing with confidence—cutting follow-up time and shaving days off market.

Press and analyst contact

- Media: press@tourecho.com
- Partnerships: partners@tourecho.com
- Website: www.tourecho.com
- Phone: +1 (512) 555-0174

Features, packaging, and timelines are subject to change.


